Nov 16 16:51:19 ctl01 systemd-modules-load[450]: Inserted module 'iscsi_tcp'
Nov 16 16:51:19 ctl01 systemd-modules-load[450]: Inserted module 'ib_iser'
Nov 16 16:51:19 ctl01 systemd[1]: Started Create Static Device Nodes in /dev.
Nov 16 16:51:19 ctl01 systemd[1]: Started Set the console keyboard layout.
Nov 16 16:51:19 ctl01 systemd[1]: Starting udev Kernel Device Manager...
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Local File Systems (Pre).
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Linux version 4.15.0-70-generic (buildd@lgw01-amd64-055) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 (Ubuntu 4.15.0-70.79-generic 4.15.18)
Nov 16 16:51:19 ctl01 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:51:19 ctl01 kernel: [    0.000000] KERNEL supported cpus:
Nov 16 16:51:19 ctl01 systemd[1]: Started Apply Kernel Variables.
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   Intel GenuineIntel
Nov 16 16:51:19 ctl01 systemd[1]: Started udev Coldplug all Devices.
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   AMD AuthenticAMD
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   Centaur CentaurHauls
Nov 16 16:51:19 ctl01 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 16 16:51:19 ctl01 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 16 16:51:19 ctl01 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 16 16:51:19 ctl01 kernel: [    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 16 16:51:19 ctl01 kernel: [    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 16 16:51:19 ctl01 kernel: [    0.000000] e820: BIOS-provided physical RAM map:
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdefff] usable
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x00000000bffdf000-0x00000000bfffffff] reserved
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x00000003bfffffff] usable
Nov 16 16:51:19 ctl01 kernel: [    0.000000] NX (Execute Disable) protection: active
Nov 16 16:51:19 ctl01 kernel: [    0.000000] SMBIOS 2.8 present.
Nov 16 16:51:19 ctl01 kernel: [    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Hypervisor detected: KVM
Nov 16 16:51:19 ctl01 kernel: [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 16 16:51:19 ctl01 kernel: [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 16 16:51:19 ctl01 kernel: [    0.000000] e820: last_pfn = 0x3c0000 max_arch_pfn = 0x400000000
Nov 16 16:51:19 ctl01 kernel: [    0.000000] MTRR default type: write-back
Nov 16 16:51:19 ctl01 kernel: [    0.000000] MTRR fixed ranges enabled:
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   00000-9FFFF write-back
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   A0000-BFFFF uncachable
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   C0000-FFFFF write-protect
Nov 16 16:51:19 ctl01 kernel: [    0.000000] MTRR variable ranges enabled:
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   0 base 00C0000000 mask FFC0000000 uncachable
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   1 disabled
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   2 disabled
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   3 disabled
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   4 disabled
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   5 disabled
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   6 disabled
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   7 disabled
Nov 16 16:51:19 ctl01 kernel: [    0.000000] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 16 16:51:19 ctl01 kernel: [    0.000000] e820: last_pfn = 0xbffdf max_arch_pfn = 0x400000000
Nov 16 16:51:19 ctl01 kernel: [    0.000000] found SMP MP-table at [mem 0x000f6560-0x000f656f]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Scanning 1 areas for low memory corruption
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Using GB pages for direct mapping
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BRK [0x1c4f41000, 0x1c4f41fff] PGTABLE
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BRK [0x1c4f42000, 0x1c4f42fff] PGTABLE
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BRK [0x1c4f43000, 0x1c4f43fff] PGTABLE
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BRK [0x1c4f44000, 0x1c4f44fff] PGTABLE
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BRK [0x1c4f45000, 0x1c4f45fff] PGTABLE
Nov 16 16:51:19 ctl01 kernel: [    0.000000] BRK [0x1c4f46000, 0x1c4f46fff] PGTABLE
Nov 16 16:51:19 ctl01 kernel: [    0.000000] RAMDISK: [mem 0x35a87000-0x36d3afff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: Early table checksum verification disabled
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: RSDP 0x00000000000F6510 000014 (v00 BOCHS )
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: RSDT 0x00000000BFFE14CC 000030 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: FACP 0x00000000BFFE0854 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: DSDT 0x00000000BFFDFC80 000BD4 (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: FACS 0x00000000BFFDFC40 000040
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: SSDT 0x00000000BFFE08C8 000B54 (v01 BOCHS  BXPCSSDT 00000001 BXPC 00000001)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: APIC 0x00000000BFFE141C 0000B0 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:51:19 ctl01 kernel: [    0.000000] No NUMA configuration found
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Faking a node at [mem 0x0000000000000000-0x00000003bfffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] NODE_DATA(0) allocated [mem 0x3bffd5000-0x3bfffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] kvm-clock: cpu 0, msr 3:bff54001, primary cpu clock
Nov 16 16:51:19 ctl01 kernel: [    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 16 16:51:19 ctl01 kernel: [    0.000000] kvm-clock: using sched offset of 11193346750 cycles
Nov 16 16:51:19 ctl01 kernel: [    0.000000] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Zone ranges:
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   Normal   [mem 0x0000000100000000-0x00000003bfffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   Device   empty
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Movable zone start for each node
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Early memory node ranges
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   node   0: [mem 0x0000000000100000-0x00000000bffdefff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   node   0: [mem 0x0000000100000000-0x00000003bfffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Reserved but unavailable: 98 pages
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x00000003bfffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] On node 0 totalpages: 3669885
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   DMA zone: 64 pages used for memmap
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   DMA zone: 21 pages reserved
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   DMA zone: 3998 pages, LIFO batch:0
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   DMA32 zone: 12224 pages used for memmap
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   DMA32 zone: 782303 pages, LIFO batch:31
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   Normal zone: 45056 pages used for memmap
Nov 16 16:51:19 ctl01 kernel: [    0.000000]   Normal zone: 2883584 pages, LIFO batch:31
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: PM-Timer IO Port: 0x608
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 16 16:51:19 ctl01 kernel: [    0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: IRQ0 used by override.
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: IRQ5 used by override.
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: IRQ9 used by override.
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: IRQ10 used by override.
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ACPI: IRQ11 used by override.
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Using ACPI (MADT) for SMP configuration information
Nov 16 16:51:19 ctl01 kernel: [    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xbffdf000-0xbfffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Booting paravirtualized kernel on KVM
Nov 16 16:51:19 ctl01 kernel: [    0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
Nov 16 16:51:19 ctl01 kernel: [    0.000000] random: get_random_bytes called from start_kernel+0x99/0x4fd with crng_init=0
Nov 16 16:51:19 ctl01 kernel: [    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 16 16:51:19 ctl01 kernel: [    0.000000] percpu: Embedded 45 pages/cpu s147456 r8192 d28672 u262144
Nov 16 16:51:19 ctl01 kernel: [    0.000000] pcpu-alloc: s147456 r8192 d28672 u262144 alloc=1*2097152
Nov 16 16:51:19 ctl01 kernel: [    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 16 16:51:19 ctl01 kernel: [    0.000000] KVM setup async PF for cpu 0
Nov 16 16:51:19 ctl01 kernel: [    0.000000] kvm-stealtime: cpu 0, msr 3bfc23040
Nov 16 16:51:19 ctl01 kernel: [    0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 3612520
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Policy zone: Normal
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Calgary: detecting Calgary via BIOS EBDA area
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Memory: 14334832K/14679540K available (12300K kernel code, 2481K rwdata, 4264K rodata, 2432K init, 2388K bss, 344708K reserved, 0K cma-reserved)
Nov 16 16:51:19 ctl01 kernel: [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 16 16:51:19 ctl01 kernel: [    0.000000] Kernel/User page tables isolation: enabled
Nov 16 16:51:19 ctl01 kernel: [    0.000000] ftrace: allocating 39315 entries in 154 pages
Nov 16 16:51:19 ctl01 kernel: [    0.004000] Hierarchical RCU implementation.
Nov 16 16:51:19 ctl01 kernel: [    0.004000] 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 16 16:51:19 ctl01 kernel: [    0.004000] 	Tasks RCU enabled.
Nov 16 16:51:19 ctl01 kernel: [    0.004000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 16 16:51:19 ctl01 kernel: [    0.004000] NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 16 16:51:19 ctl01 kernel: [    0.004000] Console: colour VGA+ 80x25
Nov 16 16:51:19 ctl01 kernel: [    0.004000] console [tty1] enabled
Nov 16 16:51:19 ctl01 kernel: [    0.004000] console [ttyS0] enabled
Nov 16 16:51:19 ctl01 kernel: [    0.004000] ACPI: Core revision 20170831
Nov 16 16:51:19 ctl01 kernel: [    0.004000] ACPI: 2 ACPI AML tables successfully acquired and loaded
Nov 16 16:51:19 ctl01 kernel: [    0.004004] APIC: Switch to symmetric I/O mode setup
Nov 16 16:51:19 ctl01 kernel: [    0.005195] x2apic enabled
Nov 16 16:51:19 ctl01 kernel: [    0.006048] Switched APIC routing to physical x2apic.
Nov 16 16:51:19 ctl01 kernel: [    0.008000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 16 16:51:19 ctl01 kernel: [    0.008000] tsc: Detected 2799.994 MHz processor
Nov 16 16:51:19 ctl01 kernel: [    0.008000] Calibrating delay loop (skipped) preset value.. 5599.98 BogoMIPS (lpj=11199976)
Nov 16 16:51:19 ctl01 kernel: [    0.008000] pid_max: default: 32768 minimum: 301
Nov 16 16:51:19 ctl01 kernel: [    0.008026] Security Framework initialized
Nov 16 16:51:19 ctl01 kernel: [    0.008834] Yama: becoming mindful.
Nov 16 16:51:19 ctl01 kernel: [    0.009555] AppArmor: AppArmor initialized
Nov 16 16:51:19 ctl01 kernel: [    0.015091] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.018101] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.020009] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.021679] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.023302] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 16 16:51:19 ctl01 kernel: [    0.024002] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 16 16:51:19 ctl01 kernel: [    0.025099] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 16 16:51:19 ctl01 kernel: [    0.026662] Spectre V2 : Mitigation: Full generic retpoline
Nov 16 16:51:19 ctl01 kernel: [    0.028001] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 16 16:51:19 ctl01 kernel: [    0.029530] Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 16 16:51:19 ctl01 kernel: [    0.030797] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 16 16:51:19 ctl01 kernel: [    0.032002] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 16 16:51:19 ctl01 kernel: [    0.033766] MDS: Mitigation: Clear CPU buffers
Nov 16 16:51:19 ctl01 kernel: [    0.039796] Freeing SMP alternatives memory: 36K
Nov 16 16:51:19 ctl01 kernel: [    0.041657] TSC deadline timer enabled
Nov 16 16:51:19 ctl01 kernel: [    0.041660] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (family: 0x6, model: 0x3e, stepping: 0x4)
Nov 16 16:51:19 ctl01 kernel: [    0.043494] Performance Events: IvyBridge events, Intel PMU driver.
Nov 16 16:51:19 ctl01 kernel: [    0.044000] ... version:                2
Nov 16 16:51:19 ctl01 kernel: [    0.044000] ... bit width:              48
Nov 16 16:51:19 ctl01 kernel: [    0.044003] ... generic registers:      4
Nov 16 16:51:19 ctl01 kernel: [    0.044781] ... value mask:             0000ffffffffffff
Nov 16 16:51:19 ctl01 kernel: [    0.045769] ... max period:             000000007fffffff
Nov 16 16:51:19 ctl01 kernel: [    0.046830] ... fixed-purpose events:   3
Nov 16 16:51:19 ctl01 kernel: [    0.047605] ... event mask:             000000070000000f
Nov 16 16:51:19 ctl01 kernel: [    0.048038] Hierarchical SRCU implementation.
Nov 16 16:51:19 ctl01 kernel: [    0.049439] smp: Bringing up secondary CPUs ...
Nov 16 16:51:19 ctl01 kernel: [    0.050399] x86: Booting SMP configuration:
Nov 16 16:51:19 ctl01 kernel: [    0.051216] .... node  #0, CPUs:      #1
Nov 16 16:51:19 ctl01 kernel: [    0.004000] kvm-clock: cpu 1, msr 3:bff54041, secondary cpu clock
Nov 16 16:51:19 ctl01 kernel: [    0.056019] KVM setup async PF for cpu 1
Nov 16 16:51:19 ctl01 kernel: [    0.056786] kvm-stealtime: cpu 1, msr 3bfc63040
Nov 16 16:51:19 ctl01 kernel: [    0.057680]  #2
Nov 16 16:51:19 ctl01 kernel: [    0.004000] kvm-clock: cpu 2, msr 3:bff54081, secondary cpu clock
Nov 16 16:51:19 ctl01 kernel: [    0.060035] KVM setup async PF for cpu 2
Nov 16 16:51:19 ctl01 kernel: [    0.061009] kvm-stealtime: cpu 2, msr 3bfca3040
Nov 16 16:51:19 ctl01 kernel: [    0.062140]  #3
Nov 16 16:51:19 ctl01 kernel: [    0.004000] kvm-clock: cpu 3, msr 3:bff540c1, secondary cpu clock
Nov 16 16:51:19 ctl01 kernel: [    0.064019] KVM setup async PF for cpu 3
Nov 16 16:51:19 ctl01 kernel: [    0.064789] kvm-stealtime: cpu 3, msr 3bfce3040
Nov 16 16:51:19 ctl01 kernel: [    0.065677]  #4
Nov 16 16:51:19 ctl01 kernel: [    0.004000] kvm-clock: cpu 4, msr 3:bff54101, secondary cpu clock
Nov 16 16:51:19 ctl01 kernel: [    0.068027] KVM setup async PF for cpu 4
Nov 16 16:51:19 ctl01 kernel: [    0.068788] kvm-stealtime: cpu 4, msr 3bfd23040
Nov 16 16:51:19 ctl01 kernel: [    0.069692]  #5
Nov 16 16:51:19 ctl01 kernel: [    0.004000] kvm-clock: cpu 5, msr 3:bff54141, secondary cpu clock
Nov 16 16:51:19 ctl01 kernel: [    0.072027] KVM setup async PF for cpu 5
Nov 16 16:51:19 ctl01 kernel: [    0.072988] kvm-stealtime: cpu 5, msr 3bfd63040
Nov 16 16:51:19 ctl01 kernel: [    0.074095]  #6
Nov 16 16:51:19 ctl01 kernel: [    0.004000] kvm-clock: cpu 6, msr 3:bff54181, secondary cpu clock
Nov 16 16:51:19 ctl01 kernel: [    0.076025] KVM setup async PF for cpu 6
Nov 16 16:51:19 ctl01 kernel: [    0.076995] kvm-stealtime: cpu 6, msr 3bfda3040
Nov 16 16:51:19 ctl01 kernel: [    0.078115]  #7
Nov 16 16:51:19 ctl01 kernel: [    0.004000] kvm-clock: cpu 7, msr 3:bff541c1, secondary cpu clock
Nov 16 16:51:19 ctl01 kernel: [    0.084024] KVM setup async PF for cpu 7
Nov 16 16:51:19 ctl01 kernel: [    0.084988] kvm-stealtime: cpu 7, msr 3bfde3040
Nov 16 16:51:19 ctl01 kernel: [    0.086106] smp: Brought up 1 node, 8 CPUs
Nov 16 16:51:19 ctl01 kernel: [    0.088005] smpboot: Max logical packages: 8
Nov 16 16:51:19 ctl01 kernel: [    0.088846] smpboot: Total of 8 processors activated (44799.90 BogoMIPS)
Nov 16 16:51:19 ctl01 kernel: [    0.090876] devtmpfs: initialized
Nov 16 16:51:19 ctl01 kernel: [    0.090876] x86/mm: Memory block size: 128MB
Nov 16 16:51:19 ctl01 kernel: [    0.092786] evm: security.selinux
Nov 16 16:51:19 ctl01 kernel: [    0.093475] evm: security.SMACK64
Nov 16 16:51:19 ctl01 kernel: [    0.094161] evm: security.SMACK64EXEC
Nov 16 16:51:19 ctl01 kernel: [    0.094902] evm: security.SMACK64TRANSMUTE
Nov 16 16:51:19 ctl01 kernel: [    0.095707] evm: security.SMACK64MMAP
Nov 16 16:51:19 ctl01 kernel: [    0.096005] evm: security.apparmor
Nov 16 16:51:19 ctl01 kernel: [    0.096710] evm: security.ima
Nov 16 16:51:19 ctl01 kernel: [    0.097331] evm: security.capability
Nov 16 16:51:19 ctl01 kernel: [    0.098087] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
Nov 16 16:51:19 ctl01 kernel: [    0.100023] futex hash table entries: 2048 (order: 5, 131072 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.101295] pinctrl core: initialized pinctrl subsystem
Nov 16 16:51:19 ctl01 kernel: [    0.102525] RTC time: 16:51:06, date: 11/16/19
Nov 16 16:51:19 ctl01 kernel: [    0.104140] NET: Registered protocol family 16
Nov 16 16:51:19 ctl01 kernel: [    0.105117] audit: initializing netlink subsys (disabled)
Nov 16 16:51:19 ctl01 kernel: [    0.106204] audit: type=2000 audit(1573923066.806:1): state=initialized audit_enabled=0 res=1
Nov 16 16:51:19 ctl01 kernel: [    0.108011] cpuidle: using governor ladder
Nov 16 16:51:19 ctl01 kernel: [    0.108893] cpuidle: using governor menu
Nov 16 16:51:19 ctl01 kernel: [    0.109941] ACPI: bus type PCI registered
Nov 16 16:51:19 ctl01 kernel: [    0.110804] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 16 16:51:19 ctl01 kernel: [    0.112116] PCI: Using configuration type 1 for base access
Nov 16 16:51:19 ctl01 kernel: [    0.113224] core: PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off
Nov 16 16:51:19 ctl01 kernel: [    0.116088] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 16 16:51:19 ctl01 kernel: [    0.117368] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 16 16:51:19 ctl01 kernel: [    0.118747] ACPI: Added _OSI(Module Device)
Nov 16 16:51:19 ctl01 kernel: [    0.120005] ACPI: Added _OSI(Processor Device)
Nov 16 16:51:19 ctl01 kernel: [    0.120907] ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 16 16:51:19 ctl01 kernel: [    0.121852] ACPI: Added _OSI(Processor Aggregator Device)
Nov 16 16:51:19 ctl01 kernel: [    0.122921] ACPI: Added _OSI(Linux-Dell-Video)
Nov 16 16:51:19 ctl01 kernel: [    0.123790] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 16 16:51:19 ctl01 kernel: [    0.124004] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 16 16:51:19 ctl01 kernel: [    0.126399] ACPI: Interpreter enabled
Nov 16 16:51:19 ctl01 kernel: [    0.127165] ACPI: (supports S0 S5)
Nov 16 16:51:19 ctl01 kernel: [    0.127865] ACPI: Using IOAPIC for interrupt routing
Nov 16 16:51:19 ctl01 kernel: [    0.128015] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 16 16:51:19 ctl01 kernel: [    0.130277] ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 16 16:51:19 ctl01 kernel: [    0.134278] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 16 16:51:19 ctl01 kernel: [    0.135496] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
Nov 16 16:51:19 ctl01 kernel: [    0.136009] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
Nov 16 16:51:19 ctl01 kernel: [    0.137247] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 16 16:51:19 ctl01 kernel: [    0.139753] acpiphp: Slot [3] registered
Nov 16 16:51:19 ctl01 kernel: [    0.140058] acpiphp: Slot [4] registered
Nov 16 16:51:19 ctl01 kernel: [    0.140908] acpiphp: Slot [5] registered
Nov 16 16:51:19 ctl01 kernel: [    0.141759] acpiphp: Slot [6] registered
Nov 16 16:51:19 ctl01 kernel: [    0.142600] acpiphp: Slot [7] registered
Nov 16 16:51:19 ctl01 kernel: [    0.143478] acpiphp: Slot [9] registered
Nov 16 16:51:19 ctl01 kernel: [    0.144042] acpiphp: Slot [10] registered
Nov 16 16:51:19 ctl01 kernel: [    0.144899] acpiphp: Slot [11] registered
Nov 16 16:51:19 ctl01 kernel: [    0.145786] acpiphp: Slot [12] registered
Nov 16 16:51:19 ctl01 kernel: [    0.146641] acpiphp: Slot [13] registered
Nov 16 16:51:19 ctl01 kernel: [    0.147500] acpiphp: Slot [14] registered
Nov 16 16:51:19 ctl01 kernel: [    0.148041] acpiphp: Slot [15] registered
Nov 16 16:51:19 ctl01 kernel: [    0.148912] acpiphp: Slot [16] registered
Nov 16 16:51:19 ctl01 kernel: [    0.149781] acpiphp: Slot [17] registered
Nov 16 16:51:19 ctl01 kernel: [    0.150650] acpiphp: Slot [18] registered
Nov 16 16:51:19 ctl01 kernel: [    0.151494] acpiphp: Slot [19] registered
Nov 16 16:51:19 ctl01 kernel: [    0.152041] acpiphp: Slot [20] registered
Nov 16 16:51:19 ctl01 kernel: [    0.152922] acpiphp: Slot [21] registered
Nov 16 16:51:19 ctl01 kernel: [    0.153780] acpiphp: Slot [22] registered
Nov 16 16:51:19 ctl01 kernel: [    0.154663] acpiphp: Slot [23] registered
Nov 16 16:51:19 ctl01 kernel: [    0.155500] acpiphp: Slot [24] registered
Nov 16 16:51:19 ctl01 kernel: [    0.156044] acpiphp: Slot [25] registered
Nov 16 16:51:19 ctl01 kernel: [    0.156913] acpiphp: Slot [26] registered
Nov 16 16:51:19 ctl01 kernel: [    0.157786] acpiphp: Slot [27] registered
Nov 16 16:51:19 ctl01 kernel: [    0.158645] acpiphp: Slot [28] registered
Nov 16 16:51:19 ctl01 kernel: [    0.159478] acpiphp: Slot [29] registered
Nov 16 16:51:19 ctl01 kernel: [    0.160041] acpiphp: Slot [30] registered
Nov 16 16:51:19 ctl01 kernel: [    0.160912] acpiphp: Slot [31] registered
Nov 16 16:51:19 ctl01 kernel: [    0.161757] PCI host bridge to bus 0000:00
Nov 16 16:51:19 ctl01 kernel: [    0.162570] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 16 16:51:19 ctl01 kernel: [    0.163845] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 16 16:51:19 ctl01 kernel: [    0.164004] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 16 16:51:19 ctl01 kernel: [    0.165457] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 16 16:51:19 ctl01 kernel: [    0.166897] pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 16 16:51:19 ctl01 kernel: [    0.167974] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 16 16:51:19 ctl01 kernel: [    0.168472] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 16 16:51:19 ctl01 kernel: [    0.169132] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 16 16:51:19 ctl01 kernel: [    0.176839] pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Nov 16 16:51:19 ctl01 kernel: [    0.180927] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Nov 16 16:51:19 ctl01 kernel: [    0.182266] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Nov 16 16:51:19 ctl01 kernel: [    0.183545] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Nov 16 16:51:19 ctl01 kernel: [    0.184004] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Nov 16 16:51:19 ctl01 kernel: [    0.185404] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 16 16:51:19 ctl01 kernel: [    0.185917] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 16 16:51:19 ctl01 kernel: [    0.187328] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 16 16:51:19 ctl01 kernel: [    0.188255] pci 0000:00:02.0: [1013:00b8] type 00 class 0x030000
Nov 16 16:51:19 ctl01 kernel: [    0.191089] pci 0000:00:02.0: reg 0x10: [mem 0xfc000000-0xfdffffff pref]
Nov 16 16:51:19 ctl01 kernel: [    0.193196] pci 0000:00:02.0: reg 0x14: [mem 0xfebd0000-0xfebd0fff]
Nov 16 16:51:19 ctl01 kernel: [    0.205190] pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 16 16:51:19 ctl01 kernel: [    0.205430] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:19 ctl01 kernel: [    0.207168] pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc05f]
Nov 16 16:51:19 ctl01 kernel: [    0.209020] pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 16 16:51:19 ctl01 kernel: [    0.219448] pci 0000:00:03.0: reg 0x30: [mem 0xfeac0000-0xfeafffff pref]
Nov 16 16:51:19 ctl01 kernel: [    0.219801] pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:19 ctl01 kernel: [    0.220821] pci 0000:00:04.0: reg 0x10: [io  0xc060-0xc07f]
Nov 16 16:51:19 ctl01 kernel: [    0.223790] pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 16 16:51:19 ctl01 kernel: [    0.231739] pci 0000:00:04.0: reg 0x30: [mem 0xfeb00000-0xfeb3ffff pref]
Nov 16 16:51:19 ctl01 kernel: [    0.232116] pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:19 ctl01 kernel: [    0.236024] pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Nov 16 16:51:19 ctl01 kernel: [    0.237681] pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 16 16:51:19 ctl01 kernel: [    0.247706] pci 0000:00:05.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
Nov 16 16:51:19 ctl01 kernel: [    0.248069] pci 0000:00:06.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:19 ctl01 kernel: [    0.249870] pci 0000:00:06.0: reg 0x10: [io  0xc0a0-0xc0bf]
Nov 16 16:51:19 ctl01 kernel: [    0.251527] pci 0000:00:06.0: reg 0x14: [mem 0xfebd4000-0xfebd4fff]
Nov 16 16:51:19 ctl01 kernel: [    0.260824] pci 0000:00:06.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 16 16:51:19 ctl01 kernel: [    0.261212] pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 16 16:51:19 ctl01 kernel: [    0.263159] pci 0000:00:07.0: reg 0x10: [io  0xc000-0xc03f]
Nov 16 16:51:19 ctl01 kernel: [    0.264912] pci 0000:00:07.0: reg 0x14: [mem 0xfebd5000-0xfebd5fff]
Nov 16 16:51:19 ctl01 kernel: [    0.274158] pci 0000:00:08.0: [8086:2934] type 00 class 0x0c0300
Nov 16 16:51:19 ctl01 kernel: [    0.279842] pci 0000:00:08.0: reg 0x20: [io  0xc0c0-0xc0df]
Nov 16 16:51:19 ctl01 kernel: [    0.281465] pci 0000:00:08.1: [8086:2935] type 00 class 0x0c0300
Nov 16 16:51:19 ctl01 kernel: [    0.285674] pci 0000:00:08.1: reg 0x20: [io  0xc0e0-0xc0ff]
Nov 16 16:51:19 ctl01 kernel: [    0.287637] pci 0000:00:08.2: [8086:2936] type 00 class 0x0c0300
Nov 16 16:51:19 ctl01 kernel: [    0.292833] pci 0000:00:08.2: reg 0x20: [io  0xc100-0xc11f]
Nov 16 16:51:19 ctl01 kernel: [    0.294828] pci 0000:00:08.7: [8086:293a] type 00 class 0x0c0320
Nov 16 16:51:19 ctl01 kernel: [    0.295763] pci 0000:00:08.7: reg 0x10: [mem 0xfebd6000-0xfebd6fff]
Nov 16 16:51:19 ctl01 kernel: [    0.301522] pci 0000:00:09.0: [1af4:1002] type 00 class 0x00ff00
Nov 16 16:51:19 ctl01 kernel: [    0.302500] pci 0000:00:09.0: reg 0x10: [io  0xc120-0xc13f]
Nov 16 16:51:19 ctl01 kernel: [    0.308161] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
Nov 16 16:51:19 ctl01 kernel: [    0.309435] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
Nov 16 16:51:19 ctl01 kernel: [    0.311834] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
Nov 16 16:51:19 ctl01 kernel: [    0.312113] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
Nov 16 16:51:19 ctl01 kernel: [    0.313252] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
Nov 16 16:51:19 ctl01 kernel: [    0.315101] SCSI subsystem initialized
Nov 16 16:51:19 ctl01 kernel: [    0.316016] libata version 3.00 loaded.
Nov 16 16:51:19 ctl01 kernel: [    0.316034] pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 16 16:51:19 ctl01 kernel: [    0.317195] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 16 16:51:19 ctl01 kernel: [    0.318772] pci 0000:00:02.0: vgaarb: bridge control possible
Nov 16 16:51:19 ctl01 kernel: [    0.319865] vgaarb: loaded
Nov 16 16:51:19 ctl01 kernel: [    0.320027] ACPI: bus type USB registered
Nov 16 16:51:19 ctl01 kernel: [    0.320849] usbcore: registered new interface driver usbfs
Nov 16 16:51:19 ctl01 kernel: [    0.321911] usbcore: registered new interface driver hub
Nov 16 16:51:19 ctl01 kernel: [    0.322991] usbcore: registered new device driver usb
Nov 16 16:51:19 ctl01 kernel: [    0.324077] EDAC MC: Ver: 3.0.0
Nov 16 16:51:19 ctl01 kernel: [    0.325086] PCI: Using ACPI for IRQ routing
Nov 16 16:51:19 ctl01 kernel: [    0.325086] PCI: pci_cache_line_size set to 64 bytes
Nov 16 16:51:19 ctl01 kernel: [    0.325124] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 16 16:51:19 ctl01 kernel: [    0.325125] e820: reserve RAM buffer [mem 0xbffdf000-0xbfffffff]
Nov 16 16:51:19 ctl01 kernel: [    0.325211] NetLabel: Initializing
Nov 16 16:51:19 ctl01 kernel: [    0.325938] NetLabel:  domain hash size = 128
Nov 16 16:51:19 ctl01 kernel: [    0.328003] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 16 16:51:19 ctl01 kernel: [    0.329116] NetLabel:  unlabeled traffic allowed by default
Nov 16 16:51:19 ctl01 kernel: [    0.330199] clocksource: Switched to clocksource kvm-clock
Nov 16 16:51:19 ctl01 kernel: [    0.338656] VFS: Disk quotas dquot_6.6.0
Nov 16 16:51:19 ctl01 kernel: [    0.339501] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.340816] AppArmor: AppArmor Filesystem Enabled
Nov 16 16:51:19 ctl01 kernel: [    0.341734] pnp: PnP ACPI init
Nov 16 16:51:19 ctl01 kernel: [    0.342406] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
Nov 16 16:51:19 ctl01 kernel: [    0.342440] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
Nov 16 16:51:19 ctl01 kernel: [    0.342468] pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
Nov 16 16:51:19 ctl01 kernel: [    0.342494] pnp 00:03: [dma 2]
Nov 16 16:51:19 ctl01 kernel: [    0.342503] pnp 00:03: Plug and Play ACPI device, IDs PNP0700 (active)
Nov 16 16:51:19 ctl01 kernel: [    0.342580] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
Nov 16 16:51:19 ctl01 kernel: [    0.342800] pnp: PnP ACPI: found 5 devices
Nov 16 16:51:19 ctl01 kernel: [    0.349816] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 16 16:51:19 ctl01 kernel: [    0.351424] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 16 16:51:19 ctl01 kernel: [    0.351425] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 16 16:51:19 ctl01 kernel: [    0.351426] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 16 16:51:19 ctl01 kernel: [    0.351427] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 16 16:51:19 ctl01 kernel: [    0.351479] NET: Registered protocol family 2
Nov 16 16:51:19 ctl01 kernel: [    0.352911] TCP established hash table entries: 131072 (order: 8, 1048576 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.354435] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.355708] TCP: Hash tables configured (established 131072 bind 65536)
Nov 16 16:51:19 ctl01 kernel: [    0.357322] UDP hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.358480] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:51:19 ctl01 kernel: [    0.359722] NET: Registered protocol family 1
Nov 16 16:51:19 ctl01 kernel: [    0.360572] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 16 16:51:19 ctl01 kernel: [    0.361677] pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 16 16:51:19 ctl01 kernel: [    0.362750] pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 16 16:51:19 ctl01 kernel: [    0.363922] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 16 16:51:19 ctl01 kernel: [    0.388182] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
Nov 16 16:51:19 ctl01 kernel: [    0.434647] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
Nov 16 16:51:19 ctl01 kernel: [    0.481085] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
Nov 16 16:51:19 ctl01 kernel: [    0.527835] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
Nov 16 16:51:19 ctl01 kernel: [    0.551864] PCI: CLS 0 bytes, default 64
Nov 16 16:51:19 ctl01 kernel: [    0.551894] Unpacking initramfs...
Nov 16 16:51:19 ctl01 kernel: [    0.768479] Freeing initrd memory: 19152K
Nov 16 16:51:19 ctl01 kernel: [    0.769333] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 16 16:51:19 ctl01 kernel: [    0.770546] software IO TLB: mapped [mem 0xbbfdf000-0xbffdf000] (64MB)
Nov 16 16:51:19 ctl01 kernel: [    0.771757] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3aeaff3, max_idle_ns: 440795255742 ns
Nov 16 16:51:19 ctl01 kernel: [    0.773577] Scanning for low memory corruption every 60 seconds
Nov 16 16:51:19 ctl01 kernel: [    0.775333] Initialise system trusted keyrings
Nov 16 16:51:19 ctl01 kernel: [    0.776209] Key type blacklist registered
Nov 16 16:51:19 ctl01 kernel: [    0.777065] workingset: timestamp_bits=36 max_order=22 bucket_order=0
Nov 16 16:51:19 ctl01 kernel: [    0.779074] zbud: loaded
Nov 16 16:51:19 ctl01 kernel: [    0.780091] squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 16 16:51:19 ctl01 kernel: [    0.781394] fuse init (API version 7.26)
Nov 16 16:51:19 ctl01 kernel: [    0.784495] Key type asymmetric registered
Nov 16 16:51:19 ctl01 kernel: [    0.785299] Asymmetric key parser 'x509' registered
Nov 16 16:51:19 ctl01 kernel: [    0.786254] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 16 16:51:19 ctl01 kernel: [    0.787706] io scheduler noop registered
Nov 16 16:51:19 ctl01 kernel: [    0.788564] io scheduler deadline registered
Nov 16 16:51:19 ctl01 kernel: [    0.789413] io scheduler cfq registered (default)
Nov 16 16:51:19 ctl01 kernel: [    0.790824] intel_idle: Please enable MWAIT in BIOS SETUP
Nov 16 16:51:19 ctl01 kernel: [    0.790895] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 16 16:51:19 ctl01 kernel: [    0.792406] ACPI: Power Button [PWRF]
Nov 16 16:51:19 ctl01 kernel: [    0.815956] virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:19 ctl01 kernel: [    0.840479] virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:19 ctl01 kernel: [    0.864842] virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:19 ctl01 kernel: [    0.889736] virtio-pci 0000:00:06.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:19 ctl01 kernel: [    0.914134] virtio-pci 0000:00:07.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:19 ctl01 kernel: [    0.938340] virtio-pci 0000:00:09.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:19 ctl01 kernel: [    0.940556] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
Nov 16 16:51:19 ctl01 kernel: [    0.966730] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 16 16:51:19 ctl01 kernel: [    0.969550] Linux agpgart interface v0.103
Nov 16 16:51:19 ctl01 kernel: [    0.973128] loop: module loaded
Nov 16 16:51:19 ctl01 kernel: [    0.973881] ata_piix 0000:00:01.1: version 2.13
Nov 16 16:51:19 ctl01 kernel: [    0.974925] scsi host0: ata_piix
Nov 16 16:51:19 ctl01 kernel: [    0.975842] scsi host1: ata_piix
Nov 16 16:51:19 ctl01 kernel: [    0.976606] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Nov 16 16:51:19 ctl01 kernel: [    0.977827] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Nov 16 16:51:19 ctl01 kernel: [    0.979120] libphy: Fixed MDIO Bus: probed
Nov 16 16:51:19 ctl01 kernel: [    0.980110] tun: Universal TUN/TAP device driver, 1.6
Nov 16 16:51:19 ctl01 kernel: [    0.981149] PPP generic driver version 2.4.2
Nov 16 16:51:19 ctl01 kernel: [    0.982085] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 16 16:51:19 ctl01 kernel: [    0.983288] ehci-pci: EHCI PCI platform driver
Nov 16 16:51:19 ctl01 kernel: [    1.007736] ehci-pci 0000:00:08.7: EHCI Host Controller
Nov 16 16:51:19 ctl01 kernel: [    1.015254] ehci-pci 0000:00:08.7: new USB bus registered, assigned bus number 1
Nov 16 16:51:19 ctl01 kernel: [    1.016839] ehci-pci 0000:00:08.7: irq 11, io mem 0xfebd6000
Nov 16 16:51:19 ctl01 kernel: [    1.032031] ehci-pci 0000:00:08.7: USB 2.0 started, EHCI 1.00
Nov 16 16:51:19 ctl01 kernel: [    1.033620] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Nov 16 16:51:19 ctl01 kernel: [    1.035401] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:19 ctl01 kernel: [    1.037325] usb usb1: Product: EHCI Host Controller
Nov 16 16:51:19 ctl01 kernel: [    1.038658] usb usb1: Manufacturer: Linux 4.15.0-70-generic ehci_hcd
Nov 16 16:51:19 ctl01 kernel: [    1.040321] usb usb1: SerialNumber: 0000:00:08.7
Nov 16 16:51:19 ctl01 kernel: [    1.041747] hub 1-0:1.0: USB hub found
Nov 16 16:51:19 ctl01 kernel: [    1.042825] hub 1-0:1.0: 6 ports detected
Nov 16 16:51:19 ctl01 kernel: [    1.044176] ehci-platform: EHCI generic platform driver
Nov 16 16:51:19 ctl01 kernel: [    1.045599] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Nov 16 16:51:19 ctl01 kernel: [    1.046771] ohci-pci: OHCI PCI platform driver
Nov 16 16:51:19 ctl01 kernel: [    1.047624] ohci-platform: OHCI generic platform driver
Nov 16 16:51:19 ctl01 kernel: [    1.048647] uhci_hcd: USB Universal Host Controller Interface driver
Nov 16 16:51:19 ctl01 kernel: [    1.072873] uhci_hcd 0000:00:08.0: UHCI Host Controller
Nov 16 16:51:19 ctl01 kernel: [    1.073856] uhci_hcd 0000:00:08.0: new USB bus registered, assigned bus number 2
Nov 16 16:51:19 ctl01 kernel: [    1.075225] uhci_hcd 0000:00:08.0: detected 2 ports
Nov 16 16:51:19 ctl01 kernel: [    1.076226] uhci_hcd 0000:00:08.0: irq 11, io base 0x0000c0c0
Nov 16 16:51:19 ctl01 kernel: [    1.077357] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:51:19 ctl01 kernel: [    1.078606] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:19 ctl01 kernel: [    1.079947] usb usb2: Product: UHCI Host Controller
Nov 16 16:51:19 ctl01 kernel: [    1.080922] usb usb2: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:51:19 ctl01 kernel: [    1.082142] usb usb2: SerialNumber: 0000:00:08.0
Nov 16 16:51:19 ctl01 kernel: [    1.083280] hub 2-0:1.0: USB hub found
Nov 16 16:51:19 ctl01 kernel: [    1.084092] hub 2-0:1.0: 2 ports detected
Nov 16 16:51:19 ctl01 kernel: [    1.108133] uhci_hcd 0000:00:08.1: UHCI Host Controller
Nov 16 16:51:19 ctl01 kernel: [    1.109148] uhci_hcd 0000:00:08.1: new USB bus registered, assigned bus number 3
Nov 16 16:51:19 ctl01 kernel: [    1.110555] uhci_hcd 0000:00:08.1: detected 2 ports
Nov 16 16:51:19 ctl01 kernel: [    1.111551] uhci_hcd 0000:00:08.1: irq 10, io base 0x0000c0e0
Nov 16 16:51:19 ctl01 kernel: [    1.112732] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:51:19 ctl01 kernel: [    1.114001] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:19 ctl01 kernel: [    1.115367] usb usb3: Product: UHCI Host Controller
Nov 16 16:51:19 ctl01 kernel: [    1.116352] usb usb3: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:51:19 ctl01 kernel: [    1.117698] usb usb3: SerialNumber: 0000:00:08.1
Nov 16 16:51:19 ctl01 kernel: [    1.118746] hub 3-0:1.0: USB hub found
Nov 16 16:51:19 ctl01 kernel: [    1.119548] hub 3-0:1.0: 2 ports detected
Nov 16 16:51:19 ctl01 kernel: [    1.136780] ata1.01: NODEV after polling detection
Nov 16 16:51:19 ctl01 kernel: [    1.137108] ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 16 16:51:19 ctl01 kernel: [    1.139036] ata1.00: configured for MWDMA2
Nov 16 16:51:19 ctl01 kernel: [    1.140887] scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 16 16:51:19 ctl01 kernel: [    1.143401] sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 16 16:51:19 ctl01 kernel: [    1.144784] cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 16 16:51:19 ctl01 kernel: [    1.145878] sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 16 16:51:19 ctl01 kernel: [    1.145921] sr 0:0:0:0: Attached scsi generic sg0 type 5
Nov 16 16:51:19 ctl01 kernel: [    1.148324] uhci_hcd 0000:00:08.2: UHCI Host Controller
Nov 16 16:51:19 ctl01 kernel: [    1.149715] uhci_hcd 0000:00:08.2: new USB bus registered, assigned bus number 4
Nov 16 16:51:19 ctl01 kernel: [    1.151153] uhci_hcd 0000:00:08.2: detected 2 ports
Nov 16 16:51:19 ctl01 kernel: [    1.152220] uhci_hcd 0000:00:08.2: irq 10, io base 0x0000c100
Nov 16 16:51:19 ctl01 kernel: [    1.153606] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:51:19 ctl01 kernel: [    1.155108] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:19 ctl01 kernel: [    1.156749] usb usb4: Product: UHCI Host Controller
Nov 16 16:51:19 ctl01 kernel: [    1.157998] usb usb4: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:51:19 ctl01 kernel: [    1.159403] usb usb4: SerialNumber: 0000:00:08.2
Nov 16 16:51:19 ctl01 kernel: [    1.160626] hub 4-0:1.0: USB hub found
Nov 16 16:51:19 ctl01 kernel: [    1.161415] hub 4-0:1.0: 2 ports detected
Nov 16 16:51:19 ctl01 kernel: [    1.162379] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 16 16:51:19 ctl01 systemd[1]: Started udev Kernel Device Manager.
Nov 16 16:51:19 ctl01 kernel: [    1.164706] serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 16 16:51:19 ctl01 kernel: [    1.165693] serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 16 16:51:19 ctl01 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 16 16:51:19 ctl01 kernel: [    1.166806] mousedev: PS/2 mouse device common for all mice
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Local Encrypted Volumes.
Nov 16 16:51:19 ctl01 kernel: [    1.168153] rtc_cmos 00:00: RTC can wake from S4
Nov 16 16:51:19 ctl01 kernel: [    1.169571] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 16 16:51:19 ctl01 kernel: [    1.171283] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
Nov 16 16:51:19 ctl01 systemd[1]: Started Flush Journal to Persistent Storage.
Nov 16 16:51:19 ctl01 kernel: [    1.172638] rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 16 16:51:19 ctl01 kernel: [    1.173939] i2c /dev entries driver
Nov 16 16:51:19 ctl01 systemd-udevd[497]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:19 ctl01 kernel: [    1.174682] device-mapper: uevent: version 1.0.3
Nov 16 16:51:19 ctl01 kernel: [    1.175727] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: dm-devel@redhat.com
Nov 16 16:51:19 ctl01 systemd-udevd[496]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:19 ctl01 systemd[1]: Found device /dev/ttyS0.
Nov 16 16:51:19 ctl01 systemd-udevd[494]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:19 ctl01 systemd-udevd[499]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:19 ctl01 systemd-udevd[495]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:19 ctl01 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 16 16:51:19 ctl01 systemd[1]: Found device /dev/disk/by-label/UEFI.
Nov 16 16:51:19 ctl01 systemd[1]: Mounting /boot/efi...
Nov 16 16:51:19 ctl01 systemd[1]: Mounted /boot/efi.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Local File Systems.
Nov 16 16:51:19 ctl01 systemd[1]: Starting Create Volatile Files and Directories...
Nov 16 16:51:19 ctl01 systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
Nov 16 16:51:19 ctl01 systemd[1]: Starting Commit a transient machine-id on disk...
Nov 16 16:51:19 ctl01 systemd[1]: Starting Set console font and keymap...
Nov 16 16:51:19 ctl01 systemd[1]: Starting ebtables ruleset management...
Nov 16 16:51:19 ctl01 systemd[1]: Starting AppArmor initialization...
Nov 16 16:51:19 ctl01 systemd[1]: Started Create Volatile Files and Directories.
Nov 16 16:51:19 ctl01 systemd[1]: Starting Network Time Synchronization...
Nov 16 16:51:19 ctl01 systemd[1]: Starting Update UTMP about System Boot/Shutdown...
Nov 16 16:51:19 ctl01 systemd[1]: Started Tell Plymouth To Write Out Runtime Data.
Nov 16 16:51:19 ctl01 systemd[1]: Started Set console font and keymap.
Nov 16 16:51:19 ctl01 systemd[1]: Started Update UTMP about System Boot/Shutdown.
Nov 16 16:51:19 ctl01 apparmor[619]:  * Starting AppArmor profiles
Nov 16 16:51:19 ctl01 systemd[1]: Started ebtables ruleset management.
Nov 16 16:51:19 ctl01 apparmor[619]: Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
Nov 16 16:51:19 ctl01 systemd[1]: Started Commit a transient machine-id on disk.
Nov 16 16:51:19 ctl01 systemd[1]: Started Network Time Synchronization.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target System Time Synchronized.
Nov 16 16:51:19 ctl01 apparmor[619]:    ...done.
Nov 16 16:51:19 ctl01 systemd[1]: Started AppArmor initialization.
Nov 16 16:51:19 ctl01 systemd[1]: Starting Initial cloud-init job (pre-networking)...
Nov 16 16:51:19 ctl01 cloud-init[739]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init-local' at Sat, 16 Nov 2019 16:51:12 +0000. Up 6.68 seconds.
Nov 16 16:51:19 ctl01 systemd[1]: Started Initial cloud-init job (pre-networking).
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Network (Pre).
Nov 16 16:51:19 ctl01 systemd[1]: Starting Raise network interfaces...
Nov 16 16:51:19 ctl01 dhclient[827]: Internet Systems Consortium DHCP Client 4.3.5
Nov 16 16:51:19 ctl01 dhclient[827]: Copyright 2004-2016 Internet Systems Consortium.
Nov 16 16:51:19 ctl01 ifup[806]: Internet Systems Consortium DHCP Client 4.3.5
Nov 16 16:51:19 ctl01 ifup[806]: Copyright 2004-2016 Internet Systems Consortium.
Nov 16 16:51:19 ctl01 ifup[806]: All rights reserved.
Nov 16 16:51:19 ctl01 ifup[806]: For info, please visit https://www.isc.org/software/dhcp/
Nov 16 16:51:19 ctl01 dhclient[827]: All rights reserved.
Nov 16 16:51:19 ctl01 dhclient[827]: For info, please visit https://www.isc.org/software/dhcp/
Nov 16 16:51:19 ctl01 dhclient[827]: 
Nov 16 16:51:19 ctl01 dhclient[827]: Listening on LPF/ens3/52:54:00:9c:aa:a4
Nov 16 16:51:19 ctl01 ifup[806]: Listening on LPF/ens3/52:54:00:9c:aa:a4
Nov 16 16:51:19 ctl01 ifup[806]: Sending on   LPF/ens3/52:54:00:9c:aa:a4
Nov 16 16:51:19 ctl01 ifup[806]: Sending on   Socket/fallback
Nov 16 16:51:19 ctl01 kernel: [    1.177776] ledtrig-cpu: registered to indicate activity on CPUs
Nov 16 16:51:19 ctl01 dhclient[827]: Sending on   LPF/ens3/52:54:00:9c:aa:a4
Nov 16 16:51:19 ctl01 kernel: [    1.179519] NET: Registered protocol family 10
Nov 16 16:51:19 ctl01 kernel: [    1.184511] Segment Routing with IPv6
Nov 16 16:51:19 ctl01 ifup[806]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 3 (xid=0x2fce687b)
Nov 16 16:51:19 ctl01 kernel: [    1.185361] NET: Registered protocol family 17
Nov 16 16:51:19 ctl01 dhclient[827]: Sending on   Socket/fallback
Nov 16 16:51:19 ctl01 kernel: [    1.186461] Key type dns_resolver registered
Nov 16 16:51:19 ctl01 kernel: [    1.188183] mce: Using 10 MCE banks
Nov 16 16:51:19 ctl01 kernel: [    1.189001] RAS: Correctable Errors collector initialized.
Nov 16 16:51:19 ctl01 kernel: [    1.190177] sched_clock: Marking stable (1188159229, 0)->(1661762829, -473603600)
Nov 16 16:51:19 ctl01 dhclient[827]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 3 (xid=0x2fce687b)
Nov 16 16:51:19 ctl01 kernel: [    1.192157] registered taskstats version 1
Nov 16 16:51:19 ctl01 dhclient[827]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 7 (xid=0x2fce687b)
Nov 16 16:51:19 ctl01 kernel: [    1.193104] Loading compiled-in X.509 certificates
Nov 16 16:51:19 ctl01 kernel: [    1.196406] Loaded X.509 cert 'Build time autogenerated kernel key: 1859b0531897959199376c446a0bd70df75fd1fc'
Nov 16 16:51:19 ctl01 kernel: [    1.198260] zswap: loaded using pool lzo/zbud
Nov 16 16:51:19 ctl01 kernel: [    1.205369] Key type big_key registered
Nov 16 16:51:19 ctl01 ifup[806]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 7 (xid=0x2fce687b)
Nov 16 16:51:19 ctl01 dhclient[827]: DHCPREQUEST of 192.168.11.97 on ens3 to 255.255.255.255 port 67 (xid=0x7b68ce2f)
Nov 16 16:51:19 ctl01 kernel: [    1.206237] Key type trusted registered
Nov 16 16:51:19 ctl01 kernel: [    1.209471] Key type encrypted registered
Nov 16 16:51:19 ctl01 kernel: [    1.210614] AppArmor: AppArmor sha1 policy hashing enabled
Nov 16 16:51:19 ctl01 kernel: [    1.212043] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
Nov 16 16:51:19 ctl01 ifup[806]: DHCPREQUEST of 192.168.11.97 on ens3 to 255.255.255.255 port 67 (xid=0x7b68ce2f)
Nov 16 16:51:19 ctl01 kernel: [    1.213659] ima: Allocated hash algorithm: sha1
Nov 16 16:51:19 ctl01 kernel: [    1.214886] evm: HMAC attrs: 0x1
Nov 16 16:51:19 ctl01 kernel: [    1.216262]   Magic number: 11:270:894
Nov 16 16:51:19 ctl01 kernel: [    1.217490] rtc_cmos 00:00: setting system clock to 2019-11-16 16:51:07 UTC (1573923067)
Nov 16 16:51:19 ctl01 kernel: [    1.219302] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
Nov 16 16:51:19 ctl01 ifup[806]: DHCPOFFER of 192.168.11.97 from 192.168.11.3
Nov 16 16:51:19 ctl01 kernel: [    1.220589] EDD information not available.
Nov 16 16:51:19 ctl01 dhclient[827]: DHCPOFFER of 192.168.11.97 from 192.168.11.3
Nov 16 16:51:19 ctl01 kernel: [    1.593258] Freeing unused kernel image memory: 2432K
Nov 16 16:51:19 ctl01 ifup[806]: DHCPACK of 192.168.11.97 from 192.168.11.3
Nov 16 16:51:19 ctl01 kernel: [    1.620103] Write protecting the kernel read-only data: 20480k
Nov 16 16:51:19 ctl01 dhclient[827]: DHCPACK of 192.168.11.97 from 192.168.11.3
Nov 16 16:51:19 ctl01 ifup[806]: Failed to try-reload-or-restart systemd-resolved.service: Unit systemd-resolved.service is masked.
Nov 16 16:51:19 ctl01 kernel: [    1.624151] Freeing unused kernel image memory: 2008K
Nov 16 16:51:19 ctl01 dhclient[827]: bound to 192.168.11.97 -- renewal in 1445 seconds.
Nov 16 16:51:19 ctl01 kernel: [    1.626125] Freeing unused kernel image memory: 1880K
Nov 16 16:51:19 ctl01 ifup[806]: bound to 192.168.11.97 -- renewal in 1445 seconds.
Nov 16 16:51:19 ctl01 kernel: [    1.638389] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:51:19 ctl01 kernel: [    1.639904] x86/mm: Checking user space page tables
Nov 16 16:51:19 ctl01 systemd[1]: Started Raise network interfaces.
Nov 16 16:51:19 ctl01 kernel: [    1.648520] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:51:19 ctl01 kernel: [    1.737868] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 16 16:51:19 ctl01 systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Nov 16 16:51:19 ctl01 kernel: [    1.739067] GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 16 16:51:19 ctl01 kernel: [    1.741512] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Network.
Nov 16 16:51:19 ctl01 kernel: [    1.742952] GPT:4612095 != 209715199
Nov 16 16:51:19 ctl01 kernel: [    1.745929] GPT:Alternate GPT header not at the end of the disk.
Nov 16 16:51:19 ctl01 cloud-init[882]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init' at Sat, 16 Nov 2019 16:51:16 +0000. Up 10.63 seconds.
Nov 16 16:51:19 ctl01 kernel: [    1.747690] GPT:4612095 != 209715199
Nov 16 16:51:19 ctl01 kernel: [    1.748688] GPT: Use GNU Parted to correct GPT errors.
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 16 16:51:19 ctl01 kernel: [    1.749016] FDC 0 is a S82078B
Nov 16 16:51:19 ctl01 kernel: [    1.749990]  vda: vda1 vda14 vda15
Nov 16 16:51:19 ctl01 kernel: [    1.757288] AVX version of gcm_enc/dec engaged.
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +--------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:51:19 ctl01 kernel: [    1.758448] AES CTR mode by8 optimization enabled
Nov 16 16:51:19 ctl01 kernel: [    1.772647] virtio_net virtio0 ens3: renamed from eth0
Nov 16 16:51:19 ctl01 kernel: [    1.796228] virtio_net virtio1 ens4: renamed from eth1
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: | Device |   Up  |          Address           |      Mask     | Scope  |     Hw-Address    |
Nov 16 16:51:19 ctl01 kernel: [    1.820227] virtio_net virtio2 ens5: renamed from eth2
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +--------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:51:19 ctl01 kernel: [    1.848128] virtio_net virtio3 ens6: renamed from eth3
Nov 16 16:51:19 ctl01 kernel: [    3.468026] raid6: sse2x1   gen()  7744 MB/s
Nov 16 16:51:19 ctl01 kernel: [    3.516025] raid6: sse2x1   xor()  5849 MB/s
Nov 16 16:51:19 ctl01 kernel: [    3.564031] raid6: sse2x2   gen()  9189 MB/s
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |  ens3  |  True |       192.168.11.97        | 255.255.255.0 | global | 52:54:00:9c:aa:a4 |
Nov 16 16:51:19 ctl01 kernel: [    3.612036] raid6: sse2x2   xor()  4894 MB/s
Nov 16 16:51:19 ctl01 kernel: [    3.660025] raid6: sse2x4   gen() 11145 MB/s
Nov 16 16:51:19 ctl01 kernel: [    3.708026] raid6: sse2x4   xor()  7770 MB/s
Nov 16 16:51:19 ctl01 kernel: [    3.709104] raid6: using algorithm sse2x4 gen() 11145 MB/s
Nov 16 16:51:19 ctl01 kernel: [    3.710429] raid6: .... xor() 7770 MB/s, rmw enabled
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |  ens3  |  True | fe80::5054:ff:fe9c:aaa4/64 |       .       |  link  | 52:54:00:9c:aa:a4 |
Nov 16 16:51:19 ctl01 kernel: [    3.711636] raid6: using ssse3x2 recovery algorithm
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |  ens4  | False |             .              |       .       |   .    | 52:54:00:9d:e7:5c |
Nov 16 16:51:19 ctl01 kernel: [    3.714377] xor: automatically using best checksumming function   avx       
Nov 16 16:51:19 ctl01 kernel: [    3.717182] async_tx: api initialized (async)
Nov 16 16:51:19 ctl01 kernel: [    3.782784] Btrfs loaded, crc32c=crc32c-intel
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |  ens5  | False |             .              |       .       |   .    | 52:54:00:08:c5:ff |
Nov 16 16:51:19 ctl01 kernel: [    3.840502] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
Nov 16 16:51:19 ctl01 kernel: [    3.849982] random: fast init done
Nov 16 16:51:19 ctl01 kernel: [    4.119108] ip_tables: (C) 2000-2006 Netfilter Core Team
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |  ens6  | False |             .              |       .       |   .    | 52:54:00:c0:ab:72 |
Nov 16 16:51:19 ctl01 kernel: [    4.131880] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |   lo   |  True |         127.0.0.1          |   255.0.0.0   |  host  |         .         |
Nov 16 16:51:19 ctl01 kernel: [    4.137996] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Nov 16 16:51:19 ctl01 kernel: [    4.143021] systemd[1]: Detected virtualization kvm.
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |   lo   |  True |          ::1/128           |       .       |  host  |         .         |
Nov 16 16:51:19 ctl01 kernel: [    4.144426] systemd[1]: Detected architecture x86-64.
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +--------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:51:19 ctl01 kernel: [    4.145873] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
Nov 16 16:51:19 ctl01 kernel: [    4.147643] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:51:19 ctl01 kernel: [    4.155574] systemd[1]: Set hostname to <ubuntu>.
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:51:19 ctl01 kernel: [    4.157883] systemd[1]: Initializing machine ID from KVM UUID.
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: | Route | Destination  |   Gateway    |    Genmask    | Interface | Flags |
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:51:19 ctl01 kernel: [    4.159340] systemd[1]: Installed transient /etc/machine-id file.
Nov 16 16:51:19 ctl01 kernel: [    4.442174] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |   0   |   0.0.0.0    | 192.168.11.3 |    0.0.0.0    |    ens3   |   UG  |
Nov 16 16:51:19 ctl01 kernel: [    4.447707] systemd[1]: Created slice System Slice.
Nov 16 16:51:19 ctl01 kernel: [    4.449680] systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 16 16:51:19 ctl01 kernel: [    4.452311] systemd[1]: Listening on LVM2 metadata daemon socket.
Nov 16 16:51:19 ctl01 kernel: [    4.478929] EXT4-fs (vda1): re-mounted. Opts: (null)
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |   1   | 192.168.11.0 |   0.0.0.0    | 255.255.255.0 |    ens3   |   U   |
Nov 16 16:51:19 ctl01 kernel: [    4.517443] Loading iSCSI transport class v2.0-870.
Nov 16 16:51:19 ctl01 kernel: [    4.534662] iscsi: registered transport (tcp)
Nov 16 16:51:19 ctl01 kernel: [    4.564600] iscsi: registered transport (iser)
Nov 16 16:51:19 ctl01 kernel: [    4.604835] systemd-journald[459]: Received request to flush runtime journal from PID 1
Nov 16 16:51:19 ctl01 kernel: [    5.195998] audit: type=1400 audit(1573923071.472:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=684 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 kernel: [    5.279771] audit: type=1400 audit(1573923071.556:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=685 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:51:19 ctl01 kernel: [    5.280340] audit: type=1400 audit(1573923071.560:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=685 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 kernel: [    5.280913] audit: type=1400 audit(1573923071.560:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=685 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:51:19 ctl01 kernel: [    5.330315] audit: type=1400 audit(1573923071.608:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/tcpdump" pid=690 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 kernel: [    5.427605] audit: type=1400 audit(1573923071.704:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/sbin/dhclient" pid=680 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 16 16:51:19 ctl01 kernel: [    5.428036] audit: type=1400 audit(1573923071.704:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=680 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:51:19 ctl01 kernel: [    5.428526] audit: type=1400 audit(1573923071.708:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=680 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |   1   |  fe80::/64  |    ::   |    ens3   |   U   |
Nov 16 16:51:19 ctl01 kernel: [    5.428953] audit: type=1400 audit(1573923071.708:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=680 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |   3   |    local    |    ::   |    ens3   |   U   |
Nov 16 16:51:19 ctl01 kernel: [    5.468782] audit: type=1400 audit(1573923071.748:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=687 comm="apparmor_parser"
Nov 16 16:51:19 ctl01 kernel: [    6.853486] ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: |   4   |   ff00::/8  |    ::   |    ens3   |   U   |
Nov 16 16:51:19 ctl01 kernel: [    6.857021] ISO 9660 Extensions: RRIP_1991A
Nov 16 16:51:19 ctl01 kernel: [   11.128463] EXT4-fs (vda1): resizing filesystem from 548091 to 26185979 blocks
Nov 16 16:51:19 ctl01 kernel: [   11.350517] EXT4-fs (vda1): resized filesystem to 26185979
Nov 16 16:51:19 ctl01 kernel: [   12.760345] new mount options do not match the existing superblock, will be ignored
Nov 16 16:51:19 ctl01 cloud-init[882]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:51:19 ctl01 cloud-init[882]: Generating public/private rsa key pair.
Nov 16 16:51:19 ctl01 cloud-init[882]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Nov 16 16:51:19 ctl01 cloud-init[882]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
Nov 16 16:51:19 ctl01 cloud-init[882]: The key fingerprint is:
Nov 16 16:51:19 ctl01 cloud-init[882]: SHA256:Xsn+d5bzxHxss50rSRtUXkCfCEYse1WkfKtqeBLqv+U root@ctl01
Nov 16 16:51:19 ctl01 cloud-init[882]: The key's randomart image is:
Nov 16 16:51:19 ctl01 cloud-init[882]: +---[RSA 2048]----+
Nov 16 16:51:19 ctl01 cloud-init[882]: |          o+ .=+ |
Nov 16 16:51:19 ctl01 cloud-init[882]: |         ...o.ooo|
Nov 16 16:51:19 ctl01 cloud-init[882]: |          o .+ooo|
Nov 16 16:51:19 ctl01 cloud-init[882]: |         o o ....|
Nov 16 16:51:19 ctl01 cloud-init[882]: |        S = .  . |
Nov 16 16:51:19 ctl01 cloud-init[882]: |       . +   o.+ |
Nov 16 16:51:19 ctl01 cloud-init[882]: |        o +...+.O|
Nov 16 16:51:19 ctl01 cloud-init[882]: |       . oo+.+.=O|
Nov 16 16:51:19 ctl01 cloud-init[882]: |      ...o=E...**|
Nov 16 16:51:19 ctl01 cloud-init[882]: +----[SHA256]-----+
Nov 16 16:51:19 ctl01 cloud-init[882]: Generating public/private dsa key pair.
Nov 16 16:51:19 ctl01 cloud-init[882]: Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Nov 16 16:51:19 ctl01 cloud-init[882]: Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
Nov 16 16:51:19 ctl01 cloud-init[882]: The key fingerprint is:
Nov 16 16:51:19 ctl01 cloud-init[882]: SHA256:n66gC39qtSCj+CDikekpLYiW6U5N4B6XuWTOZQn9pAw root@ctl01
Nov 16 16:51:19 ctl01 cloud-init[882]: The key's randomart image is:
Nov 16 16:51:19 ctl01 cloud-init[882]: +---[DSA 1024]----+
Nov 16 16:51:19 ctl01 cloud-init[882]: |                 |
Nov 16 16:51:19 ctl01 cloud-init[882]: |    .            |
Nov 16 16:51:19 ctl01 cloud-init[882]: | . E . .         |
Nov 16 16:51:19 ctl01 cloud-init[882]: |. . * =          |
Nov 16 16:51:19 ctl01 cloud-init[882]: | o B * .S        |
Nov 16 16:51:19 ctl01 cloud-init[882]: |. # = .  . .     |
Nov 16 16:51:19 ctl01 cloud-init[882]: |BO+B o..  o      |
Nov 16 16:51:19 ctl01 cloud-init[882]: |&=+o..o. .       |
Nov 16 16:51:19 ctl01 cloud-init[882]: |=O..=+  ...      |
Nov 16 16:51:19 ctl01 cloud-init[882]: +----[SHA256]-----+
Nov 16 16:51:19 ctl01 cloud-init[882]: Generating public/private ecdsa key pair.
Nov 16 16:51:19 ctl01 cloud-init[882]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key.
Nov 16 16:51:19 ctl01 cloud-init[882]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub.
Nov 16 16:51:19 ctl01 cloud-init[882]: The key fingerprint is:
Nov 16 16:51:19 ctl01 cloud-init[882]: SHA256:iS2nxbeWQ6C8V7gpBCvZD9nHLeQL/95U+TdFSWcD5dw root@ctl01
Nov 16 16:51:19 ctl01 cloud-init[882]: The key's randomart image is:
Nov 16 16:51:19 ctl01 cloud-init[882]: +---[ECDSA 256]---+
Nov 16 16:51:19 ctl01 cloud-init[882]: |             .o+o|
Nov 16 16:51:19 ctl01 cloud-init[882]: |              +.=|
Nov 16 16:51:19 ctl01 cloud-init[882]: |    .   o      +E|
Nov 16 16:51:19 ctl01 cloud-init[882]: |   o * O =     o |
Nov 16 16:51:19 ctl01 cloud-init[882]: |  o = O S =   o .|
Nov 16 16:51:19 ctl01 cloud-init[882]: |   . + X O o . ..|
Nov 16 16:51:19 ctl01 cloud-init[882]: |      = * = .  .o|
Nov 16 16:51:19 ctl01 cloud-init[882]: |       o o +    o|
Nov 16 16:51:19 ctl01 cloud-init[882]: |         .o .    |
Nov 16 16:51:19 ctl01 cloud-init[882]: +----[SHA256]-----+
Nov 16 16:51:19 ctl01 cloud-init[882]: Generating public/private ed25519 key pair.
Nov 16 16:51:19 ctl01 cloud-init[882]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key.
Nov 16 16:51:19 ctl01 cloud-init[882]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub.
Nov 16 16:51:19 ctl01 cloud-init[882]: The key fingerprint is:
Nov 16 16:51:19 ctl01 cloud-init[882]: SHA256:EMKKg0hQA35B60A/tXKiCQjumsaB7YhIOZrMBn5656w root@ctl01
Nov 16 16:51:19 ctl01 cloud-init[882]: The key's randomart image is:
Nov 16 16:51:19 ctl01 cloud-init[882]: +--[ED25519 256]--+
Nov 16 16:51:19 ctl01 cloud-init[882]: |=+++...          |
Nov 16 16:51:19 ctl01 cloud-init[882]: |*...=...         |
Nov 16 16:51:19 ctl01 cloud-init[882]: |*= O o.          |
Nov 16 16:51:19 ctl01 cloud-init[882]: |O.O =  .         |
Nov 16 16:51:19 ctl01 cloud-init[882]: |oX..    S        |
Nov 16 16:51:19 ctl01 cloud-init[882]: |%=o              |
Nov 16 16:51:19 ctl01 cloud-init[882]: |OB..             |
Nov 16 16:51:19 ctl01 cloud-init[882]: |o o...           |
Nov 16 16:51:19 ctl01 cloud-init[882]: | ..E+o           |
Nov 16 16:51:19 ctl01 cloud-init[882]: +----[SHA256]-----+
Nov 16 16:51:19 ctl01 systemd[1]: Started Initial cloud-init job (metadata service crawler).
Nov 16 16:51:19 ctl01 systemd[1]: Reached target System Initialization.
Nov 16 16:51:19 ctl01 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 16 16:51:19 ctl01 systemd[1]: Starting Socket activation for snappy daemon.
Nov 16 16:51:19 ctl01 systemd[1]: Started Discard unused blocks once a week.
Nov 16 16:51:19 ctl01 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 16 16:51:19 ctl01 systemd[1]: Listening on UUID daemon activation socket.
Nov 16 16:51:19 ctl01 systemd[1]: Started ACPI Events Check.
Nov 16 16:51:19 ctl01 systemd[1]: Started Message of the Day.
Nov 16 16:51:19 ctl01 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 16 16:51:19 ctl01 systemd[1]: Listening on ACPID Listen Socket.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Paths.
Nov 16 16:51:19 ctl01 systemd[1]: Starting LXD - unix socket.
Nov 16 16:51:19 ctl01 systemd[1]: Started Daily apt download activities.
Nov 16 16:51:19 ctl01 systemd[1]: Started Daily apt upgrade and clean activities.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Timers.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Network is Online.
Nov 16 16:51:19 ctl01 systemd[1]: Starting Availability of block devices...
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Remote File Systems (Pre).
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Remote File Systems.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Cloud-config availability.
Nov 16 16:51:19 ctl01 systemd[1]: Listening on Socket activation for snappy daemon.
Nov 16 16:51:19 ctl01 systemd[1]: Listening on LXD - unix socket.
Nov 16 16:51:19 ctl01 systemd[1]: Started Availability of block devices.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Sockets.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Basic System.
Nov 16 16:51:19 ctl01 systemd[1]: Started FUSE filesystem for LXC.
Nov 16 16:51:19 ctl01 systemd[1]: Started D-Bus System Message Bus.
Nov 16 16:51:19 ctl01 dbus-daemon[1006]: [system] AppArmor D-Bus mediation is enabled
Nov 16 16:51:19 ctl01 systemd[1]: Starting Login Service...
Nov 16 16:51:19 ctl01 systemd[1]: Starting System Logging Service...
Nov 16 16:51:19 ctl01 systemd[1]: Starting Accounts Service...
Nov 16 16:51:19 ctl01 lxcfs[1001]: mount namespace: 5
Nov 16 16:51:19 ctl01 lxcfs[1001]: hierarchies:
Nov 16 16:51:19 ctl01 lxcfs[1001]:   0: fd:   6: devices
Nov 16 16:51:19 ctl01 lxcfs[1001]:   1: fd:   7: cpuset
Nov 16 16:51:19 ctl01 lxcfs[1001]:   2: fd:   8: pids
Nov 16 16:51:19 ctl01 lxcfs[1001]:   3: fd:   9: net_cls,net_prio
Nov 16 16:51:19 ctl01 lxcfs[1001]:   4: fd:  10: rdma
Nov 16 16:51:19 ctl01 lxcfs[1001]:   5: fd:  11: perf_event
Nov 16 16:51:19 ctl01 lxcfs[1001]:   6: fd:  12: hugetlb
Nov 16 16:51:19 ctl01 lxcfs[1001]:   7: fd:  13: memory
Nov 16 16:51:19 ctl01 lxcfs[1001]:   8: fd:  14: freezer
Nov 16 16:51:19 ctl01 lxcfs[1001]:   9: fd:  15: cpu,cpuacct
Nov 16 16:51:19 ctl01 lxcfs[1001]:  10: fd:  16: blkio
Nov 16 16:51:19 ctl01 lxcfs[1001]:  11: fd:  17: name=systemd
Nov 16 16:51:19 ctl01 lxcfs[1001]:  12: fd:  18: unified
Nov 16 16:51:19 ctl01 systemd[1]: Started Deferred execution scheduler.
Nov 16 16:51:19 ctl01 systemd[1]: Starting Pollinate to seed the pseudo random number generator...
Nov 16 16:51:19 ctl01 systemd[1]: Starting dnsmasq - A lightweight DHCP and caching DNS server...
Nov 16 16:51:19 ctl01 systemd[1]: Starting LSB: automatic crash report generation...
Nov 16 16:51:19 ctl01 systemd[1]: Starting Snappy daemon...
Nov 16 16:51:19 ctl01 systemd[1]: Started irqbalance daemon.
Nov 16 16:51:19 ctl01 systemd[1]: Starting Permit User Sessions...
Nov 16 16:51:19 ctl01 systemd[1]: Starting The Salt Minion...
Nov 16 16:51:19 ctl01 systemd[1]: Starting LXD - container startup/shutdown...
Nov 16 16:51:19 ctl01 systemd[1]: Starting LSB: Record successful boot for GRUB...
Nov 16 16:51:19 ctl01 systemd[1]: Started Regular background program processing daemon.
Nov 16 16:51:19 ctl01 systemd[1]: Started Permit User Sessions.
Nov 16 16:51:19 ctl01 cron[1100]: (CRON) INFO (pidfile fd = 3)
Nov 16 16:51:19 ctl01 systemd[1]: Started Login Service.
Nov 16 16:51:19 ctl01 grub-common[1092]:  * Recording successful boot for GRUB
Nov 16 16:51:19 ctl01 apport[1049]:  * Starting automatic crash report generation: apport
Nov 16 16:51:19 ctl01 systemd[1]: Started Unattended Upgrades Shutdown.
Nov 16 16:51:19 ctl01 systemd[1]: Starting Terminate Plymouth Boot Screen...
Nov 16 16:51:19 ctl01 systemd[1]: Starting Hold until boot process finishes up...
Nov 16 16:51:19 ctl01 systemd[1]: Started Hold until boot process finishes up.
Nov 16 16:51:19 ctl01 systemd[1]: Starting Set console scheme...
Nov 16 16:51:19 ctl01 systemd[1]: Started Serial Getty on ttyS0.
Nov 16 16:51:19 ctl01 systemd[1]: Started Terminate Plymouth Boot Screen.
Nov 16 16:51:19 ctl01 systemd[1]: Started System Logging Service.
Nov 16 16:51:19 ctl01 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.32.0]
Nov 16 16:51:19 ctl01 rsyslogd: rsyslogd's groupid changed to 106
Nov 16 16:51:19 ctl01 rsyslogd: rsyslogd's userid changed to 102
Nov 16 16:51:19 ctl01 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="1022" x-info="http://www.rsyslog.com"] start
Nov 16 16:51:19 ctl01 systemd[1]: Started Set console scheme.
Nov 16 16:51:19 ctl01 systemd[1]: Created slice system-getty.slice.
Nov 16 16:51:19 ctl01 pollinate[1040]: client sent challenge to [https://entropy.ubuntu.com/]
Nov 16 16:51:19 ctl01 systemd[1]: Started Getty on tty1.
Nov 16 16:51:19 ctl01 cron[1100]: (CRON) INFO (Running @reboot jobs)
Nov 16 16:51:19 ctl01 apport[1049]:    ...done.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Login Prompts.
Nov 16 16:51:19 ctl01 systemd[1]: Started LSB: automatic crash report generation.
Nov 16 16:51:19 ctl01 dnsmasq[1043]: dnsmasq: syntax check OK.
Nov 16 16:51:19 ctl01 grub-common[1092]:    ...done.
Nov 16 16:51:19 ctl01 systemd[1]: Started LSB: Record successful boot for GRUB.
Nov 16 16:51:19 ctl01 dbus-daemon[1006]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.2' (uid=0 pid=1024 comm="/usr/lib/accountsservice/accounts-daemon " label="unconfined")
Nov 16 16:51:19 ctl01 systemd[1]: Starting Authorization Manager...
Nov 16 16:51:19 ctl01 dnsmasq[1273]: started, version 2.79 cachesize 150
Nov 16 16:51:19 ctl01 dnsmasq[1273]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
Nov 16 16:51:19 ctl01 dnsmasq[1273]: reading /etc/resolv.conf
Nov 16 16:51:19 ctl01 dnsmasq[1273]: using nameserver 8.8.8.8#53
Nov 16 16:51:19 ctl01 dnsmasq[1273]: read /etc/hosts - 7 addresses
Nov 16 16:51:19 ctl01 systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.
Nov 16 16:51:19 ctl01 systemd[1]: Reached target Host and Network Name Lookups.
Nov 16 16:51:19 ctl01 polkitd[1255]: started daemon version 0.105 using authority implementation `local' version `0.105'
Nov 16 16:51:19 ctl01 dbus-daemon[1006]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 16 16:51:19 ctl01 systemd[1]: Started Authorization Manager.
Nov 16 16:51:19 ctl01 accounts-daemon[1024]: started daemon version 0.6.45
Nov 16 16:51:19 ctl01 systemd[1]: Started Accounts Service.
Nov 16 16:51:19 ctl01 snapd[1057]: AppArmor status: apparmor is enabled and all features are available
Nov 16 16:51:19 ctl01 snapd[1057]: helpers.go:145: error trying to compare the snap system key: system-key missing on disk
Nov 16 16:51:19 ctl01 snapd[1057]: daemon.go:338: started snapd/2.40+18.04 (series 16; classic) ubuntu/18.04 (amd64) linux/4.15.0-70-generic.
Nov 16 16:51:19 ctl01 systemd[1]: Started LXD - container startup/shutdown.
Nov 16 16:51:20 ctl01 pollinate[1040]: client verified challenge/response with [https://entropy.ubuntu.com/]
Nov 16 16:51:20 ctl01 pollinate[1040]: client hashed response from [https://entropy.ubuntu.com/]
Nov 16 16:51:20 ctl01 pollinate[1040]: client successfully seeded [/dev/urandom]
Nov 16 16:51:20 ctl01 systemd[1]: Started Pollinate to seed the pseudo random number generator.
Nov 16 16:51:20 ctl01 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 16 16:51:20 ctl01 systemd[1]: Started Snappy daemon.
Nov 16 16:51:20 ctl01 systemd[1]: Starting Wait until snapd is fully seeded...
Nov 16 16:51:20 ctl01 systemd[1]: Started OpenBSD Secure Shell server.
Nov 16 16:51:20 ctl01 systemd[1]: Started The Salt Minion.
Nov 16 16:51:21 ctl01 systemd[1]: Started Wait until snapd is fully seeded.
Nov 16 16:51:21 ctl01 systemd[1]: Starting Apply the settings specified in cloud-config...
Nov 16 16:51:21 ctl01 systemd[1]: Reached target Multi-User System.
Nov 16 16:51:21 ctl01 systemd[1]: Reached target Graphical Interface.
Nov 16 16:51:21 ctl01 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Nov 16 16:51:21 ctl01 systemd[1]: Started Update UTMP about System Runlevel Changes.
Nov 16 16:51:21 ctl01 salt-minion[1076]: [ERROR   ] DNS lookup or connection check of 'salt' failed.
Nov 16 16:51:21 ctl01 salt-minion[1076]: [ERROR   ] Master hostname: 'salt' not found or not responsive. Retrying in 30 seconds
Nov 16 16:51:22 ctl01 cloud-init[1445]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:config' at Sat, 16 Nov 2019 16:51:21 +0000. Up 15.23 seconds.
Nov 16 16:51:22 ctl01 systemd[1]: Started Apply the settings specified in cloud-config.
Nov 16 16:51:22 ctl01 systemd[1]: Starting Execute cloud user/final scripts...
Nov 16 16:51:23 ctl01 kernel: [   16.824103] random: crng init done
Nov 16 16:51:23 ctl01 kernel: [   16.824105] random: 7 urandom warning(s) missed due to ratelimiting
Nov 16 16:51:23 ctl01 systemd[1]: Reloading.
Nov 16 16:51:23 ctl01 cloud-init[1531]: Synchronizing state of networking.service with SysV service script with /lib/systemd/systemd-sysv-install.
Nov 16 16:51:23 ctl01 cloud-init[1531]: Executing: /lib/systemd/systemd-sysv-install enable networking
Nov 16 16:51:23 ctl01 systemd[1]: Reloading.
Nov 16 16:51:24 ctl01 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:51:24 ctl01 cloud-init[1531]: Synchronizing state of salt-minion.service with SysV service script with /lib/systemd/systemd-sysv-install.
Nov 16 16:51:24 ctl01 cloud-init[1531]: Executing: /lib/systemd/systemd-sysv-install enable salt-minion
Nov 16 16:51:24 ctl01 systemd[1]: Reloading.
Nov 16 16:51:24 ctl01 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:51:24 ctl01 systemd[1]: Stopping The Salt Minion...
Nov 16 16:51:24 ctl01 salt-minion[1076]: [WARNING ] Minion received a SIGTERM. Exiting.
Nov 16 16:51:25 ctl01 snapd[1057]: daemon.go:576: gracefully waiting for running hooks
Nov 16 16:51:25 ctl01 snapd[1057]: daemon.go:578: done waiting for running hooks
Nov 16 16:51:25 ctl01 snapd[1057]: daemon stop requested to wait for socket activation
Nov 16 16:51:25 ctl01 salt-minion[1076]: The Salt Minion is shutdown. Minion received a SIGTERM. Exited.
Nov 16 16:51:25 ctl01 systemd[1]: Stopped The Salt Minion.
Nov 16 16:51:25 ctl01 systemd[1]: Starting The Salt Minion...
Nov 16 16:51:25 ctl01 systemd[1]: Started The Salt Minion.
Nov 16 16:51:25 ctl01 ec2: 
Nov 16 16:51:25 ctl01 ec2: #############################################################
Nov 16 16:51:25 ctl01 ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 16 16:51:25 ctl01 ec2: 1024 SHA256:n66gC39qtSCj+CDikekpLYiW6U5N4B6XuWTOZQn9pAw root@ctl01 (DSA)
Nov 16 16:51:25 ctl01 ec2: 256 SHA256:iS2nxbeWQ6C8V7gpBCvZD9nHLeQL/95U+TdFSWcD5dw root@ctl01 (ECDSA)
Nov 16 16:51:25 ctl01 ec2: 256 SHA256:EMKKg0hQA35B60A/tXKiCQjumsaB7YhIOZrMBn5656w root@ctl01 (ED25519)
Nov 16 16:51:25 ctl01 ec2: 2048 SHA256:Xsn+d5bzxHxss50rSRtUXkCfCEYse1WkfKtqeBLqv+U root@ctl01 (RSA)
Nov 16 16:51:25 ctl01 ec2: -----END SSH HOST KEY FINGERPRINTS-----
Nov 16 16:51:25 ctl01 ec2: #############################################################
Nov 16 16:51:25 ctl01 cloud-init[1531]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:final' at Sat, 16 Nov 2019 16:51:23 +0000. Up 16.82 seconds.
Nov 16 16:51:25 ctl01 cloud-init[1531]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 finished at Sat, 16 Nov 2019 16:51:25 +0000. Datasource DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 19.21 seconds
Nov 16 16:51:25 ctl01 systemd[1]: Started Execute cloud user/final scripts.
Nov 16 16:51:25 ctl01 systemd[1]: Reached target Cloud-init target.
Nov 16 16:51:25 ctl01 systemd[1]: Startup finished in 4.067s (kernel) + 15.216s (userspace) = 19.283s.
Nov 16 16:51:42 ctl01 systemd-timesyncd[626]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Nov 16 16:53:58 ctl01 salt-minion[1786]: [WARNING ] The function "module.run" is using its deprecated version and will expire in version "Sodium".
Nov 16 16:54:04 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y remove telnet.
Nov 16 16:54:10 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install smartmontools.
Nov 16 16:54:20 ctl01 systemd[1]: Reloading.
Nov 16 16:54:23 ctl01 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:54:24 ctl01 systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Nov 16 16:54:24 ctl01 systemd[1]: Reloading.
Nov 16 16:54:24 ctl01 smartd[4195]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-70-generic] (local build)
Nov 16 16:54:24 ctl01 smartd[4195]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Nov 16 16:54:24 ctl01 smartd[4195]: Opened configuration file /etc/smartd.conf
Nov 16 16:54:24 ctl01 smartd[4195]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Nov 16 16:54:24 ctl01 smartd[4195]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Nov 16 16:54:24 ctl01 smartd[4195]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Nov 16 16:54:24 ctl01 smartd[4195]: In the system's table of devices NO devices found to scan
Nov 16 16:54:24 ctl01 smartd[4195]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
Nov 16 16:54:25 ctl01 systemd[1]: smartd.service: Main process exited, code=exited, status=17/n/a
Nov 16 16:54:25 ctl01 systemd[1]: smartd.service: Failed with result 'exit-code'.
Nov 16 16:54:25 ctl01 systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Nov 16 16:54:25 ctl01 smartd[4235]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-70-generic] (local build)
Nov 16 16:54:25 ctl01 smartd[4235]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Nov 16 16:54:25 ctl01 smartd[4235]: Opened configuration file /etc/smartd.conf
Nov 16 16:54:25 ctl01 smartd[4235]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Nov 16 16:54:25 ctl01 smartd[4235]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Nov 16 16:54:25 ctl01 smartd[4235]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Nov 16 16:54:25 ctl01 smartd[4235]: In the system's table of devices NO devices found to scan
Nov 16 16:54:25 ctl01 smartd[4235]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
Nov 16 16:54:25 ctl01 systemd[1]: smartd.service: Main process exited, code=exited, status=17/n/a
Nov 16 16:54:25 ctl01 systemd[1]: smartd.service: Failed with result 'exit-code'.
Nov 16 16:54:26 ctl01 systemd[1]: Reloading.
Nov 16 16:54:28 ctl01 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:54:29 ctl01 systemd[1]: Created slice system-postfix.slice.
Nov 16 16:54:29 ctl01 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Nov 16 16:54:29 ctl01 configure-instance.sh[4368]: postconf: fatal: open /etc/postfix/main.cf: No such file or directory
Nov 16 16:54:30 ctl01 configure-instance.sh[4368]: postconf: fatal: open /etc/postfix/main.cf: No such file or directory
Nov 16 16:54:31 ctl01 systemd[1]: postfix@-.service: Control process exited, code=exited status=1
Nov 16 16:54:31 ctl01 systemd[1]: postfix@-.service: Failed with result 'exit-code'.
Nov 16 16:54:31 ctl01 systemd[1]: Failed to start Postfix Mail Transport Agent (instance -).
Nov 16 16:54:38 ctl01 systemd[1]: Reloading.
Nov 16 16:54:38 ctl01 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Nov 16 16:54:38 ctl01 postfix/postfix-script[4731]: starting the Postfix mail system
Nov 16 16:54:38 ctl01 postfix/master[4733]: daemon started -- version 3.3.0, configuration /etc/postfix
Nov 16 16:54:38 ctl01 systemd[1]: Started Postfix Mail Transport Agent (instance -).
Nov 16 16:54:38 ctl01 systemd[1]: Starting Postfix Mail Transport Agent...
Nov 16 16:54:38 ctl01 systemd[1]: Started Postfix Mail Transport Agent.
Nov 16 16:54:39 ctl01 systemd[1]: Reloading.
Nov 16 16:54:41 ctl01 systemd[1]: Reloading.
Nov 16 16:54:41 ctl01 systemd[1]: Stopping System Logging Service...
Nov 16 16:54:41 ctl01 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="1022" x-info="http://www.rsyslog.com"] exiting on signal 15.
Nov 16 16:54:41 ctl01 systemd[1]: Stopped System Logging Service.
Nov 16 16:54:41 ctl01 systemd[1]: Starting System Logging Service...
Nov 16 16:54:41 ctl01 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.32.0]
Nov 16 16:54:41 ctl01 rsyslogd: rsyslogd's groupid changed to 106
Nov 16 16:54:41 ctl01 systemd[1]: Started System Logging Service.
Nov 16 16:54:41 ctl01 rsyslogd: rsyslogd's userid changed to 102
Nov 16 16:54:41 ctl01 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="5222" x-info="http://www.rsyslog.com"] start
Nov 16 16:54:43 ctl01 dbus-daemon[1006]: [system] Activating via systemd: service name='org.freedesktop.timedate1' unit='dbus-org.freedesktop.timedate1.service' requested by ':1.22' (uid=0 pid=5299 comm="timedatectl " label="unconfined")
Nov 16 16:54:43 ctl01 systemd[1]: Starting Time & Date Service...
Nov 16 16:54:43 ctl01 dbus-daemon[1006]: [system] Successfully activated service 'org.freedesktop.timedate1'
Nov 16 16:54:43 ctl01 systemd[1]: Started Time & Date Service.
Nov 16 16:54:44 ctl01 salt-minion[1786]: [WARNING ] State for file: /boot/grub/grub.cfg - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:44 ctl01 kernel: [  217.914090] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Nov 16 16:54:50 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install sysfsutils.
Nov 16 16:54:52 ctl01 systemd[1]: Reloading.
Nov 16 16:54:52 ctl01 systemd[1]: Reloading.
Nov 16 16:54:52 ctl01 systemd[1]: Starting LSB: Set sysfs variables from /etc/sysfs.conf...
Nov 16 16:54:52 ctl01 sysfsutils[6116]:  * Setting sysfs variables...
Nov 16 16:54:52 ctl01 sysfsutils[6116]:    ...done.
Nov 16 16:54:52 ctl01 systemd[1]: Started LSB: Set sysfs variables from /etc/sysfs.conf.
Nov 16 16:54:52 ctl01 systemd[1]: Reloading.
Nov 16 16:54:56 ctl01 systemd[1]: Started /bin/systemctl disable ondemand.service.
Nov 16 16:54:56 ctl01 systemd[1]: Reloading.
Nov 16 16:54:56 ctl01 dbus-daemon[1006]: [system] Activating via systemd: service name='org.freedesktop.locale1' unit='dbus-org.freedesktop.locale1.service' requested by ':1.25' (uid=0 pid=6394 comm="localectl " label="unconfined")
Nov 16 16:54:56 ctl01 systemd[1]: Starting Locale Service...
Nov 16 16:54:56 ctl01 dbus-daemon[1006]: [system] Successfully activated service 'org.freedesktop.locale1'
Nov 16 16:54:56 ctl01 systemd[1]: Started Locale Service.
Nov 16 16:54:56 ctl01 systemd-localed[6395]: Changed locale to LANG=en_US.UTF-8.
Nov 16 16:54:57 ctl01 salt-minion[1786]: [WARNING ] The function "module.run" is using its deprecated version and will expire in version "Sodium".
Nov 16 16:54:57 ctl01 systemd[1]: Reloading.
Nov 16 16:54:57 ctl01 salt-minion[1786]: [WARNING ] State for file: /etc/shadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:57 ctl01 salt-minion[1786]: [WARNING ] State for file: /etc/gshadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:57 ctl01 salt-minion[1786]: [WARNING ] State for file: /etc/group- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:57 ctl01 salt-minion[1786]: [WARNING ] State for file: /etc/group - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:57 ctl01 salt-minion[1786]: [WARNING ] State for file: /etc/passwd- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:57 ctl01 salt-minion[1786]: [WARNING ] State for file: /etc/passwd - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:57 ctl01 salt-minion[1786]: [WARNING ] State for file: /etc/gshadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:57 ctl01 salt-minion[1786]: [WARNING ] State for file: /etc/shadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:58 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install openvswitch-switch bridge-utils vlan.
Nov 16 16:55:01 ctl01 systemd[1]: Reloading.
Nov 16 16:55:02 ctl01 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:55:02 ctl01 systemd[1]: Starting Open vSwitch Database Unit...
Nov 16 16:55:02 ctl01 ovs-ctl[6710]:  * /etc/openvswitch/conf.db does not exist
Nov 16 16:55:02 ctl01 ovs-ctl[6710]:  * Creating empty database /etc/openvswitch/conf.db
Nov 16 16:55:02 ctl01 ovs-ctl[6710]:  * Starting ovsdb-server
Nov 16 16:55:02 ctl01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.16.1
Nov 16 16:55:02 ctl01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.11.1 "external-ids:system-id=\"becfb856-2070-4eef-a411-0caf8d359f97\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"ubuntu\"" "system-version=\"18.04\""
Nov 16 16:55:02 ctl01 ovs-ctl[6710]:  * Configuring Open vSwitch system IDs
Nov 16 16:55:02 ctl01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=ctl01
Nov 16 16:55:02 ctl01 ovs-ctl[6710]:  * Enabling remote OVSDB managers
Nov 16 16:55:02 ctl01 systemd[1]: Started Open vSwitch Database Unit.
Nov 16 16:55:02 ctl01 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 16 16:55:02 ctl01 ovs-ctl[6768]:  * Inserting openvswitch module
Nov 16 16:55:02 ctl01 kernel: [  235.483102] openvswitch: Open vSwitch switching datapath
Nov 16 16:55:02 ctl01 ovs-ctl[6768]:  * Starting ovs-vswitchd
Nov 16 16:55:02 ctl01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=ctl01
Nov 16 16:55:02 ctl01 ovs-ctl[6768]:  * Enabling remote OVSDB managers
Nov 16 16:55:02 ctl01 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 16 16:55:02 ctl01 systemd[1]: Starting Open vSwitch...
Nov 16 16:55:02 ctl01 systemd[1]: Started Open vSwitch.
Nov 16 16:55:02 ctl01 systemd[1]: Reloading.
Nov 16 16:55:07 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install bridge-utils.
Nov 16 16:55:09 ctl01 systemd[1]: Reloading.
Nov 16 16:55:09 ctl01 systemd[1]: Started /bin/systemctl enable networking.service.
Nov 16 16:55:09 ctl01 systemd[1]: Reloading.
Nov 16 16:55:09 ctl01 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:55:09 ctl01 dnsmasq[1273]: reading /etc/resolv.conf
Nov 16 16:55:09 ctl01 dnsmasq[1273]: using nameserver 8.8.8.8#53
Nov 16 16:55:09 ctl01 salt-minion[1786]: [WARNING ] The network state sls is requiring a reboot of the system to properly apply network configuration.
Nov 16 16:55:09 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install vlan.
Nov 16 16:55:10 ctl01 kernel: [  243.652878] 8021q: 802.1Q VLAN Support v1.8
Nov 16 16:55:10 ctl01 kernel: [  243.652888] 8021q: adding VLAN 0 to HW filter on device ens3
Nov 16 16:55:10 ctl01 kernel: [  243.652965] 8021q: adding VLAN 0 to HW filter on device ens5
Nov 16 16:55:10 ctl01 systemd-udevd[7579]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:10 ctl01 systemd[1]: Reloading OpenBSD Secure Shell server.
Nov 16 16:55:10 ctl01 systemd[1]: Found device /sys/subsystem/net/devices/ens5.1000.
Nov 16 16:55:10 ctl01 systemd[1]: Reloaded OpenBSD Secure Shell server.
Nov 16 16:55:10 ctl01 systemd[1]: Started ifup for ens5.1000.
Nov 16 16:55:10 ctl01 sh[7682]: ifup: waiting for lock on /run/network/ifstate.ens5
Nov 16 16:55:10 ctl01 systemd[1]: Reloading Postfix Mail Transport Agent (instance -).
Nov 16 16:55:10 ctl01 postfix/postfix-script[7704]: refreshing the Postfix mail system
Nov 16 16:55:10 ctl01 postfix/master[4733]: reload -- version 3.3.0, configuration /etc/postfix
Nov 16 16:55:10 ctl01 systemd[1]: Reloaded Postfix Mail Transport Agent (instance -).
Nov 16 16:55:10 ctl01 systemd[1]: Reloading Postfix Mail Transport Agent.
Nov 16 16:55:10 ctl01 systemd[1]: Reloaded Postfix Mail Transport Agent.
Nov 16 16:55:10 ctl01 sh[7682]: ifup: interface ens5.1000 already configured
Nov 16 16:55:11 ctl01 salt-minion[1786]: [ERROR   ] Command '['umount', '/dev/shm']' failed with return code: 32
Nov 16 16:55:11 ctl01 salt-minion[1786]: [ERROR   ] stderr: umount: /dev/shm: target is busy.
Nov 16 16:55:11 ctl01 salt-minion[1786]: [ERROR   ] retcode: 32
Nov 16 16:55:13 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef dist-upgrade.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Locale Service.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped target Cloud-init target.
Nov 16 16:55:26 ctl01 systemd[1]: Stopping Authorization Manager...
Nov 16 16:55:26 ctl01 systemd[1]: Closed Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Execute cloud user/final scripts.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Apply the settings specified in cloud-config.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped target Cloud-config availability.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped target Timers.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Message of the Day.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Daily apt upgrade and clean activities.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Daily apt download activities.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Discard unused blocks once a week.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped target System Time Synchronized.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Daily Cleanup of Temporary Directories.
Nov 16 16:55:26 ctl01 systemd[1]: Stopping Availability of block devices...
Nov 16 16:55:26 ctl01 systemd[1]: Stopped target Graphical Interface.
Nov 16 16:55:26 ctl01 systemd[1]: Stopped target Multi-User System.
Nov 16 16:55:26 ctl01 systemd[1]: Stopping OpenBSD Secure Shell server...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping irqbalance daemon...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping Unattended Upgrades Shutdown...
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Wait until snapd is fully seeded.
Nov 16 16:55:26 ctl01 systemd[1]: Stopping D-Bus System Message Bus...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping Regular background program processing daemon...
Nov 16 16:55:26 ctl01 systemd[1]: Stopped Postfix Mail Transport Agent.
Nov 16 16:55:26 ctl01 systemd[1]: Stopping Postfix Mail Transport Agent (instance -)...
Nov 16 16:55:26 ctl01 systemd[1]: Stopped target Login Prompts.
Nov 16 16:55:26 ctl01 systemd[1]: Stopping Getty on tty1...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping Serial Getty on ttyS0...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping Deferred execution scheduler...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping LSB: Set sysfs variables from /etc/sysfs.conf...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping FUSE filesystem for LXC...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping The Salt Minion...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping System Logging Service...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping LXD - container startup/shutdown...
Nov 16 16:55:26 ctl01 salt-minion[1786]: [WARNING ] Minion received a SIGTERM. Exiting.
Nov 16 16:55:26 ctl01 systemd[1]: Stopping LSB: Record successful boot for GRUB...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping Accounts Service...
Nov 16 16:55:26 ctl01 systemd[1]: Stopping LSB: automatic crash report generation...
Nov 16 16:55:58 ctl01 systemd-modules-load[449]: Inserted module 'iscsi_tcp'
Nov 16 16:55:58 ctl01 systemd-modules-load[449]: Inserted module 'ib_iser'
Nov 16 16:55:58 ctl01 systemd-modules-load[449]: Inserted module 'nf_conntrack'
Nov 16 16:55:58 ctl01 systemd[1]: Mounted Kernel Configuration File System.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 16 16:55:58 ctl01 systemd[1]: Starting udev Kernel Device Manager...
Nov 16 16:55:58 ctl01 systemd[1]: Started Apply Kernel Variables.
Nov 16 16:55:58 ctl01 systemd[1]: Started Set the console keyboard layout.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Local File Systems (Pre).
Nov 16 16:55:58 ctl01 systemd[1]: Started Flush Journal to Persistent Storage.
Nov 16 16:55:58 ctl01 systemd[1]: Started udev Kernel Device Manager.
Nov 16 16:55:58 ctl01 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Local Encrypted Volumes.
Nov 16 16:55:58 ctl01 systemd[1]: Found device /dev/ttyS0.
Nov 16 16:55:58 ctl01 systemd-udevd[504]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:58 ctl01 systemd-udevd[508]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:58 ctl01 systemd-udevd[500]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:58 ctl01 systemd-udevd[488]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:58 ctl01 systemd-udevd[487]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:58 ctl01 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 16 16:55:58 ctl01 systemd-udevd[502]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:58 ctl01 systemd-udevd[502]: Could not generate persistent MAC address for br-mgmt: No such file or directory
Nov 16 16:55:58 ctl01 systemd[1]: Found device Virtio network device.
Nov 16 16:55:58 ctl01 systemd[1]: Found device Virtio network device.
Nov 16 16:55:58 ctl01 systemd[1]: Found device /sys/subsystem/net/devices/br-mgmt.
Nov 16 16:55:58 ctl01 systemd[1]: Found device Virtio network device.
Nov 16 16:55:58 ctl01 systemd[1]: Found device /dev/disk/by-label/UEFI.
Nov 16 16:55:58 ctl01 systemd[1]: Mounting /boot/efi...
Nov 16 16:55:58 ctl01 systemd[1]: Mounted /boot/efi.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Local File Systems.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
Nov 16 16:55:58 ctl01 systemd[1]: Starting Set console font and keymap...
Nov 16 16:55:58 ctl01 systemd[1]: Starting AppArmor initialization...
Nov 16 16:55:58 ctl01 systemd[1]: Starting Create Volatile Files and Directories...
Nov 16 16:55:58 ctl01 systemd[1]: Starting ebtables ruleset management...
Nov 16 16:55:58 ctl01 systemd[1]: Started Set console font and keymap.
Nov 16 16:55:58 ctl01 systemd[1]: Started Tell Plymouth To Write Out Runtime Data.
Nov 16 16:55:58 ctl01 systemd[1]: Started Create Volatile Files and Directories.
Nov 16 16:55:58 ctl01 apparmor[933]:  * Starting AppArmor profiles
Nov 16 16:55:58 ctl01 systemd[1]: Starting Update UTMP about System Boot/Shutdown...
Nov 16 16:55:58 ctl01 systemd[1]: Starting Network Time Synchronization...
Nov 16 16:55:58 ctl01 systemd[1]: Started Update UTMP about System Boot/Shutdown.
Nov 16 16:55:58 ctl01 systemd[1]: Started ebtables ruleset management.
Nov 16 16:55:58 ctl01 apparmor[933]: Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
Nov 16 16:55:58 ctl01 apparmor[933]:    ...done.
Nov 16 16:55:58 ctl01 systemd[1]: Started AppArmor initialization.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Initial cloud-init job (pre-networking)...
Nov 16 16:55:58 ctl01 systemd[1]: Started Network Time Synchronization.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target System Time Synchronized.
Nov 16 16:55:58 ctl01 cloud-init[1069]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init-local' at Sat, 16 Nov 2019 16:55:53 +0000. Up 11.10 seconds.
Nov 16 16:55:58 ctl01 systemd[1]: Started Initial cloud-init job (pre-networking).
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Network (Pre).
Nov 16 16:55:58 ctl01 systemd[1]: Started ifup for ens5.
Nov 16 16:55:58 ctl01 systemd[1]: Started ifup for ens3.
Nov 16 16:55:58 ctl01 systemd[1]: Started ifup for br-mgmt.
Nov 16 16:55:58 ctl01 systemd[1]: Started ifup for ens4.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Open vSwitch Database Unit...
Nov 16 16:55:58 ctl01 sh[1139]: Waiting for br-mgmt to get ready (MAXWAIT is 32 seconds).
Nov 16 16:55:58 ctl01 sh[1128]: WARNING:  Could not open /proc/net/vlan/config.  Maybe you need to load the 8021q module, or maybe you are not using PROCFS??
Nov 16 16:55:58 ctl01 sh[1128]: Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Nov 16 16:55:58 ctl01 sh[1128]: Added VLAN with VID == 1000 to IF -:ens5:-
Nov 16 16:55:58 ctl01 systemd-udevd[1417]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:58 ctl01 ovs-ctl[1157]:  * Starting ovsdb-server
Nov 16 16:55:58 ctl01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.16.1
Nov 16 16:55:58 ctl01 systemd[1]: Found device /sys/subsystem/net/devices/ens5.1000.
Nov 16 16:55:58 ctl01 systemd[1]: Started ifup for ens5.1000.
Nov 16 16:55:58 ctl01 sh[1527]: Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Nov 16 16:55:58 ctl01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.11.1 "external-ids:system-id=\"becfb856-2070-4eef-a411-0caf8d359f97\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"ubuntu\"" "system-version=\"18.04\""
Nov 16 16:55:58 ctl01 ovs-ctl[1157]:  * Configuring Open vSwitch system IDs
Nov 16 16:55:58 ctl01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=ctl01
Nov 16 16:55:58 ctl01 ovs-ctl[1157]:  * Enabling remote OVSDB managers
Nov 16 16:55:58 ctl01 systemd[1]: Started Open vSwitch Database Unit.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 16 16:55:58 ctl01 ovs-ctl[1602]:  * Inserting openvswitch module
Nov 16 16:55:58 ctl01 ovs-ctl[1602]:  * Starting ovs-vswitchd
Nov 16 16:55:58 ctl01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=ctl01
Nov 16 16:55:58 ctl01 ovs-ctl[1602]:  * Enabling remote OVSDB managers
Nov 16 16:55:58 ctl01 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Raise network interfaces...
Nov 16 16:55:58 ctl01 systemd[1]: Started Raise network interfaces.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Nov 16 16:55:58 ctl01 cloud-init[1886]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init' at Sat, 16 Nov 2019 16:55:54 +0000. Up 12.60 seconds.
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: ++++++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++++++
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +-----------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   Device  |   Up  |          Address           |      Mask     | Scope  |     Hw-Address    |
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Linux version 4.15.0-70-generic (buildd@lgw01-amd64-055) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 (Ubuntu 4.15.0-70.79-generic 4.15.18)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:55:58 ctl01 kernel: [    0.000000] KERNEL supported cpus:
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   Intel GenuineIntel
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   AMD AuthenticAMD
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   Centaur CentaurHauls
Nov 16 16:55:58 ctl01 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 16 16:55:58 ctl01 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 16 16:55:58 ctl01 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 16 16:55:58 ctl01 kernel: [    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 16 16:55:58 ctl01 kernel: [    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 16 16:55:58 ctl01 kernel: [    0.000000] e820: BIOS-provided physical RAM map:
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdefff] usable
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x00000000bffdf000-0x00000000bfffffff] reserved
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x00000003bfffffff] usable
Nov 16 16:55:58 ctl01 kernel: [    0.000000] NX (Execute Disable) protection: active
Nov 16 16:55:58 ctl01 kernel: [    0.000000] SMBIOS 2.8 present.
Nov 16 16:55:58 ctl01 kernel: [    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Hypervisor detected: KVM
Nov 16 16:55:58 ctl01 kernel: [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 16 16:55:58 ctl01 kernel: [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 16 16:55:58 ctl01 kernel: [    0.000000] e820: last_pfn = 0x3c0000 max_arch_pfn = 0x400000000
Nov 16 16:55:58 ctl01 kernel: [    0.000000] MTRR default type: write-back
Nov 16 16:55:58 ctl01 kernel: [    0.000000] MTRR fixed ranges enabled:
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   00000-9FFFF write-back
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   A0000-BFFFF uncachable
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   C0000-FFFFF write-protect
Nov 16 16:55:58 ctl01 kernel: [    0.000000] MTRR variable ranges enabled:
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   0 base 00C0000000 mask FFC0000000 uncachable
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   1 disabled
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   2 disabled
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   3 disabled
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   4 disabled
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   5 disabled
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   6 disabled
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   7 disabled
Nov 16 16:55:58 ctl01 kernel: [    0.000000] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 16 16:55:58 ctl01 kernel: [    0.000000] e820: last_pfn = 0xbffdf max_arch_pfn = 0x400000000
Nov 16 16:55:58 ctl01 kernel: [    0.000000] found SMP MP-table at [mem 0x000f6560-0x000f656f]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Scanning 1 areas for low memory corruption
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Using GB pages for direct mapping
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BRK [0x7ef41000, 0x7ef41fff] PGTABLE
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BRK [0x7ef42000, 0x7ef42fff] PGTABLE
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BRK [0x7ef43000, 0x7ef43fff] PGTABLE
Nov 16 16:55:58 ctl01 kernel: [    0.000000] BRK [0x7ef44000, 0x7ef44fff] PGTABLE
Nov 16 16:55:58 ctl01 kernel: [    0.000000] RAMDISK: [mem 0x35a87000-0x36d3afff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: Early table checksum verification disabled
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: RSDP 0x00000000000F6510 000014 (v00 BOCHS )
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: RSDT 0x00000000BFFE14CC 000030 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: FACP 0x00000000BFFE0854 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: DSDT 0x00000000BFFDFC80 000BD4 (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: FACS 0x00000000BFFDFC40 000040
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: SSDT 0x00000000BFFE08C8 000B54 (v01 BOCHS  BXPCSSDT 00000001 BXPC 00000001)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: APIC 0x00000000BFFE141C 0000B0 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:55:58 ctl01 kernel: [    0.000000] No NUMA configuration found
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Faking a node at [mem 0x0000000000000000-0x00000003bfffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] NODE_DATA(0) allocated [mem 0x3bffd3000-0x3bfffdfff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] kvm-clock: cpu 0, msr 3:bff52001, primary cpu clock
Nov 16 16:55:58 ctl01 kernel: [    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 16 16:55:58 ctl01 kernel: [    0.000000] kvm-clock: using sched offset of 285422915336 cycles
Nov 16 16:55:58 ctl01 kernel: [    0.000000] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Zone ranges:
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   Normal   [mem 0x0000000100000000-0x00000003bfffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   Device   empty
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Movable zone start for each node
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Early memory node ranges
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   node   0: [mem 0x0000000000100000-0x00000000bffdefff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   node   0: [mem 0x0000000100000000-0x00000003bfffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Reserved but unavailable: 98 pages
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x00000003bfffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] On node 0 totalpages: 3669885
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   DMA zone: 64 pages used for memmap
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   DMA zone: 21 pages reserved
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   DMA zone: 3998 pages, LIFO batch:0
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   DMA32 zone: 12224 pages used for memmap
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   DMA32 zone: 782303 pages, LIFO batch:31
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   Normal zone: 45056 pages used for memmap
Nov 16 16:55:58 ctl01 kernel: [    0.000000]   Normal zone: 2883584 pages, LIFO batch:31
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: PM-Timer IO Port: 0x608
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 16 16:55:58 ctl01 kernel: [    0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: IRQ0 used by override.
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: IRQ5 used by override.
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: IRQ9 used by override.
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: IRQ10 used by override.
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ACPI: IRQ11 used by override.
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Using ACPI (MADT) for SMP configuration information
Nov 16 16:55:58 ctl01 kernel: [    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xbffdf000-0xbfffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Booting paravirtualized kernel on KVM
Nov 16 16:55:58 ctl01 kernel: [    0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
Nov 16 16:55:58 ctl01 kernel: [    0.000000] random: get_random_bytes called from start_kernel+0x99/0x4fd with crng_init=0
Nov 16 16:55:58 ctl01 kernel: [    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 16 16:55:58 ctl01 kernel: [    0.000000] percpu: Embedded 45 pages/cpu s147456 r8192 d28672 u262144
Nov 16 16:55:58 ctl01 kernel: [    0.000000] pcpu-alloc: s147456 r8192 d28672 u262144 alloc=1*2097152
Nov 16 16:55:58 ctl01 kernel: [    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 16 16:55:58 ctl01 kernel: [    0.000000] KVM setup async PF for cpu 0
Nov 16 16:55:58 ctl01 kernel: [    0.000000] kvm-stealtime: cpu 0, msr 3bfc23040
Nov 16 16:55:58 ctl01 kernel: [    0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 3612520
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Policy zone: Normal
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Calgary: detecting Calgary via BIOS EBDA area
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Memory: 14334824K/14679540K available (12300K kernel code, 2481K rwdata, 4264K rodata, 2432K init, 2388K bss, 344716K reserved, 0K cma-reserved)
Nov 16 16:55:58 ctl01 kernel: [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 16 16:55:58 ctl01 kernel: [    0.000000] Kernel/User page tables isolation: enabled
Nov 16 16:55:58 ctl01 kernel: [    0.000000] ftrace: allocating 39315 entries in 154 pages
Nov 16 16:55:58 ctl01 kernel: [    0.004000] Hierarchical RCU implementation.
Nov 16 16:55:58 ctl01 kernel: [    0.004000] 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 16 16:55:58 ctl01 kernel: [    0.004000] 	Tasks RCU enabled.
Nov 16 16:55:58 ctl01 kernel: [    0.004000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 16 16:55:58 ctl01 kernel: [    0.004000] NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 16 16:55:58 ctl01 kernel: [    0.004000] Console: colour VGA+ 80x25
Nov 16 16:55:58 ctl01 kernel: [    0.004000] console [tty1] enabled
Nov 16 16:55:58 ctl01 kernel: [    0.004000] console [ttyS0] enabled
Nov 16 16:55:58 ctl01 kernel: [    0.004000] ACPI: Core revision 20170831
Nov 16 16:55:58 ctl01 kernel: [    0.004000] ACPI: 2 ACPI AML tables successfully acquired and loaded
Nov 16 16:55:58 ctl01 kernel: [    0.004004] APIC: Switch to symmetric I/O mode setup
Nov 16 16:55:58 ctl01 kernel: [    0.005228] x2apic enabled
Nov 16 16:55:58 ctl01 kernel: [    0.006158] Switched APIC routing to physical x2apic.
Nov 16 16:55:58 ctl01 kernel: [    0.008000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 16 16:55:58 ctl01 kernel: [    0.008000] tsc: Detected 2799.994 MHz processor
Nov 16 16:55:58 ctl01 kernel: [    0.008000] Calibrating delay loop (skipped) preset value.. 5599.98 BogoMIPS (lpj=11199976)
Nov 16 16:55:58 ctl01 kernel: [    0.008002] pid_max: default: 32768 minimum: 301
Nov 16 16:55:58 ctl01 kernel: [    0.008965] Security Framework initialized
Nov 16 16:55:58 ctl01 kernel: [    0.009797] Yama: becoming mindful.
Nov 16 16:55:58 ctl01 kernel: [    0.010534] AppArmor: AppArmor initialized
Nov 16 16:55:58 ctl01 kernel: [    0.013870] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.016586] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.018067] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.019428] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.020280] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 16 16:55:58 ctl01 kernel: [    0.021399] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 16 16:55:58 ctl01 kernel: [    0.022560] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 16 16:55:58 ctl01 kernel: [    0.024002] Spectre V2 : Mitigation: Full generic retpoline
Nov 16 16:55:58 ctl01 kernel: [    0.025133] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 16 16:55:58 ctl01 kernel: [    0.026696] Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 16 16:55:58 ctl01 kernel: [    0.028008] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 16 16:55:58 ctl01 kernel: [    0.029604] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 16 16:55:58 ctl01 kernel: [    0.032029] MDS: Mitigation: Clear CPU buffers
Nov 16 16:55:58 ctl01 kernel: [    0.038155] Freeing SMP alternatives memory: 36K
Nov 16 16:55:58 ctl01 kernel: [    0.040790] TSC deadline timer enabled
Nov 16 16:55:58 ctl01 kernel: [    0.040793] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (family: 0x6, model: 0x3e, stepping: 0x4)
Nov 16 16:55:58 ctl01 kernel: [    0.042816] Performance Events: IvyBridge events, Intel PMU driver.
Nov 16 16:55:58 ctl01 kernel: [    0.043993] ... version:                2
Nov 16 16:55:58 ctl01 kernel: [    0.044000] ... bit width:              48
Nov 16 16:55:58 ctl01 kernel: [    0.044004] ... generic registers:      4
Nov 16 16:55:58 ctl01 kernel: [    0.044817] ... value mask:             0000ffffffffffff
Nov 16 16:55:58 ctl01 kernel: [    0.046264] ... max period:             000000007fffffff
Nov 16 16:55:58 ctl01 kernel: [    0.047289] ... fixed-purpose events:   3
Nov 16 16:55:58 ctl01 kernel: [    0.048004] ... event mask:             000000070000000f
Nov 16 16:55:58 ctl01 kernel: [    0.049041] Hierarchical SRCU implementation.
Nov 16 16:55:58 ctl01 kernel: [    0.050520] smp: Bringing up secondary CPUs ...
Nov 16 16:55:58 ctl01 kernel: [    0.051493] x86: Booting SMP configuration:
Nov 16 16:55:58 ctl01 kernel: [    0.052006] .... node  #0, CPUs:      #1
Nov 16 16:55:58 ctl01 kernel: [    0.004000] kvm-clock: cpu 1, msr 3:bff52041, secondary cpu clock
Nov 16 16:55:58 ctl01 kernel: [    0.056025] KVM setup async PF for cpu 1
Nov 16 16:55:58 ctl01 kernel: [    0.056876] kvm-stealtime: cpu 1, msr 3bfc63040
Nov 16 16:55:58 ctl01 kernel: [    0.057846]  #2
Nov 16 16:55:58 ctl01 kernel: [    0.004000] kvm-clock: cpu 2, msr 3:bff52081, secondary cpu clock
Nov 16 16:55:58 ctl01 kernel: [    0.060017] KVM setup async PF for cpu 2
Nov 16 16:55:58 ctl01 kernel: [    0.060845] kvm-stealtime: cpu 2, msr 3bfca3040
Nov 16 16:55:58 ctl01 kernel: [    0.061776]  #3
Nov 16 16:55:58 ctl01 kernel: [    0.004000] kvm-clock: cpu 3, msr 3:bff520c1, secondary cpu clock
Nov 16 16:55:58 ctl01 kernel: [    0.064016] KVM setup async PF for cpu 3
Nov 16 16:55:58 ctl01 kernel: [    0.064814] kvm-stealtime: cpu 3, msr 3bfce3040
Nov 16 16:55:58 ctl01 kernel: [    0.065735]  #4
Nov 16 16:55:58 ctl01 kernel: [    0.004000] kvm-clock: cpu 4, msr 3:bff52101, secondary cpu clock
Nov 16 16:55:58 ctl01 kernel: [    0.072016] KVM setup async PF for cpu 4
Nov 16 16:55:58 ctl01 kernel: [    0.072821] kvm-stealtime: cpu 4, msr 3bfd23040
Nov 16 16:55:58 ctl01 kernel: [    0.073726]  #5
Nov 16 16:55:58 ctl01 kernel: [    0.004000] kvm-clock: cpu 5, msr 3:bff52141, secondary cpu clock
Nov 16 16:55:58 ctl01 kernel: [    0.076017] KVM setup async PF for cpu 5
Nov 16 16:55:58 ctl01 kernel: [    0.076840] kvm-stealtime: cpu 5, msr 3bfd63040
Nov 16 16:55:58 ctl01 kernel: [    0.077765]  #6
Nov 16 16:55:58 ctl01 kernel: [    0.004000] kvm-clock: cpu 6, msr 3:bff52181, secondary cpu clock
Nov 16 16:55:58 ctl01 kernel: [    0.080016] KVM setup async PF for cpu 6
Nov 16 16:55:58 ctl01 kernel: [    0.080866] kvm-stealtime: cpu 6, msr 3bfda3040
Nov 16 16:55:58 ctl01 kernel: [    0.081784]  #7
Nov 16 16:55:58 ctl01 kernel: [    0.004000] kvm-clock: cpu 7, msr 3:bff521c1, secondary cpu clock
Nov 16 16:55:58 ctl01 kernel: [    0.084017] KVM setup async PF for cpu 7
Nov 16 16:55:58 ctl01 kernel: [    0.085956] kvm-stealtime: cpu 7, msr 3bfde3040
Nov 16 16:55:58 ctl01 kernel: [    0.088007] smp: Brought up 1 node, 8 CPUs
Nov 16 16:55:58 ctl01 kernel: [    0.092005] smpboot: Max logical packages: 8
Nov 16 16:55:58 ctl01 kernel: [    0.092920] smpboot: Total of 8 processors activated (44799.90 BogoMIPS)
Nov 16 16:55:58 ctl01 kernel: [    0.094602] devtmpfs: initialized
Nov 16 16:55:58 ctl01 kernel: [    0.096059] x86/mm: Memory block size: 128MB
Nov 16 16:55:58 ctl01 kernel: [    0.098041] evm: security.selinux
Nov 16 16:55:58 ctl01 kernel: [    0.098803] evm: security.SMACK64
Nov 16 16:55:58 ctl01 kernel: [    0.099549] evm: security.SMACK64EXEC
Nov 16 16:55:58 ctl01 kernel: [    0.100005] evm: security.SMACK64TRANSMUTE
Nov 16 16:55:58 ctl01 kernel: [    0.100903] evm: security.SMACK64MMAP
Nov 16 16:55:58 ctl01 kernel: [    0.101713] evm: security.apparmor
Nov 16 16:55:58 ctl01 kernel: [    0.102478] evm: security.ima
Nov 16 16:55:58 ctl01 kernel: [    0.103169] evm: security.capability
Nov 16 16:55:58 ctl01 kernel: [    0.104157] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
Nov 16 16:55:58 ctl01 kernel: [    0.106052] futex hash table entries: 2048 (order: 5, 131072 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.107382] pinctrl core: initialized pinctrl subsystem
Nov 16 16:55:58 ctl01 kernel: [    0.108176] RTC time: 16:55:42, date: 11/16/19
Nov 16 16:55:58 ctl01 kernel: [    0.109976] NET: Registered protocol family 16
Nov 16 16:55:58 ctl01 kernel: [    0.110988] audit: initializing netlink subsys (disabled)
Nov 16 16:55:58 ctl01 kernel: [    0.112033] audit: type=2000 audit(1573923341.005:1): state=initialized audit_enabled=0 res=1
Nov 16 16:55:58 ctl01 kernel: [    0.112157] cpuidle: using governor ladder
Nov 16 16:55:58 ctl01 kernel: [    0.112963] cpuidle: using governor menu
Nov 16 16:55:58 ctl01 kernel: [    0.116203] ACPI: bus type PCI registered
Nov 16 16:55:58 ctl01 kernel: [    0.117096] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 16 16:55:58 ctl01 kernel: [    0.118501] PCI: Using configuration type 1 for base access
Nov 16 16:55:58 ctl01 kernel: [    0.120009] core: PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off
Nov 16 16:55:58 ctl01 kernel: [    0.122560] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 16 16:55:58 ctl01 kernel: [    0.124005] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 16 16:55:58 ctl01 kernel: [    0.125381] ACPI: Added _OSI(Module Device)
Nov 16 16:55:58 ctl01 kernel: [    0.125381] ACPI: Added _OSI(Processor Device)
Nov 16 16:55:58 ctl01 kernel: [    0.126036] ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 16 16:55:58 ctl01 kernel: [    0.128006] ACPI: Added _OSI(Processor Aggregator Device)
Nov 16 16:55:58 ctl01 kernel: [    0.129323] ACPI: Added _OSI(Linux-Dell-Video)
Nov 16 16:55:58 ctl01 kernel: [    0.130267] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 16 16:55:58 ctl01 kernel: [    0.131414] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 16 16:55:58 ctl01 kernel: [    0.133467] ACPI: Interpreter enabled
Nov 16 16:55:58 ctl01 kernel: [    0.134362] ACPI: (supports S0 S5)
Nov 16 16:55:58 ctl01 kernel: [    0.135154] ACPI: Using IOAPIC for interrupt routing
Nov 16 16:55:58 ctl01 kernel: [    0.136014] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 16 16:55:58 ctl01 kernel: [    0.138419] ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 16 16:55:58 ctl01 kernel: [    0.142830] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 16 16:55:58 ctl01 kernel: [    0.144009] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
Nov 16 16:55:58 ctl01 kernel: [    0.145386] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
Nov 16 16:55:58 ctl01 kernel: [    0.146753] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 16 16:55:58 ctl01 kernel: [    0.148313] acpiphp: Slot [3] registered
Nov 16 16:55:58 ctl01 kernel: [    0.149248] acpiphp: Slot [4] registered
Nov 16 16:55:58 ctl01 kernel: [    0.150185] acpiphp: Slot [5] registered
Nov 16 16:55:58 ctl01 kernel: [    0.151030] acpiphp: Slot [6] registered
Nov 16 16:55:58 ctl01 kernel: [    0.151883] acpiphp: Slot [7] registered
Nov 16 16:55:58 ctl01 kernel: [    0.152058] acpiphp: Slot [9] registered
Nov 16 16:55:58 ctl01 kernel: [    0.152992] acpiphp: Slot [10] registered
Nov 16 16:55:58 ctl01 kernel: [    0.153908] acpiphp: Slot [11] registered
Nov 16 16:55:58 ctl01 kernel: [    0.154774] acpiphp: Slot [12] registered
Nov 16 16:55:58 ctl01 kernel: [    0.155677] acpiphp: Slot [13] registered
Nov 16 16:55:58 ctl01 kernel: [    0.156042] acpiphp: Slot [14] registered
Nov 16 16:55:58 ctl01 kernel: [    0.156983] acpiphp: Slot [15] registered
Nov 16 16:55:58 ctl01 kernel: [    0.157850] acpiphp: Slot [16] registered
Nov 16 16:55:58 ctl01 kernel: [    0.158759] acpiphp: Slot [17] registered
Nov 16 16:55:58 ctl01 kernel: [    0.159656] acpiphp: Slot [18] registered
Nov 16 16:55:58 ctl01 kernel: [    0.160047] acpiphp: Slot [19] registered
Nov 16 16:55:58 ctl01 kernel: [    0.160998] acpiphp: Slot [20] registered
Nov 16 16:55:58 ctl01 kernel: [    0.161910] acpiphp: Slot [21] registered
Nov 16 16:55:58 ctl01 kernel: [    0.162883] acpiphp: Slot [22] registered
Nov 16 16:55:58 ctl01 kernel: [    0.163835] acpiphp: Slot [23] registered
Nov 16 16:55:58 ctl01 kernel: [    0.164041] acpiphp: Slot [24] registered
Nov 16 16:55:58 ctl01 kernel: [    0.164929] acpiphp: Slot [25] registered
Nov 16 16:55:58 ctl01 kernel: [    0.165861] acpiphp: Slot [26] registered
Nov 16 16:55:58 ctl01 kernel: [    0.166816] acpiphp: Slot [27] registered
Nov 16 16:55:58 ctl01 kernel: [    0.167721] acpiphp: Slot [28] registered
Nov 16 16:55:58 ctl01 kernel: [    0.168043] acpiphp: Slot [29] registered
Nov 16 16:55:58 ctl01 kernel: [    0.168966] acpiphp: Slot [30] registered
Nov 16 16:55:58 ctl01 kernel: [    0.169902] acpiphp: Slot [31] registered
Nov 16 16:55:58 ctl01 kernel: [    0.170803] PCI host bridge to bus 0000:00
Nov 16 16:55:58 ctl01 kernel: [    0.171690] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 16 16:55:58 ctl01 kernel: [    0.172004] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 16 16:55:58 ctl01 kernel: [    0.173469] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 16 16:55:58 ctl01 kernel: [    0.174978] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 16 16:55:58 ctl01 kernel: [    0.176004] pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 16 16:55:58 ctl01 kernel: [    0.177250] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 16 16:55:58 ctl01 kernel: [    0.177851] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 16 16:55:58 ctl01 kernel: [    0.178511] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 16 16:55:58 ctl01 kernel: [    0.184800] pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Nov 16 16:55:58 ctl01 kernel: [    0.187971] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Nov 16 16:55:58 ctl01 kernel: [    0.188004] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Nov 16 16:55:58 ctl01 kernel: [    0.189342] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Nov 16 16:55:58 ctl01 kernel: [    0.190650] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Nov 16 16:55:58 ctl01 kernel: [    0.192014] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 16 16:55:58 ctl01 kernel: [    0.192527] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 16 16:55:58 ctl01 kernel: [    0.194386] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 16 16:55:58 ctl01 kernel: [    0.195993] pci 0000:00:02.0: [1013:00b8] type 00 class 0x030000
Nov 16 16:55:58 ctl01 kernel: [    0.198926] pci 0000:00:02.0: reg 0x10: [mem 0xfc000000-0xfdffffff pref]
Nov 16 16:55:58 ctl01 kernel: [    0.201334] pci 0000:00:02.0: reg 0x14: [mem 0xfebd0000-0xfebd0fff]
Nov 16 16:55:58 ctl01 kernel: [    0.214562] pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 16 16:55:58 ctl01 kernel: [    0.214802] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:58 ctl01 kernel: [    0.216005] pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc05f]
Nov 16 16:55:58 ctl01 kernel: [    0.218441] pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 16 16:55:58 ctl01 kernel: [    0.228005] pci 0000:00:03.0: reg 0x30: [mem 0xfeac0000-0xfeafffff pref]
Nov 16 16:55:58 ctl01 kernel: [    0.228378] pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:58 ctl01 kernel: [    0.231907] pci 0000:00:04.0: reg 0x10: [io  0xc060-0xc07f]
Nov 16 16:55:58 ctl01 kernel: [    0.232831] pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 16 16:55:58 ctl01 kernel: [    0.243197] pci 0000:00:04.0: reg 0x30: [mem 0xfeb00000-0xfeb3ffff pref]
Nov 16 16:55:58 ctl01 kernel: [    0.243586] pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:58 ctl01 kernel: [    0.244925] pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Nov 16 16:55:58 ctl01 kernel: [    0.246611] pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 16 16:55:58 ctl01 kernel: [    0.256006] pci 0000:00:05.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
Nov 16 16:55:58 ctl01 kernel: [    0.256386] pci 0000:00:06.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:58 ctl01 kernel: [    0.258169] pci 0000:00:06.0: reg 0x10: [io  0xc0a0-0xc0bf]
Nov 16 16:55:58 ctl01 kernel: [    0.259883] pci 0000:00:06.0: reg 0x14: [mem 0xfebd4000-0xfebd4fff]
Nov 16 16:55:58 ctl01 kernel: [    0.270795] pci 0000:00:06.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 16 16:55:58 ctl01 kernel: [    0.271188] pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 16 16:55:58 ctl01 kernel: [    0.272929] pci 0000:00:07.0: reg 0x10: [io  0xc000-0xc03f]
Nov 16 16:55:58 ctl01 kernel: [    0.274821] pci 0000:00:07.0: reg 0x14: [mem 0xfebd5000-0xfebd5fff]
Nov 16 16:55:58 ctl01 kernel: [    0.284191] pci 0000:00:08.0: [8086:2934] type 00 class 0x0c0300
Nov 16 16:55:58 ctl01 kernel: [    0.289889] pci 0000:00:08.0: reg 0x20: [io  0xc0c0-0xc0df]
Nov 16 16:55:58 ctl01 kernel: [    0.291899] pci 0000:00:08.1: [8086:2935] type 00 class 0x0c0300
Nov 16 16:55:58 ctl01 kernel: [    0.296005] pci 0000:00:08.1: reg 0x20: [io  0xc0e0-0xc0ff]
Nov 16 16:55:58 ctl01 kernel: [    0.299146] pci 0000:00:08.2: [8086:2936] type 00 class 0x0c0300
Nov 16 16:55:58 ctl01 kernel: [    0.303620] pci 0000:00:08.2: reg 0x20: [io  0xc100-0xc11f]
Nov 16 16:55:58 ctl01 kernel: [    0.305642] pci 0000:00:08.7: [8086:293a] type 00 class 0x0c0320
Nov 16 16:55:58 ctl01 kernel: [    0.306974] pci 0000:00:08.7: reg 0x10: [mem 0xfebd6000-0xfebd6fff]
Nov 16 16:55:58 ctl01 kernel: [    0.313048] pci 0000:00:09.0: [1af4:1002] type 00 class 0x00ff00
Nov 16 16:55:58 ctl01 kernel: [    0.314002] pci 0000:00:09.0: reg 0x10: [io  0xc120-0xc13f]
Nov 16 16:55:58 ctl01 kernel: [    0.321214] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
Nov 16 16:55:58 ctl01 kernel: [    0.322524] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
Nov 16 16:55:58 ctl01 kernel: [    0.323749] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
Nov 16 16:55:58 ctl01 kernel: [    0.324123] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
Nov 16 16:55:58 ctl01 kernel: [    0.325360] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
Nov 16 16:55:58 ctl01 kernel: [    0.328481] SCSI subsystem initialized
Nov 16 16:55:58 ctl01 kernel: [    0.329519] libata version 3.00 loaded.
Nov 16 16:55:58 ctl01 kernel: [    0.329519] pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 16 16:55:58 ctl01 kernel: [    0.329519] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 16 16:55:58 ctl01 kernel: [    0.332008] pci 0000:00:02.0: vgaarb: bridge control possible
Nov 16 16:55:58 ctl01 kernel: [    0.333273] vgaarb: loaded
Nov 16 16:55:58 ctl01 kernel: [    0.334012] ACPI: bus type USB registered
Nov 16 16:55:58 ctl01 kernel: [    0.334959] usbcore: registered new interface driver usbfs
Nov 16 16:55:58 ctl01 kernel: [    0.336011] usbcore: registered new interface driver hub
Nov 16 16:55:58 ctl01 kernel: [    0.337198] usbcore: registered new device driver usb
Nov 16 16:55:58 ctl01 kernel: [    0.338378] EDAC MC: Ver: 3.0.0
Nov 16 16:55:58 ctl01 kernel: [    0.338378] PCI: Using ACPI for IRQ routing
Nov 16 16:55:58 ctl01 kernel: [    0.340005] PCI: pci_cache_line_size set to 64 bytes
Nov 16 16:55:58 ctl01 kernel: [    0.340288] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 16 16:55:58 ctl01 kernel: [    0.340289] e820: reserve RAM buffer [mem 0xbffdf000-0xbfffffff]
Nov 16 16:55:58 ctl01 kernel: [    0.340377] NetLabel: Initializing
Nov 16 16:55:58 ctl01 kernel: [    0.341182] NetLabel:  domain hash size = 128
Nov 16 16:55:58 ctl01 kernel: [    0.342060] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 16 16:55:58 ctl01 kernel: [    0.343221] NetLabel:  unlabeled traffic allowed by default
Nov 16 16:55:58 ctl01 kernel: [    0.344099] clocksource: Switched to clocksource kvm-clock
Nov 16 16:55:58 ctl01 kernel: [    0.355510] VFS: Disk quotas dquot_6.6.0
Nov 16 16:55:58 ctl01 kernel: [    0.356467] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.357813] AppArmor: AppArmor Filesystem Enabled
Nov 16 16:55:58 ctl01 kernel: [    0.358730] pnp: PnP ACPI init
Nov 16 16:55:58 ctl01 kernel: [    0.359408] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
Nov 16 16:55:58 ctl01 kernel: [    0.359451] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
Nov 16 16:55:58 ctl01 kernel: [    0.359479] pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
Nov 16 16:55:58 ctl01 kernel: [    0.359505] pnp 00:03: [dma 2]
Nov 16 16:55:58 ctl01 kernel: [    0.359514] pnp 00:03: Plug and Play ACPI device, IDs PNP0700 (active)
Nov 16 16:55:58 ctl01 kernel: [    0.359591] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
Nov 16 16:55:58 ctl01 kernel: [    0.359814] pnp: PnP ACPI: found 5 devices
Nov 16 16:55:58 ctl01 kernel: [    0.366947] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 16 16:55:58 ctl01 kernel: [    0.368632] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 16 16:55:58 ctl01 kernel: [    0.368633] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 16 16:55:58 ctl01 kernel: [    0.368634] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 16 16:55:58 ctl01 kernel: [    0.368634] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 16 16:55:58 ctl01 kernel: [    0.368689] NET: Registered protocol family 2
Nov 16 16:55:58 ctl01 kernel: [    0.369721] TCP established hash table entries: 131072 (order: 8, 1048576 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.371282] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.372730] TCP: Hash tables configured (established 131072 bind 65536)
Nov 16 16:55:58 ctl01 kernel: [    0.373996] UDP hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.375153] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:55:58 ctl01 kernel: [    0.376481] NET: Registered protocol family 1
Nov 16 16:55:58 ctl01 kernel: [    0.377330] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 16 16:55:58 ctl01 kernel: [    0.378433] pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 16 16:55:58 ctl01 kernel: [    0.379518] pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 16 16:55:58 ctl01 kernel: [    0.380704] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 16 16:55:58 ctl01 kernel: [    0.404595] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
Nov 16 16:55:58 ctl01 kernel: [    0.451481] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
Nov 16 16:55:58 ctl01 kernel: [    0.497681] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
Nov 16 16:55:58 ctl01 kernel: [    0.543878] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
Nov 16 16:55:58 ctl01 kernel: [    0.568100] PCI: CLS 0 bytes, default 64
Nov 16 16:55:58 ctl01 kernel: [    0.568132] Unpacking initramfs...
Nov 16 16:55:58 ctl01 kernel: [    0.788106] Freeing initrd memory: 19152K
Nov 16 16:55:58 ctl01 kernel: [    0.789004] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 16 16:55:58 ctl01 kernel: [    0.790227] software IO TLB: mapped [mem 0xbbfdf000-0xbffdf000] (64MB)
Nov 16 16:55:58 ctl01 kernel: [    0.791491] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3aeaff3, max_idle_ns: 440795255742 ns
Nov 16 16:55:58 ctl01 kernel: [    0.793431] Scanning for low memory corruption every 60 seconds
Nov 16 16:55:58 ctl01 kernel: [    0.795298] Initialise system trusted keyrings
Nov 16 16:55:58 ctl01 kernel: [    0.796238] Key type blacklist registered
Nov 16 16:55:58 ctl01 kernel: [    0.797150] workingset: timestamp_bits=36 max_order=22 bucket_order=0
Nov 16 16:55:58 ctl01 kernel: [    0.799297] zbud: loaded
Nov 16 16:55:58 ctl01 kernel: [    0.800477] squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 16 16:55:58 ctl01 kernel: [    0.802190] fuse init (API version 7.26)
Nov 16 16:55:58 ctl01 kernel: [    0.804623] Key type asymmetric registered
Nov 16 16:55:58 ctl01 kernel: [    0.805483] Asymmetric key parser 'x509' registered
Nov 16 16:55:58 ctl01 kernel: [    0.806456] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 16 16:55:58 ctl01 kernel: [    0.808016] io scheduler noop registered
Nov 16 16:55:58 ctl01 kernel: [    0.808877] io scheduler deadline registered
Nov 16 16:55:58 ctl01 kernel: [    0.809748] io scheduler cfq registered (default)
Nov 16 16:55:58 ctl01 kernel: [    0.811227] intel_idle: Please enable MWAIT in BIOS SETUP
Nov 16 16:55:58 ctl01 kernel: [    0.811325] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 16 16:55:58 ctl01 kernel: [    0.812909] ACPI: Power Button [PWRF]
Nov 16 16:55:58 ctl01 kernel: [    0.837577] virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:58 ctl01 kernel: [    0.863140] virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:58 ctl01 kernel: [    0.888656] virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:58 ctl01 kernel: [    0.913899] virtio-pci 0000:00:06.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:58 ctl01 kernel: [    0.939444] virtio-pci 0000:00:07.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:58 ctl01 kernel: [    0.964544] virtio-pci 0000:00:09.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:58 ctl01 kernel: [    0.966758] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
Nov 16 16:55:58 ctl01 kernel: [    0.991337] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 16 16:55:58 ctl01 kernel: [    0.994511] Linux agpgart interface v0.103
Nov 16 16:55:58 ctl01 kernel: [    0.997800] loop: module loaded
Nov 16 16:55:58 ctl01 kernel: [    0.998604] ata_piix 0000:00:01.1: version 2.13
Nov 16 16:55:58 ctl01 kernel: [    0.999754] scsi host0: ata_piix
Nov 16 16:55:58 ctl01 kernel: [    1.000727] scsi host1: ata_piix
Nov 16 16:55:58 ctl01 kernel: [    1.001522] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Nov 16 16:55:58 ctl01 kernel: [    1.002869] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Nov 16 16:55:58 ctl01 kernel: [    1.004298] libphy: Fixed MDIO Bus: probed
Nov 16 16:55:58 ctl01 kernel: [    1.005268] tun: Universal TUN/TAP device driver, 1.6
Nov 16 16:55:58 ctl01 kernel: [    1.006391] PPP generic driver version 2.4.2
Nov 16 16:55:58 ctl01 kernel: [    1.007414] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 16 16:55:58 ctl01 kernel: [    1.008801] ehci-pci: EHCI PCI platform driver
Nov 16 16:55:58 ctl01 kernel: [    1.034194] ehci-pci 0000:00:08.7: EHCI Host Controller
Nov 16 16:55:58 ctl01 kernel: [    1.042778] ehci-pci 0000:00:08.7: new USB bus registered, assigned bus number 1
Nov 16 16:55:58 ctl01 kernel: [    1.044714] ehci-pci 0000:00:08.7: irq 11, io mem 0xfebd6000
Nov 16 16:55:58 ctl01 kernel: [    1.060055] ehci-pci 0000:00:08.7: USB 2.0 started, EHCI 1.00
Nov 16 16:55:58 ctl01 kernel: [    1.061950] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Nov 16 16:55:58 ctl01 kernel: [    1.064105] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:58 ctl01 kernel: [    1.066410] usb usb1: Product: EHCI Host Controller
Nov 16 16:55:58 ctl01 kernel: [    1.067982] usb usb1: Manufacturer: Linux 4.15.0-70-generic ehci_hcd
Nov 16 16:55:58 ctl01 kernel: [    1.069339] usb usb1: SerialNumber: 0000:00:08.7
Nov 16 16:55:58 ctl01 kernel: [    1.070493] hub 1-0:1.0: USB hub found
Nov 16 16:55:58 ctl01 kernel: [    1.071366] hub 1-0:1.0: 6 ports detected
Nov 16 16:55:58 ctl01 kernel: [    1.072488] ehci-platform: EHCI generic platform driver
Nov 16 16:55:58 ctl01 kernel: [    1.073640] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Nov 16 16:55:58 ctl01 kernel: [    1.074958] ohci-pci: OHCI PCI platform driver
Nov 16 16:55:58 ctl01 kernel: [    1.075964] ohci-platform: OHCI generic platform driver
Nov 16 16:55:58 ctl01 kernel: [    1.077104] uhci_hcd: USB Universal Host Controller Interface driver
Nov 16 16:55:58 ctl01 kernel: [    1.104974] uhci_hcd 0000:00:08.0: UHCI Host Controller
Nov 16 16:55:58 ctl01 kernel: [    1.106131] uhci_hcd 0000:00:08.0: new USB bus registered, assigned bus number 2
Nov 16 16:55:58 ctl01 kernel: [    1.107749] uhci_hcd 0000:00:08.0: detected 2 ports
Nov 16 16:55:58 ctl01 kernel: [    1.108898] uhci_hcd 0000:00:08.0: irq 11, io base 0x0000c0c0
Nov 16 16:55:58 ctl01 kernel: [    1.110211] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:55:58 ctl01 kernel: [    1.111669] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:58 ctl01 kernel: [    1.113276] usb usb2: Product: UHCI Host Controller
Nov 16 16:55:58 ctl01 kernel: [    1.114346] usb usb2: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:55:58 ctl01 kernel: [    1.115805] usb usb2: SerialNumber: 0000:00:08.0
Nov 16 16:55:58 ctl01 kernel: [    1.116970] hub 2-0:1.0: USB hub found
Nov 16 16:55:58 ctl01 kernel: [    1.117861] hub 2-0:1.0: 2 ports detected
Nov 16 16:55:58 ctl01 kernel: [    1.145700] uhci_hcd 0000:00:08.1: UHCI Host Controller
Nov 16 16:55:58 ctl01 kernel: [    1.146869] uhci_hcd 0000:00:08.1: new USB bus registered, assigned bus number 3
Nov 16 16:55:58 ctl01 kernel: [    1.148541] uhci_hcd 0000:00:08.1: detected 2 ports
Nov 16 16:55:58 ctl01 kernel: [    1.149695] uhci_hcd 0000:00:08.1: irq 10, io base 0x0000c0e0
Nov 16 16:55:58 ctl01 kernel: [    1.151093] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:55:58 ctl01 kernel: [    1.152579] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:58 ctl01 kernel: [    1.154188] usb usb3: Product: UHCI Host Controller
Nov 16 16:55:58 ctl01 kernel: [    1.155281] usb usb3: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:55:58 ctl01 kernel: [    1.156683] usb usb3: SerialNumber: 0000:00:08.1
Nov 16 16:55:58 ctl01 kernel: [    1.157839] hub 3-0:1.0: USB hub found
Nov 16 16:55:58 ctl01 kernel: [    1.158752] hub 3-0:1.0: 2 ports detected
Nov 16 16:55:58 ctl01 kernel: [    1.164645] ata1.01: NODEV after polling detection
Nov 16 16:55:58 ctl01 kernel: [    1.164966] ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 16 16:55:58 ctl01 kernel: [    1.166764] ata1.00: configured for MWDMA2
Nov 16 16:55:58 ctl01 kernel: [    1.168405] scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 16 16:55:58 ctl01 kernel: [    1.171207] sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 16 16:55:58 ctl01 kernel: [    1.172696] cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 16 16:55:58 ctl01 kernel: [    1.173950] sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 16 16:55:58 ctl01 kernel: [    1.174000] sr 0:0:0:0: Attached scsi generic sg0 type 5
Nov 16 16:55:58 ctl01 kernel: [    1.197765] uhci_hcd 0000:00:08.2: UHCI Host Controller
Nov 16 16:55:58 ctl01 kernel: [    1.199138] uhci_hcd 0000:00:08.2: new USB bus registered, assigned bus number 4
Nov 16 16:55:58 ctl01 kernel: [    1.200977] uhci_hcd 0000:00:08.2: detected 2 ports
Nov 16 16:55:58 ctl01 kernel: [    1.202270] uhci_hcd 0000:00:08.2: irq 10, io base 0x0000c100
Nov 16 16:55:58 ctl01 kernel: [    1.203781] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:55:58 ctl01 kernel: [    1.205437] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:58 ctl01 kernel: [    1.207412] usb usb4: Product: UHCI Host Controller
Nov 16 16:55:58 ctl01 kernel: [    1.208654] usb usb4: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:55:58 ctl01 kernel: [    1.210182] usb usb4: SerialNumber: 0000:00:08.2
Nov 16 16:55:58 ctl01 kernel: [    1.211502] hub 4-0:1.0: USB hub found
Nov 16 16:55:58 ctl01 kernel: [    1.212546] hub 4-0:1.0: 2 ports detected
Nov 16 16:55:58 ctl01 kernel: [    1.213729] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 16 16:55:58 ctl01 kernel: [    1.216789] serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 16 16:55:58 ctl01 kernel: [    1.218038] serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 16 16:55:58 ctl01 kernel: [    1.219511] mousedev: PS/2 mouse device common for all mice
Nov 16 16:55:58 ctl01 kernel: [    1.221266] rtc_cmos 00:00: RTC can wake from S4
Nov 16 16:55:58 ctl01 kernel: [    1.223072] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 16 16:55:58 ctl01 kernel: [    1.225239] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
Nov 16 16:55:58 ctl01 kernel: [    1.226944] rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 16 16:55:58 ctl01 kernel: [    1.228594] i2c /dev entries driver
Nov 16 16:55:58 ctl01 kernel: [    1.229580] device-mapper: uevent: version 1.0.3
Nov 16 16:55:58 ctl01 kernel: [    1.230948] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: dm-devel@redhat.com
Nov 16 16:55:58 ctl01 kernel: [    1.233566] ledtrig-cpu: registered to indicate activity on CPUs
Nov 16 16:55:58 ctl01 kernel: [    1.235794] NET: Registered protocol family 10
Nov 16 16:55:58 ctl01 kernel: [    1.240838] Segment Routing with IPv6
Nov 16 16:55:58 ctl01 kernel: [    1.241875] NET: Registered protocol family 17
Nov 16 16:55:58 ctl01 kernel: [    1.243171] Key type dns_resolver registered
Nov 16 16:55:58 ctl01 kernel: [    1.245333] mce: Using 10 MCE banks
Nov 16 16:55:58 ctl01 kernel: [    1.246324] RAS: Correctable Errors collector initialized.
Nov 16 16:55:58 ctl01 kernel: [    1.247706] sched_clock: Marking stable (1245300410, 0)->(1687122563, -441822153)
Nov 16 16:55:58 ctl01 kernel: [    1.250117] registered taskstats version 1
Nov 16 16:55:58 ctl01 kernel: [    1.251253] Loading compiled-in X.509 certificates
Nov 16 16:55:58 ctl01 kernel: [    1.254945] Loaded X.509 cert 'Build time autogenerated kernel key: 1859b0531897959199376c446a0bd70df75fd1fc'
Nov 16 16:55:58 ctl01 kernel: [    1.257019] zswap: loaded using pool lzo/zbud
Nov 16 16:55:58 ctl01 kernel: [    1.261591] Key type big_key registered
Nov 16 16:55:58 ctl01 kernel: [    1.262535] Key type trusted registered
Nov 16 16:55:58 ctl01 kernel: [    1.265076] Key type encrypted registered
Nov 16 16:55:58 ctl01 kernel: [    1.266050] AppArmor: AppArmor sha1 policy hashing enabled
Nov 16 16:55:58 ctl01 kernel: [    1.267267] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
Nov 16 16:55:58 ctl01 kernel: [    1.268675] ima: Allocated hash algorithm: sha1
Nov 16 16:55:58 ctl01 kernel: [    1.269695] evm: HMAC attrs: 0x1
Nov 16 16:55:58 ctl01 kernel: [    1.270871]   Magic number: 11:820:944
Nov 16 16:55:58 ctl01 kernel: [    1.271763] tty tty1: hash matches
Nov 16 16:55:58 ctl01 kernel: [    1.272875] rtc_cmos 00:00: setting system clock to 2019-11-16 16:55:43 UTC (1573923343)
Nov 16 16:55:58 ctl01 kernel: [    1.274825] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
Nov 16 16:55:58 ctl01 kernel: [    1.276074] EDD information not available.
Nov 16 16:55:58 ctl01 kernel: [    1.625689] Freeing unused kernel image memory: 2432K
Nov 16 16:55:58 ctl01 kernel: [    1.632057] Write protecting the kernel read-only data: 20480k
Nov 16 16:55:58 ctl01 kernel: [    1.634949] Freeing unused kernel image memory: 2008K
Nov 16 16:55:58 ctl01 kernel: [    1.636867] Freeing unused kernel image memory: 1880K
Nov 16 16:55:58 ctl01 kernel: [    1.645851] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:55:58 ctl01 kernel: [    1.647378] x86/mm: Checking user space page tables
Nov 16 16:55:58 ctl01 kernel: [    1.656022] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:55:58 ctl01 kernel: [    1.744270]  vda: vda1 vda14 vda15
Nov 16 16:55:58 ctl01 kernel: [    1.745760] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 16 16:55:58 ctl01 kernel: [    1.748217] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 16 16:55:58 ctl01 kernel: [    1.752833] FDC 0 is a S82078B
Nov 16 16:55:58 ctl01 kernel: [    1.757854] AVX version of gcm_enc/dec engaged.
Nov 16 16:55:58 ctl01 kernel: [    1.759023] AES CTR mode by8 optimization enabled
Nov 16 16:55:58 ctl01 kernel: [    1.762024] virtio_net virtio2 ens5: renamed from eth2
Nov 16 16:55:58 ctl01 kernel: [    1.796151] virtio_net virtio1 ens4: renamed from eth1
Nov 16 16:55:58 ctl01 kernel: [    1.820267] virtio_net virtio3 ens6: renamed from eth3
Nov 16 16:55:58 ctl01 kernel: [    1.856161] virtio_net virtio0 ens3: renamed from eth0
Nov 16 16:55:58 ctl01 kernel: [    3.448025] raid6: sse2x1   gen()  6605 MB/s
Nov 16 16:55:58 ctl01 kernel: [    3.496010] raid6: sse2x1   xor()  4300 MB/s
Nov 16 16:55:58 ctl01 kernel: [    3.544011] raid6: sse2x2   gen()  6933 MB/s
Nov 16 16:55:58 ctl01 kernel: [    3.592009] raid6: sse2x2   xor()  6354 MB/s
Nov 16 16:55:58 ctl01 kernel: [    3.640011] raid6: sse2x4   gen() 11557 MB/s
Nov 16 16:55:58 ctl01 kernel: [    3.688009] raid6: sse2x4   xor()  7696 MB/s
Nov 16 16:55:58 ctl01 kernel: [    3.689099] raid6: using algorithm sse2x4 gen() 11557 MB/s
Nov 16 16:55:58 ctl01 kernel: [    3.690439] raid6: .... xor() 7696 MB/s, rmw enabled
Nov 16 16:55:58 ctl01 kernel: [    3.691667] raid6: using ssse3x2 recovery algorithm
Nov 16 16:55:58 ctl01 kernel: [    3.694511] xor: automatically using best checksumming function   avx       
Nov 16 16:55:58 ctl01 kernel: [    3.697705] async_tx: api initialized (async)
Nov 16 16:55:58 ctl01 kernel: [    3.755401] Btrfs loaded, crc32c=crc32c-intel
Nov 16 16:55:58 ctl01 kernel: [    4.018578] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
Nov 16 16:55:58 ctl01 kernel: [    4.029207] random: fast init done
Nov 16 16:55:58 ctl01 kernel: [    5.012252] ip_tables: (C) 2000-2006 Netfilter Core Team
Nov 16 16:55:58 ctl01 kernel: [    5.218725] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:55:58 ctl01 kernel: [    5.227522] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Nov 16 16:55:58 ctl01 kernel: [    5.233248] systemd[1]: Detected virtualization kvm.
Nov 16 16:55:58 ctl01 kernel: [    5.234635] systemd[1]: Detected architecture x86-64.
Nov 16 16:55:58 ctl01 kernel: [    5.236089] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:55:58 ctl01 kernel: [    5.237852] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:55:58 ctl01 kernel: [    5.369642] systemd[1]: Set hostname to <ctl01>.
Nov 16 16:55:58 ctl01 kernel: [    6.097960] systemd[1]: Reached target User and Group Name Lookups.
Nov 16 16:55:58 ctl01 kernel: [    6.103011] systemd[1]: Created slice User and Session Slice.
Nov 16 16:55:58 ctl01 kernel: [    6.106364] systemd[1]: Created slice System Slice.
Nov 16 16:55:58 ctl01 kernel: [    6.108966] systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
Nov 16 16:55:58 ctl01 kernel: [    6.111951] systemd[1]: Listening on LVM2 metadata daemon socket.
Nov 16 16:55:58 ctl01 kernel: [    6.114643] systemd[1]: Listening on Syslog Socket.
Nov 16 16:55:58 ctl01 kernel: [    6.173618] Loading iSCSI transport class v2.0-870.
Nov 16 16:55:58 ctl01 kernel: [    6.185092] iscsi: registered transport (tcp)
Nov 16 16:55:58 ctl01 kernel: [    6.223614] iscsi: registered transport (iser)
Nov 16 16:55:58 ctl01 kernel: [    6.226145] EXT4-fs (vda1): re-mounted. Opts: (null)
Nov 16 16:55:58 ctl01 kernel: [    6.249813] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Nov 16 16:55:58 ctl01 kernel: [    6.377092] systemd-journald[468]: Received request to flush runtime journal from PID 1
Nov 16 16:55:58 ctl01 kernel: [    7.013472] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 16 16:55:58 ctl01 kernel: [    7.014752] br-mgmt: port 1(ens4) entered blocking state
Nov 16 16:55:58 ctl01 kernel: [    7.014753] br-mgmt: port 1(ens4) entered disabled state
Nov 16 16:55:58 ctl01 kernel: [    7.014814] device ens4 entered promiscuous mode
Nov 16 16:55:58 ctl01 kernel: [    8.140597] audit: type=1400 audit(1573923350.364:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=1061 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [    8.143205] audit: type=1400 audit(1573923350.364:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=1062 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [    8.143208] audit: type=1400 audit(1573923350.364:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=1062 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [    8.143211] audit: type=1400 audit(1573923350.364:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=1062 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [    8.147292] audit: type=1400 audit(1573923350.368:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=1063 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [    8.147295] audit: type=1400 audit(1573923350.368:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=1063 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [    8.147899] audit: type=1400 audit(1573923350.368:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=1059 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [    8.147902] audit: type=1400 audit(1573923350.368:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-cgns" pid=1059 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [    8.147905] audit: type=1400 audit(1573923350.368:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=1059 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [    8.147907] audit: type=1400 audit(1573923350.368:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=1059 comm="apparmor_parser"
Nov 16 16:55:58 ctl01 kernel: [   11.293178] ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 16 16:55:58 ctl01 kernel: [   11.297637] ISO 9660 Extensions: RRIP_1991A
Nov 16 16:55:58 ctl01 kernel: [   11.633135] br-mgmt: port 1(ens4) entered blocking state
Nov 16 16:55:58 ctl01 kernel: [   11.633140] br-mgmt: port 1(ens4) entered forwarding state
Nov 16 16:55:58 ctl01 kernel: [   11.633247] IPv6: ADDRCONF(NETDEV_UP): br-mgmt: link is not ready
Nov 16 16:55:58 ctl01 kernel: [   11.633270] IPv6: ADDRCONF(NETDEV_CHANGE): br-mgmt: link becomes ready
Nov 16 16:55:58 ctl01 kernel: [   11.703538] 8021q: 802.1Q VLAN Support v1.8
Nov 16 16:55:58 ctl01 kernel: [   11.703546] 8021q: adding VLAN 0 to HW filter on device ens3
Nov 16 16:55:58 ctl01 kernel: [   11.703589] 8021q: adding VLAN 0 to HW filter on device ens4
Nov 16 16:55:58 ctl01 kernel: [   11.703615] 8021q: adding VLAN 0 to HW filter on device ens5
Nov 16 16:55:58 ctl01 kernel: [   11.921340] openvswitch: Open vSwitch switching datapath
Nov 16 16:55:58 ctl01 kernel: [   15.145521] new mount options do not match the existing superblock, will be ignored
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +-----------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |  br-mgmt  |  True |        172.16.10.36        | 255.255.255.0 | global | 52:54:00:9d:e7:5c |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |  br-mgmt  |  True | fe80::5054:ff:fe9d:e75c/64 |       .       |  link  | 52:54:00:9d:e7:5c |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |    ens3   |  True |       192.168.11.21        | 255.255.255.0 | global | 52:54:00:9c:aa:a4 |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |    ens3   |  True | fe80::5054:ff:fe9c:aaa4/64 |       .       |  link  | 52:54:00:9c:aa:a4 |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |    ens4   |  True | fe80::5054:ff:fe9d:e75c/64 |       .       |  link  | 52:54:00:9d:e7:5c |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |    ens5   |  True | fe80::5054:ff:fe08:c5ff/64 |       .       |  link  | 52:54:00:08:c5:ff |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: | ens5.1000 |  True | fe80::5054:ff:fe08:c5ff/64 |       .       |  link  | 52:54:00:08:c5:ff |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |    ens6   | False |             .              |       .       |   .    | 52:54:00:c0:ab:72 |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |     lo    |  True |         127.0.0.1          |   255.0.0.0   |  host  |         .         |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |     lo    |  True |          ::1/128           |       .       |  host  |         .         |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +-----------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: | Route | Destination  |   Gateway    |    Genmask    | Interface | Flags |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   0   |   0.0.0.0    | 192.168.11.3 |    0.0.0.0    |    ens3   |   UG  |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   1   |  10.254.0.0  | 172.16.10.36 |  255.255.0.0  |  br-mgmt  |   UG  |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   2   | 172.16.10.0  |   0.0.0.0    | 255.255.255.0 |  br-mgmt  |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   3   | 192.168.11.0 |   0.0.0.0    | 255.255.255.0 |    ens3   |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   1   |  fe80::/64  |    ::   |    ens4   |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   2   |  fe80::/64  |    ::   |  br-mgmt  |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   3   |  fe80::/64  |    ::   |    ens3   |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   4   |  fe80::/64  |    ::   |    ens5   |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   5   |  fe80::/64  |    ::   | ens5.1000 |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   7   |    local    |    ::   |    ens4   |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   8   |   ff00::/8  |    ::   |    ens4   |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   9   |   ff00::/8  |    ::   |  br-mgmt  |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   10  |   ff00::/8  |    ::   |    ens3   |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   11  |   ff00::/8  |    ::   |    ens5   |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: |   12  |   ff00::/8  |    ::   | ens5.1000 |   U   |
Nov 16 16:55:58 ctl01 cloud-init[1886]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:55:58 ctl01 systemd[1]: Started Initial cloud-init job (metadata service crawler).
Nov 16 16:55:58 ctl01 systemd[1]: Reached target System Initialization.
Nov 16 16:55:58 ctl01 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 16 16:55:58 ctl01 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 16 16:55:58 ctl01 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 16 16:55:58 ctl01 systemd[1]: Started ACPI Events Check.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Paths.
Nov 16 16:55:58 ctl01 systemd[1]: Starting LXD - unix socket.
Nov 16 16:55:58 ctl01 systemd[1]: Started Daily apt download activities.
Nov 16 16:55:58 ctl01 systemd[1]: Started Daily apt upgrade and clean activities.
Nov 16 16:55:58 ctl01 systemd[1]: Listening on UUID daemon activation socket.
Nov 16 16:55:58 ctl01 systemd[1]: Listening on ACPID Listen Socket.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Socket activation for snappy daemon.
Nov 16 16:55:58 ctl01 systemd[1]: Started Discard unused blocks once a week.
Nov 16 16:55:58 ctl01 systemd[1]: Started Message of the Day.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Timers.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Cloud-config availability.
Nov 16 16:55:58 ctl01 systemd[1]: Listening on LXD - unix socket.
Nov 16 16:55:58 ctl01 systemd[1]: Listening on Socket activation for snappy daemon.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Sockets.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Basic System.
Nov 16 16:55:58 ctl01 systemd[1]: Started irqbalance daemon.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Open vSwitch...
Nov 16 16:55:58 ctl01 systemd[1]: Started D-Bus System Message Bus.
Nov 16 16:55:58 ctl01 dbus-daemon[1981]: [system] AppArmor D-Bus mediation is enabled
Nov 16 16:55:58 ctl01 systemd[1]: Starting LSB: Record successful boot for GRUB...
Nov 16 16:55:58 ctl01 systemd[1]: Starting Snappy daemon...
Nov 16 16:55:58 ctl01 systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Accounts Service...
Nov 16 16:55:58 ctl01 systemd[1]: Starting LSB: Set sysfs variables from /etc/sysfs.conf...
Nov 16 16:55:58 ctl01 systemd[1]: Started Regular background program processing daemon.
Nov 16 16:55:58 ctl01 systemd[1]: Starting System Logging Service...
Nov 16 16:55:58 ctl01 systemd[1]: Started FUSE filesystem for LXC.
Nov 16 16:55:58 ctl01 systemd[1]: Started Deferred execution scheduler.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Login Service...
Nov 16 16:55:58 ctl01 systemd[1]: Starting LXD - container startup/shutdown...
Nov 16 16:55:58 ctl01 systemd[1]: Started Open vSwitch.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Network.
Nov 16 16:55:58 ctl01 systemd[1]: Starting dnsmasq - A lightweight DHCP and caching DNS server...
Nov 16 16:55:58 ctl01 cron[2031]: (CRON) INFO (pidfile fd = 3)
Nov 16 16:55:58 ctl01 sysfsutils[2022]:  * Setting sysfs variables...
Nov 16 16:55:58 ctl01 grub-common[1998]:  * Recording successful boot for GRUB
Nov 16 16:55:58 ctl01 systemd[1]: Starting The Salt Minion...
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Network is Online.
Nov 16 16:55:58 ctl01 sysfsutils[2022]:    ...done.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Availability of block devices...
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Remote File Systems (Pre).
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Remote File Systems.
Nov 16 16:55:58 ctl01 systemd[1]: Starting LSB: automatic crash report generation...
Nov 16 16:55:58 ctl01 systemd[1]: Starting Permit User Sessions...
Nov 16 16:55:58 ctl01 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 16 16:55:58 ctl01 systemd[1]: Started LSB: Set sysfs variables from /etc/sysfs.conf.
Nov 16 16:55:58 ctl01 cron[2031]: (CRON) INFO (Running @reboot jobs)
Nov 16 16:55:58 ctl01 systemd[1]: Started Availability of block devices.
Nov 16 16:55:58 ctl01 systemd[1]: Started Permit User Sessions.
Nov 16 16:55:58 ctl01 systemd[1]: Started Login Service.
Nov 16 16:55:58 ctl01 systemd[1]: Started Unattended Upgrades Shutdown.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Hold until boot process finishes up...
Nov 16 16:55:58 ctl01 systemd[1]: Starting Terminate Plymouth Boot Screen...
Nov 16 16:55:58 ctl01 systemd[1]: Started Hold until boot process finishes up.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Set console scheme...
Nov 16 16:55:58 ctl01 systemd[1]: Started Serial Getty on ttyS0.
Nov 16 16:55:58 ctl01 lxcfs[2040]: mount namespace: 5
Nov 16 16:55:58 ctl01 lxcfs[2040]: hierarchies:
Nov 16 16:55:58 ctl01 lxcfs[2040]:   0: fd:   6: perf_event
Nov 16 16:55:58 ctl01 lxcfs[2040]:   1: fd:   7: blkio
Nov 16 16:55:58 ctl01 lxcfs[2040]:   2: fd:   8: rdma
Nov 16 16:55:58 ctl01 lxcfs[2040]:   3: fd:   9: net_cls,net_prio
Nov 16 16:55:58 ctl01 lxcfs[2040]:   4: fd:  10: hugetlb
Nov 16 16:55:58 ctl01 lxcfs[2040]:   5: fd:  11: cpu,cpuacct
Nov 16 16:55:58 ctl01 lxcfs[2040]:   6: fd:  12: devices
Nov 16 16:55:58 ctl01 lxcfs[2040]:   7: fd:  13: cpuset
Nov 16 16:55:58 ctl01 lxcfs[2040]:   8: fd:  14: freezer
Nov 16 16:55:58 ctl01 lxcfs[2040]:   9: fd:  15: memory
Nov 16 16:55:58 ctl01 lxcfs[2040]:  10: fd:  16: pids
Nov 16 16:55:58 ctl01 lxcfs[2040]:  11: fd:  17: name=systemd
Nov 16 16:55:58 ctl01 lxcfs[2040]:  12: fd:  18: unified
Nov 16 16:55:58 ctl01 systemd[1]: Started Terminate Plymouth Boot Screen.
Nov 16 16:55:58 ctl01 apport[2102]:  * Starting automatic crash report generation: apport
Nov 16 16:55:58 ctl01 systemd[1]: Started Set console scheme.
Nov 16 16:55:58 ctl01 systemd[1]: Created slice system-getty.slice.
Nov 16 16:55:58 ctl01 dnsmasq[2054]: dnsmasq: syntax check OK.
Nov 16 16:55:58 ctl01 systemd[1]: Started Getty on tty1.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Login Prompts.
Nov 16 16:55:58 ctl01 smartd[2012]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-70-generic] (local build)
Nov 16 16:55:58 ctl01 smartd[2012]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Nov 16 16:55:58 ctl01 smartd[2012]: Opened configuration file /etc/smartd.conf
Nov 16 16:55:58 ctl01 apport[2102]:    ...done.
Nov 16 16:55:58 ctl01 systemd[1]: Started LSB: automatic crash report generation.
Nov 16 16:55:58 ctl01 grub-common[1998]:    ...done.
Nov 16 16:55:58 ctl01 systemd[1]: Started LSB: Record successful boot for GRUB.
Nov 16 16:55:58 ctl01 smartd[2012]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Nov 16 16:55:58 ctl01 smartd[2012]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Nov 16 16:55:58 ctl01 smartd[2012]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Nov 16 16:55:58 ctl01 smartd[2012]: In the system's table of devices NO devices found to scan
Nov 16 16:55:58 ctl01 smartd[2012]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
Nov 16 16:55:58 ctl01 systemd[1]: smartd.service: Main process exited, code=exited, status=17/n/a
Nov 16 16:55:58 ctl01 systemd[1]: smartd.service: Failed with result 'exit-code'.
Nov 16 16:55:58 ctl01 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.32.0]
Nov 16 16:55:58 ctl01 systemd[1]: Started System Logging Service.
Nov 16 16:55:58 ctl01 rsyslogd: rsyslogd's groupid changed to 106
Nov 16 16:55:58 ctl01 rsyslogd: rsyslogd's userid changed to 102
Nov 16 16:55:58 ctl01 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="2037" x-info="http://www.rsyslog.com"] start
Nov 16 16:55:58 ctl01 dbus-daemon[1981]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.2' (uid=0 pid=2016 comm="/usr/lib/accountsservice/accounts-daemon " label="unconfined")
Nov 16 16:55:58 ctl01 systemd[1]: Starting Authorization Manager...
Nov 16 16:55:58 ctl01 polkitd[2271]: started daemon version 0.105 using authority implementation `local' version `0.105'
Nov 16 16:55:58 ctl01 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 16 16:55:58 ctl01 systemd[1]: Started Authorization Manager.
Nov 16 16:55:58 ctl01 accounts-daemon[2016]: started daemon version 0.6.45
Nov 16 16:55:58 ctl01 systemd[1]: Started Accounts Service.
Nov 16 16:55:58 ctl01 dnsmasq[2295]: started, version 2.79 cachesize 150
Nov 16 16:55:58 ctl01 dnsmasq[2295]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
Nov 16 16:55:58 ctl01 dnsmasq[2295]: reading /etc/resolv.conf
Nov 16 16:55:58 ctl01 dnsmasq[2295]: using nameserver 8.8.8.8#53
Nov 16 16:55:58 ctl01 dnsmasq[2295]: read /etc/hosts - 11 addresses
Nov 16 16:55:58 ctl01 systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.
Nov 16 16:55:58 ctl01 systemd[1]: Reached target Host and Network Name Lookups.
Nov 16 16:55:58 ctl01 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Nov 16 16:55:58 ctl01 systemd[1]: Started OpenBSD Secure Shell server.
Nov 16 16:55:58 ctl01 snapd[2005]: AppArmor status: apparmor is enabled and all features are available
Nov 16 16:55:58 ctl01 snapd[2005]: patch.go:64: Patching system state level 6 to sublevel 1...
Nov 16 16:55:58 ctl01 snapd[2005]: daemon.go:338: started snapd/2.40+18.04 (series 16; classic) ubuntu/18.04 (amd64) linux/4.15.0-70-generic.
Nov 16 16:55:58 ctl01 systemd[1]: Started LXD - container startup/shutdown.
Nov 16 16:55:59 ctl01 systemd[1]: Started Snappy daemon.
Nov 16 16:55:59 ctl01 systemd[1]: Starting Wait until snapd is fully seeded...
Nov 16 16:55:59 ctl01 systemd[1]: Started Wait until snapd is fully seeded.
Nov 16 16:55:59 ctl01 systemd[1]: Starting Apply the settings specified in cloud-config...
Nov 16 16:55:59 ctl01 postfix/postfix-script[2581]: starting the Postfix mail system
Nov 16 16:55:59 ctl01 postfix/master[2585]: daemon started -- version 3.3.0, configuration /etc/postfix
Nov 16 16:55:59 ctl01 systemd[1]: Started Postfix Mail Transport Agent (instance -).
Nov 16 16:55:59 ctl01 systemd[1]: Starting Postfix Mail Transport Agent...
Nov 16 16:55:59 ctl01 systemd[1]: Started Postfix Mail Transport Agent.
Nov 16 16:55:59 ctl01 systemd[1]: Started The Salt Minion.
Nov 16 16:55:59 ctl01 systemd[1]: Reached target Multi-User System.
Nov 16 16:55:59 ctl01 systemd[1]: Reached target Graphical Interface.
Nov 16 16:55:59 ctl01 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Nov 16 16:55:59 ctl01 systemd[1]: Started Update UTMP about System Runlevel Changes.
Nov 16 16:55:59 ctl01 cloud-init[2506]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:config' at Sat, 16 Nov 2019 16:55:59 +0000. Up 17.37 seconds.
Nov 16 16:55:59 ctl01 systemd[1]: Started Apply the settings specified in cloud-config.
Nov 16 16:55:59 ctl01 systemd[1]: Starting Execute cloud user/final scripts...
Nov 16 16:56:00 ctl01 cloud-init[2642]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:final' at Sat, 16 Nov 2019 16:56:00 +0000. Up 17.91 seconds.
Nov 16 16:56:00 ctl01 cloud-init[2642]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 finished at Sat, 16 Nov 2019 16:56:00 +0000. Datasource DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 18.00 seconds
Nov 16 16:56:00 ctl01 systemd[1]: Started Execute cloud user/final scripts.
Nov 16 16:56:00 ctl01 systemd[1]: Reached target Cloud-init target.
Nov 16 16:56:00 ctl01 systemd[1]: Startup finished in 4.765s (kernel) + 13.288s (userspace) = 18.054s.
Nov 16 16:56:01 ctl01 kernel: [   18.841356] random: crng init done
Nov 16 16:56:01 ctl01 kernel: [   18.841360] random: 7 urandom warning(s) missed due to ratelimiting
Nov 16 16:56:04 ctl01 snapd[2005]: daemon.go:576: gracefully waiting for running hooks
Nov 16 16:56:04 ctl01 snapd[2005]: daemon.go:578: done waiting for running hooks
Nov 16 16:56:04 ctl01 snapd[2005]: daemon stop requested to wait for socket activation
Nov 16 16:56:19 ctl01 systemd-timesyncd[985]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Nov 16 16:56:34 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install python-oauth python-m2crypto.
Nov 16 16:56:40 ctl01 systemd[1]: Reloading.
Nov 16 16:56:41 ctl01 salt-minion[2061]: [WARNING ] The function "module.run" is using its deprecated version and will expire in version "Sodium".
Nov 16 16:56:42 ctl01 salt-minion[2061]: ..................................................................................................................++++
Nov 16 16:56:43 ctl01 salt-minion[2061]: .......................................................................................++++
Nov 16 16:56:44 ctl01 salt-minion[2061]: [WARNING ] State for file: /etc/kubernetes/ssl/ca-kubernetes.crt - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:56:44 ctl01 salt-minion[2061]: ...........++++
Nov 16 16:56:45 ctl01 salt-minion[2061]: ........................................................................................................................................................++++
Nov 16 16:56:47 ctl01 salt-minion[2061]: .............................................................................................................................................................................................................................................................++++
Nov 16 16:56:48 ctl01 salt-minion[2061]: ..........................................................++++
Nov 16 16:56:48 ctl01 salt-minion[2061]: [WARNING ] State for file: /var/lib/etcd/ca.pem - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:56:49 ctl01 salt-minion[2061]: .................................................................++++
Nov 16 16:56:51 ctl01 salt-minion[2061]: ................................................................................................................................................................................................................................................................................++++
Nov 16 16:56:52 ctl01 salt-minion[2061]: ........................++++
Nov 16 16:56:53 ctl01 salt-minion[2061]: ............................................................................................................................................................................................................................................................................++++
Nov 16 16:56:55 ctl01 salt-minion[2061]: .....................................................................................................................++++
Nov 16 16:56:55 ctl01 salt-minion[2061]: ....................++++
Nov 16 16:56:56 ctl01 salt-minion[2061]: .......................................................................................++++
Nov 16 16:56:57 ctl01 salt-minion[2061]: ............................................................++++
Nov 16 16:56:58 ctl01 salt-minion[2061]: ...........................................................................................................................................................................................++++
Nov 16 16:56:59 ctl01 salt-minion[2061]: ..........................++++
Nov 16 16:56:59 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install ntp.
Nov 16 16:57:02 ctl01 systemd[1]: Reloading.
Nov 16 16:57:03 ctl01 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:57:03 ctl01 systemd[1]: Started ntp-systemd-netif.path.
Nov 16 16:57:03 ctl01 kernel: [   82.143185] kauditd_printk_skb: 5 callbacks suppressed
Nov 16 16:57:03 ctl01 kernel: [   82.143186] audit: type=1400 audit(1573923423.325:17): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/ntpd" pid=3978 comm="apparmor_parser"
Nov 16 16:57:03 ctl01 systemd[1]: Reloading.
Nov 16 16:57:03 ctl01 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:57:03 ctl01 systemd[1]: Starting Network Time Service...
Nov 16 16:57:03 ctl01 systemd[1]: Stopping Network Time Synchronization...
Nov 16 16:57:03 ctl01 ntpd[4086]: ntpd 4.2.8p10@1.3728-o (1): Starting
Nov 16 16:57:03 ctl01 ntpd[4086]: Command line: /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 112:118
Nov 16 16:57:03 ctl01 systemd[1]: Started Network Time Service.
Nov 16 16:57:03 ctl01 systemd[1]: Stopped Network Time Synchronization.
Nov 16 16:57:03 ctl01 systemd[1]: Reloading.
Nov 16 16:57:04 ctl01 ntpd[4090]: proto: precision = 0.061 usec (-24)
Nov 16 16:57:04 ctl01 ntpd[4090]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
Nov 16 16:57:04 ctl01 ntpd[4090]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2020-06-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen and drop on 0 v6wildcard [::]:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen normally on 2 lo 127.0.0.1:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen normally on 3 ens3 192.168.11.21:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen normally on 4 br-mgmt 172.16.10.36:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen normally on 5 lo [::1]:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen normally on 6 ens3 [fe80::5054:ff:fe9c:aaa4%2]:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen normally on 7 ens4 [fe80::5054:ff:fe9d:e75c%3]:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen normally on 8 ens5 [fe80::5054:ff:fe08:c5ff%4]:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen normally on 9 br-mgmt [fe80::5054:ff:fe9d:e75c%6]:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listen normally on 10 ens5.1000 [fe80::5054:ff:fe08:c5ff%7]:123
Nov 16 16:57:04 ctl01 ntpd[4090]: Listening on routing socket on fd #27 for interface updates
Nov 16 16:57:05 ctl01 ntpd[4090]: Soliciting pool server 162.159.200.123
Nov 16 16:57:06 ctl01 ntpd[4090]: Soliciting pool server 192.36.143.130
Nov 16 16:57:06 ctl01 ntpd[4090]: Soliciting pool server 83.168.200.198
Nov 16 16:57:07 ctl01 ntpd[4090]: Soliciting pool server 193.182.111.12
Nov 16 16:57:07 ctl01 ntpd[4090]: Soliciting pool server 5.186.65.2
Nov 16 16:57:07 ctl01 ntpd[4090]: Soliciting pool server 162.159.200.1
Nov 16 16:57:07 ctl01 systemd[1]: Started /bin/systemctl restart ntp.service.
Nov 16 16:57:07 ctl01 ntpd[4090]: ntpd exiting on signal 15 (Terminated)
Nov 16 16:57:07 ctl01 systemd[1]: Stopping Network Time Service...
Nov 16 16:57:07 ctl01 ntpd[4090]: 162.159.200.123 local addr 192.168.11.21 -> <null>
Nov 16 16:57:07 ctl01 ntpd[4090]: 192.36.143.130 local addr 192.168.11.21 -> <null>
Nov 16 16:57:07 ctl01 ntpd[4090]: 83.168.200.198 local addr 192.168.11.21 -> <null>
Nov 16 16:57:07 ctl01 ntpd[4090]: 193.182.111.12 local addr 192.168.11.21 -> <null>
Nov 16 16:57:07 ctl01 ntpd[4090]: 5.186.65.2 local addr 192.168.11.21 -> <null>
Nov 16 16:57:07 ctl01 ntpd[4090]: 162.159.200.1 local addr 192.168.11.21 -> <null>
Nov 16 16:57:07 ctl01 systemd[1]: Stopped Network Time Service.
Nov 16 16:57:07 ctl01 systemd[1]: Starting Network Time Service...
Nov 16 16:57:07 ctl01 ntpd[4405]: ntpd 4.2.8p10@1.3728-o (1): Starting
Nov 16 16:57:07 ctl01 ntpd[4405]: Command line: /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 112:118
Nov 16 16:57:07 ctl01 systemd[1]: Started Network Time Service.
Nov 16 16:57:07 ctl01 ntpd[4408]: proto: precision = 0.061 usec (-24)
Nov 16 16:57:07 ctl01 ntpd[4408]: restrict 0.0.0.0: KOD does nothing without LIMITED.
Nov 16 16:57:07 ctl01 ntpd[4408]: restrict ::: KOD does nothing without LIMITED.
Nov 16 16:57:07 ctl01 ntpd[4408]: switching logging to file /var/log/ntp.log
Nov 16 16:57:11 ctl01 salt-minion[2061]: [INFO    ] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
Nov 16 16:57:11 ctl01 salt-minion[2061]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'salt-minion.service'] in directory '/root'
Nov 16 16:57:11 ctl01 systemd[1]: Started /bin/systemctl restart salt-minion.service.
Nov 16 16:57:11 ctl01 systemd[1]: Stopping The Salt Minion...
Nov 16 16:57:11 ctl01 salt-minion[2061]: [WARNING ] Minion received a SIGTERM. Exiting.
Nov 16 16:57:12 ctl01 salt-minion[2061]: The Salt Minion is shutdown. Minion received a SIGTERM. Exited.
Nov 16 16:57:12 ctl01 systemd[1]: Stopped The Salt Minion.
Nov 16 16:57:12 ctl01 systemd[1]: salt-minion.service: Found left-over process 4411 (bash) in control group while starting unit. Ignoring.
Nov 16 16:57:12 ctl01 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Nov 16 16:57:12 ctl01 systemd[1]: salt-minion.service: Found left-over process 4476 (salt-call) in control group while starting unit. Ignoring.
Nov 16 16:57:12 ctl01 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Nov 16 16:57:12 ctl01 systemd[1]: Starting The Salt Minion...
Nov 16 16:57:12 ctl01 systemd[1]: Started The Salt Minion.
Nov 16 16:57:12 ctl01 salt-minion[2061]: local:
Nov 16 16:57:12 ctl01 salt-minion[2061]:     True
Nov 16 16:57:12 ctl01 salt-minion[4526]: [INFO    ] Setting up the Salt Minion "ctl01.mcp-k8s-calico-noha.local"
Nov 16 16:57:12 ctl01 salt-minion[4526]: [INFO    ] Starting up the Salt Minion
Nov 16 16:57:12 ctl01 salt-minion[4526]: [INFO    ] Starting pull socket on /var/run/salt/minion/minion_event_1899fe9592_pull.ipc
Nov 16 16:57:13 ctl01 salt-minion[4526]: [INFO    ] Creating minion process manager
Nov 16 16:57:15 ctl01 salt-minion[4526]: [INFO    ] Executing command ['date', '+%z'] in directory '/root'
Nov 16 16:57:15 ctl01 salt-minion[4526]: [INFO    ] Updating job settings for scheduled job: __mine_interval
Nov 16 16:57:15 ctl01 salt-minion[4526]: [INFO    ] Added mine.update to scheduler
Nov 16 16:57:15 ctl01 salt-minion[4526]: [INFO    ] Minion is starting as user 'root'
Nov 16 16:57:15 ctl01 salt-minion[4526]: [INFO    ] Minion is ready to receive requests!
Nov 16 16:57:23 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command state.sls with jid 20191116165723320648
Nov 16 16:57:23 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 4616
Nov 16 16:57:24 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:57:24 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'etcd/server/service.sls'
Nov 16 16:57:24 ctl01 salt-minion[4526]: [INFO    ] Executing command '. /var/lib/etcd/configenv; etcdctl cluster-health > /dev/null 2>&1; echo $?' in directory '/root'
Nov 16 16:57:24 ctl01 salt-minion[4526]: [ERROR   ] Command '. /var/lib/etcd/configenv; etcdctl cluster-health > /dev/null 2>&1; echo $?' failed with return code: 2
Nov 16 16:57:24 ctl01 salt-minion[4526]: [ERROR   ] stdout: /bin/sh: 1: .: Can't open /var/lib/etcd/configenv
Nov 16 16:57:24 ctl01 salt-minion[4526]: [ERROR   ] retcode: 2
Nov 16 16:57:25 ctl01 salt-minion[4526]: [INFO    ] Running state [etcd_support_packages] at time 16:57:25.270504
Nov 16 16:57:25 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [etcd_support_packages]
Nov 16 16:57:25 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:57:26 ctl01 salt-minion[4526]: [INFO    ] Executing command ['apt-cache', '-q', 'policy', 'python-etcd'] in directory '/root'
Nov 16 16:57:26 ctl01 salt-minion[4526]: [INFO    ] Executing command ['apt-get', '-q', 'update'] in directory '/root'
Nov 16 16:57:29 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:57:29 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-etcd'] in directory '/root'
Nov 16 16:57:29 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install python-etcd.
Nov 16 16:57:33 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:57:33 ctl01 salt-minion[4526]: [INFO    ] Made the following changes:
Nov 16 16:57:33 ctl01 salt-minion[4526]: 'python-etcd' changed from 'absent' to '0.4.3-2'
Nov 16 16:57:33 ctl01 salt-minion[4526]: 'python-dnspython' changed from 'absent' to '1.15.0-1'
Nov 16 16:57:33 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:57:33 ctl01 salt-minion[4526]: [INFO    ] Completed state [etcd_support_packages] at time 16:57:33.843125 duration_in_ms=8572.62
Nov 16 16:57:33 ctl01 salt-minion[4526]: [INFO    ] Running state [/tmp/etcd/bin] at time 16:57:33.846247
Nov 16 16:57:33 ctl01 salt-minion[4526]: [INFO    ] Executing state archive.extracted for [/tmp/etcd/bin]
Nov 16 16:57:36 ctl01 salt-minion[4526]: [INFO    ] Executing command ['tar', 'x', '--strip=1', '-f', '/var/cache/salt/minion/extrn_files/base/github.com/etcd-io/etcd/releases/download/v3.3.12/etcd-v3.3.12-linux-amd64.tar.gz'] in directory '/tmp/etcd/bin/'
Nov 16 16:57:36 ctl01 salt-minion[4526]: [INFO    ] Executing command ['tar', '--version'] in directory '/root'
Nov 16 16:57:36 ctl01 salt-minion[4526]: [INFO    ] {'extracted_files': 'no tar output so far', 'directories_created': ['/tmp/etcd/bin/']}
Nov 16 16:57:36 ctl01 salt-minion[4526]: [INFO    ] Completed state [/tmp/etcd/bin] at time 16:57:36.728506 duration_in_ms=2882.253
Nov 16 16:57:36 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/local/bin/etcd] at time 16:57:36.729418
Nov 16 16:57:36 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/local/bin/etcd]
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:37 ctl01 salt-minion[4526]: New file
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/local/bin/etcd] at time 16:57:37.091500 duration_in_ms=362.081
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/local/bin/etcdctl] at time 16:57:37.092113
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/local/bin/etcdctl]
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:37 ctl01 salt-minion[4526]: New file
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/local/bin/etcdctl] at time 16:57:37.255640 duration_in_ms=163.526
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Running state [etcd] at time 16:57:37.257377
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Executing state user.present for [etcd]
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] User etcd is present and up to date
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Completed state [etcd] at time 16:57:37.267611 duration_in_ms=10.234
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/etcd.service] at time 16:57:37.267835
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/etcd.service]
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'etcd/files/systemd/etcd.service'
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:37 ctl01 salt-minion[4526]: New file
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/etcd.service] at time 16:57:37.305013 duration_in_ms=37.178
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/default/etcd] at time 16:57:37.305253
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/default/etcd]
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'etcd/files/default'
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:37 ctl01 salt-minion[4526]: New file
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/default/etcd] at time 16:57:37.393303 duration_in_ms=88.05
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Running state [/var/lib/etcd/] at time 16:57:37.393542
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/var/lib/etcd/]
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Directory /var/lib/etcd is in the correct state
Nov 16 16:57:37 ctl01 salt-minion[4526]: Directory /var/lib/etcd updated
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Completed state [/var/lib/etcd/] at time 16:57:37.398309 duration_in_ms=4.767
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Running state [/var/lib/etcd/configenv] at time 16:57:37.398864
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/var/lib/etcd/configenv]
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'etcd/files/configenv'
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:37 ctl01 salt-minion[4526]: New file
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Completed state [/var/lib/etcd/configenv] at time 16:57:37.452559 duration_in_ms=53.695
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Running state [/var/lib/etcd/configenvv3] at time 16:57:37.453056
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/var/lib/etcd/configenvv3]
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'etcd/files/configenvv3'
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:37 ctl01 salt-minion[4526]: New file
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Completed state [/var/lib/etcd/configenvv3] at time 16:57:37.503058 duration_in_ms=50.002
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Running state [etcd] at time 16:57:37.998962
Nov 16 16:57:37 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [etcd]
Nov 16 16:57:38 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'etcd.service', '-n', '0'] in directory '/root'
Nov 16 16:57:38 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'etcd.service'] in directory '/root'
Nov 16 16:57:38 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'etcd.service'] in directory '/root'
Nov 16 16:57:38 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'etcd.service'] in directory '/root'
Nov 16 16:57:38 ctl01 systemd[1]: Started /bin/systemctl start etcd.service.
Nov 16 16:57:38 ctl01 systemd[1]: Starting etcd - highly-available key value store...
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://172.16.10.36:4001
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_CERT_FILE=/var/lib/etcd/etcd-server.crt
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd/default
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_ELECTION_TIMEOUT=5000
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_HEARTBEAT_INTERVAL=250
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://172.16.10.36:2380
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_INITIAL_CLUSTER=ctl01=https://172.16.10.36:2380
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=IN7KaRMSo3xkGxkjAAPtkRkAgqN4ZNRq
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_KEY_FILE=/var/lib/etcd/etcd-server.key
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://172.16.10.36:4001,http://127.0.0.1:4001
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://172.16.10.36:2380
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_NAME=ctl01
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_PEER_CERT_FILE=/var/lib/etcd/etcd-server.crt
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_PEER_KEY_FILE=/var/lib/etcd/etcd-server.key
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/var/lib/etcd/ca.pem
Nov 16 16:57:38 ctl01 etcd[5381]: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/var/lib/etcd/ca.pem
Nov 16 16:57:38 ctl01 etcd[5381]: etcd Version: 3.3.12
Nov 16 16:57:38 ctl01 etcd[5381]: Git SHA: d57e8b8
Nov 16 16:57:38 ctl01 etcd[5381]: Go Version: go1.10.8
Nov 16 16:57:38 ctl01 etcd[5381]: Go OS/Arch: linux/amd64
Nov 16 16:57:38 ctl01 etcd[5381]: setting maximum number of CPUs to 8, total number of available CPUs is 8
Nov 16 16:57:38 ctl01 etcd[5381]: peerTLS: cert = /var/lib/etcd/etcd-server.crt, key = /var/lib/etcd/etcd-server.key, ca = , trusted-ca = /var/lib/etcd/ca.pem, client-cert-auth = true, crl-file = 
Nov 16 16:57:38 ctl01 etcd[5381]: listening for peers on https://172.16.10.36:2380
Nov 16 16:57:38 ctl01 etcd[5381]: The scheme of client url http://127.0.0.1:4001 is HTTP while peer key/cert files are presented. Ignored key/cert files.
Nov 16 16:57:38 ctl01 etcd[5381]: The scheme of client url http://127.0.0.1:4001 is HTTP while client cert auth (--client-cert-auth) is enabled. Ignored client cert auth for this url.
Nov 16 16:57:38 ctl01 etcd[5381]: listening for client requests on 127.0.0.1:4001
Nov 16 16:57:38 ctl01 etcd[5381]: listening for client requests on 172.16.10.36:4001
Nov 16 16:57:38 ctl01 etcd[5381]: name = ctl01
Nov 16 16:57:38 ctl01 etcd[5381]: data dir = /var/lib/etcd/default
Nov 16 16:57:38 ctl01 etcd[5381]: member dir = /var/lib/etcd/default/member
Nov 16 16:57:38 ctl01 etcd[5381]: heartbeat = 250ms
Nov 16 16:57:38 ctl01 etcd[5381]: election = 5000ms
Nov 16 16:57:38 ctl01 etcd[5381]: snapshot count = 100000
Nov 16 16:57:38 ctl01 etcd[5381]: advertise client URLs = https://172.16.10.36:4001
Nov 16 16:57:38 ctl01 etcd[5381]: initial advertise peer URLs = https://172.16.10.36:2380
Nov 16 16:57:38 ctl01 etcd[5381]: initial cluster = ctl01=https://172.16.10.36:2380
Nov 16 16:57:38 ctl01 etcd[5381]: starting member 411212f5fea59a9f in cluster 8f5ffbb80be06f34
Nov 16 16:57:38 ctl01 etcd[5381]: 411212f5fea59a9f became follower at term 0
Nov 16 16:57:38 ctl01 etcd[5381]: newRaft 411212f5fea59a9f [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Nov 16 16:57:38 ctl01 etcd[5381]: 411212f5fea59a9f became follower at term 1
Nov 16 16:57:38 ctl01 etcd[5381]: simple token is not cryptographically signed
Nov 16 16:57:38 ctl01 etcd[5381]: starting server... [version: 3.3.12, cluster version: to_be_decided]
Nov 16 16:57:38 ctl01 etcd[5381]: 411212f5fea59a9f as single-node; fast-forwarding 19 ticks (election ticks 20)
Nov 16 16:57:38 ctl01 etcd[5381]: added member 411212f5fea59a9f [https://172.16.10.36:2380] to cluster 8f5ffbb80be06f34
Nov 16 16:57:38 ctl01 etcd[5381]: ClientTLS: cert = /var/lib/etcd/etcd-server.crt, key = /var/lib/etcd/etcd-server.key, ca = , trusted-ca = /var/lib/etcd/ca.pem, client-cert-auth = true, crl-file = 
Nov 16 16:57:38 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command saltutil.find_job with jid 20191116165738353727
Nov 16 16:57:38 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 5413
Nov 16 16:57:38 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165738353727
Nov 16 16:57:43 ctl01 etcd[5381]: 411212f5fea59a9f is starting a new election at term 1
Nov 16 16:57:43 ctl01 etcd[5381]: 411212f5fea59a9f became candidate at term 2
Nov 16 16:57:43 ctl01 etcd[5381]: 411212f5fea59a9f received MsgVoteResp from 411212f5fea59a9f at term 2
Nov 16 16:57:43 ctl01 etcd[5381]: 411212f5fea59a9f became leader at term 2
Nov 16 16:57:43 ctl01 etcd[5381]: raft.node: 411212f5fea59a9f elected leader 411212f5fea59a9f at term 2
Nov 16 16:57:43 ctl01 etcd[5381]: published {Name:ctl01 ClientURLs:[https://172.16.10.36:4001]} to cluster 8f5ffbb80be06f34
Nov 16 16:57:43 ctl01 etcd[5381]: setting up the initial cluster version to 3.3
Nov 16 16:57:43 ctl01 etcd[5381]: ready to serve client requests
Nov 16 16:57:43 ctl01 etcd[5381]: ready to serve client requests
Nov 16 16:57:43 ctl01 systemd[1]: Started etcd - highly-available key value store.
Nov 16 16:57:43 ctl01 etcd[5381]: serving insecure client requests on 127.0.0.1:4001, this is strongly discouraged!
Nov 16 16:57:43 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'etcd.service'] in directory '/root'
Nov 16 16:57:43 ctl01 etcd[5381]: serving client requests on 172.16.10.36:4001
Nov 16 16:57:43 ctl01 etcd[5381]: set the initial cluster version to 3.3
Nov 16 16:57:43 ctl01 etcd[5381]: enabled capabilities for version 3.3
Nov 16 16:57:43 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'etcd.service'] in directory '/root'
Nov 16 16:57:43 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'etcd.service'] in directory '/root'
Nov 16 16:57:43 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'etcd.service'] in directory '/root'
Nov 16 16:57:43 ctl01 systemd[1]: Started /bin/systemctl enable etcd.service.
Nov 16 16:57:43 ctl01 systemd[1]: Reloading.
Nov 16 16:57:43 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'etcd.service'] in directory '/root'
Nov 16 16:57:43 ctl01 salt-minion[4526]: [INFO    ] {'etcd': True}
Nov 16 16:57:43 ctl01 salt-minion[4526]: [INFO    ] Completed state [etcd] at time 16:57:43.423225 duration_in_ms=5424.262
Nov 16 16:57:43 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165723320648
Nov 16 16:57:44 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command cmd.run with jid 20191116165744105309
Nov 16 16:57:44 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 5470
Nov 16 16:57:44 ctl01 salt-minion[4526]: [INFO    ] Executing command '. /var/lib/etcd/configenv && etcdctl cluster-health' in directory '/root'
Nov 16 16:57:44 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165744105309
Nov 16 16:57:44 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command state.sls with jid 20191116165744978789
Nov 16 16:57:45 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 5483
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/master/kube-addons.sls'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons] at time 16:57:51.849029
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/etc/kubernetes/addons]
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] {'/etc/kubernetes/addons': 'New Dir'}
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons] at time 16:57:51.855581 duration_in_ms=6.553
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/calico/calico-kube-controllers.yml] at time 16:57:51.855854
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/calico/calico-kube-controllers.yml]
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/calico/calico-kube-controllers.yml'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:51 ctl01 salt-minion[4526]: New file
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/calico/calico-kube-controllers.yml] at time 16:57:51.985821 duration_in_ms=129.965
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/calico/calico-rbac.yml] at time 16:57:51.986360
Nov 16 16:57:51 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/calico/calico-rbac.yml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/calico/calico-rbac.yml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/calico/calico-rbac.yml] at time 16:57:52.048092 duration_in_ms=61.731
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-svc.yml] at time 16:57:52.048491
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-svc.yml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/netchecker/netchecker-svc.yml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-svc.yml] at time 16:57:52.183042 duration_in_ms=134.551
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-server.yml] at time 16:57:52.183647
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-server.yml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/netchecker/netchecker-server.yml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-server.yml] at time 16:57:52.344271 duration_in_ms=160.625
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-agent.yml] at time 16:57:52.344797
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-agent.yml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/netchecker/netchecker-agent.yml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-agent.yml] at time 16:57:52.475480 duration_in_ms=130.683
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-serviceaccount.yml] at time 16:57:52.475904
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-serviceaccount.yml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/netchecker/netchecker-serviceaccount.yml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-serviceaccount.yml] at time 16:57:52.511005 duration_in_ms=35.099
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-roles.yml] at time 16:57:52.511281
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-roles.yml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/netchecker/netchecker-roles.yml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-roles.yml] at time 16:57:52.543399 duration_in_ms=32.118
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/prometheus/prometheus-roles.yml] at time 16:57:52.543678
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/prometheus/prometheus-roles.yml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/prometheus/prometheus-roles.yml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/prometheus/prometheus-roles.yml] at time 16:57:52.575197 duration_in_ms=31.519
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/coredns] at time 16:57:52.575440
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/kubernetes/addons/coredns]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/coredns is not present
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/coredns] at time 16:57:52.576318 duration_in_ms=0.879
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [kubectl -n kube-system delete svc coredns > /dev/null || echo "coredns is absent. OK" && true] at time 16:57:52.577426
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state cmd.run for [kubectl -n kube-system delete svc coredns > /dev/null || echo "coredns is absent. OK" && true]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command 'kubectl -n kube-system delete svc coredns > /dev/null || echo "coredns is absent. OK" && true' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] {'pid': 5564, 'retcode': 0, 'stderr': '/bin/sh: 1: kubectl: not found', 'stdout': 'coredns is absent. OK'}
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [kubectl -n kube-system delete svc coredns > /dev/null || echo "coredns is absent. OK" && true] at time 16:57:52.591525 duration_in_ms=14.099
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/dns/kubedns-svc.yaml] at time 16:57:52.591927
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/dns/kubedns-svc.yaml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/dns/kubedns-svc.yaml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/dns/kubedns-svc.yaml] at time 16:57:52.704645 duration_in_ms=112.717
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/dns/kubedns-rc.yaml] at time 16:57:52.705156
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/dns/kubedns-rc.yaml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/dns/kubedns-rc.yaml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/dns/kubedns-rc.yaml] at time 16:57:52.847822 duration_in_ms=142.665
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/dns/kubedns-sa.yaml] at time 16:57:52.848342
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/dns/kubedns-sa.yaml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/dns/kubedns-sa.yaml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/dns/kubedns-sa.yaml] at time 16:57:52.879693 duration_in_ms=31.352
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/dns/kubedns-autoscaler.yaml] at time 16:57:52.879948
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/dns/kubedns-autoscaler.yaml]
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/dns/kubedns-autoscaler.yaml'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:52 ctl01 salt-minion[4526]: New file
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/dns/kubedns-autoscaler.yaml] at time 16:57:52.998801 duration_in_ms=118.851
Nov 16 16:57:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/dns/kubedns-autoscaler-rbac.yaml] at time 16:57:52.999337
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/dns/kubedns-autoscaler-rbac.yaml]
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/dns/kubedns-autoscaler-rbac.yaml'
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:53 ctl01 salt-minion[4526]: New file
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/dns/kubedns-autoscaler-rbac.yaml] at time 16:57:53.033909 duration_in_ms=34.573
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/dns/kubedns-clusterrole.yaml] at time 16:57:53.034149
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/dns/kubedns-clusterrole.yaml]
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/dns/kubedns-clusterrole.yaml'
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:53 ctl01 salt-minion[4526]: New file
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/dns/kubedns-clusterrole.yaml] at time 16:57:53.072404 duration_in_ms=38.254
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/metrics-server] at time 16:57:53.072640
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/kubernetes/addons/metrics-server]
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/metrics-server is not present
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/metrics-server] at time 16:57:53.073409 duration_in_ms=0.769
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165744978789
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command state.sls with jid 20191116165753766584
Nov 16 16:57:53 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 5602
Nov 16 16:57:54 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:57:54 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/init.sls'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/calico.sls'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/service.sls'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/_common.sls'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/kube-proxy.sls'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/bin/calicoctl] at time 16:57:55.759351
Nov 16 16:57:55 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/bin/calicoctl]
Nov 16 16:57:57 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:57 ctl01 salt-minion[4526]: New file
Nov 16 16:57:57 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/bin/calicoctl] at time 16:57:57.190331 duration_in_ms=1430.98
Nov 16 16:57:57 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/bin/birdcl] at time 16:57:57.190606
Nov 16 16:57:57 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/bin/birdcl]
Nov 16 16:57:57 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:57 ctl01 salt-minion[4526]: New file
Nov 16 16:57:57 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/bin/birdcl] at time 16:57:57.604433 duration_in_ms=413.826
Nov 16 16:57:57 ctl01 salt-minion[4526]: [INFO    ] Running state [/opt/cni/bin/calico] at time 16:57:57.604663
Nov 16 16:57:57 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/opt/cni/bin/calico]
Nov 16 16:57:58 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:58 ctl01 salt-minion[4526]: New file
Nov 16 16:57:58 ctl01 salt-minion[4526]: [INFO    ] Completed state [/opt/cni/bin/calico] at time 16:57:58.828672 duration_in_ms=1224.008
Nov 16 16:57:58 ctl01 salt-minion[4526]: [INFO    ] Running state [/opt/cni/bin/calico-ipam] at time 16:57:58.828944
Nov 16 16:57:58 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/opt/cni/bin/calico-ipam]
Nov 16 16:57:59 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:57:59 ctl01 salt-minion[4526]: New file
Nov 16 16:57:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/opt/cni/bin/calico-ipam] at time 16:57:59.978840 duration_in_ms=1149.893
Nov 16 16:57:59 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/cni/net.d/10-calico.conf] at time 16:57:59.979109
Nov 16 16:57:59 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/cni/net.d/10-calico.conf]
Nov 16 16:57:59 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/calico.conf'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:58:00 ctl01 salt-minion[4526]: New file
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/cni/net.d/10-calico.conf] at time 16:58:00.113914 duration_in_ms=134.803
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/calico/network-environment] at time 16:58:00.114410
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/calico/network-environment]
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/network-environment.pool'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:58:00 ctl01 salt-minion[4526]: New file
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/calico/network-environment] at time 16:58:00.217898 duration_in_ms=103.487
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/calico/calicoctl.cfg] at time 16:58:00.218430
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/calico/calicoctl.cfg]
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/calicoctl.cfg.pool'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:58:00 ctl01 salt-minion[4526]: New file
Nov 16 16:58:00 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/calico/calicoctl.cfg] at time 16:58:00.337634 duration_in_ms=119.203
Nov 16 16:58:01 ctl01 salt-minion[4526]: [INFO    ] Running state [containerd] at time 16:58:01.019229
Nov 16 16:58:01 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [containerd]
Nov 16 16:58:01 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:01 ctl01 salt-minion[4526]: [INFO    ] Executing command ['apt-cache', '-q', 'policy', 'containerd'] in directory '/root'
Nov 16 16:58:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['apt-get', '-q', 'update'] in directory '/root'
Nov 16 16:58:05 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:05 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'containerd'] in directory '/root'
Nov 16 16:58:05 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install containerd.
Nov 16 16:58:08 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command saltutil.find_job with jid 20191116165808820551
Nov 16 16:58:08 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 6351
Nov 16 16:58:08 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165808820551
Nov 16 16:58:10 ctl01 systemd[1]: Reloading.
Nov 16 16:58:10 ctl01 systemd[1]: Reloading.
Nov 16 16:58:10 ctl01 systemd[1]: Starting containerd container runtime...
Nov 16 16:58:10 ctl01 systemd[1]: Started containerd container runtime.
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.449360909Z" level=info msg="starting containerd" revision= version="1.2.6-0ubuntu1~18.04.2"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.449799388Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.449936582Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.450370642Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.450554146Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.477708131Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.477888827Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 ctl01 kernel: [  149.291800] aufs 4.15-20180219
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.478162718Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.478515913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.478624114Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.478760027Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.478863987Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.482728703Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.482863373Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.483005498Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.483113974Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.483225915Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.483328670Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.483425933Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.483521749Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.483618253Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.483720786Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.483928324Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484140743Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484673101Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484727439Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484779925Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484802242Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484821961Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484836368Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484849621Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484862745Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484876225Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484889551Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.484902775Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.485068119Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.485088339Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.485102604Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.485123136Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.485184164Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.485265807Z" level=info msg="Connect containerd service"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.485416335Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.485967332Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.486292394Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.486332784Z" level=info msg="containerd successfully booted in 0.037399s"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.488751845Z" level=info msg="Start subscribing containerd event"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.489025530Z" level=info msg="Start recovering state"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.489346366Z" level=info msg="Start event monitor"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.489389651Z" level=info msg="Start snapshots syncer"
Nov 16 16:58:10 ctl01 containerd[6427]: time="2019-11-16T16:58:10.489408718Z" level=info msg="Start streaming server"
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Made the following changes:
Nov 16 16:58:13 ctl01 salt-minion[4526]: 'containerd' changed from 'absent' to '1.2.6-0ubuntu1~18.04.2'
Nov 16 16:58:13 ctl01 salt-minion[4526]: 'runc' changed from 'absent' to '1.0.0~rc7+git20190403.029124da-0ubuntu1~18.04.2'
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Completed state [containerd] at time 16:58:13.330378 duration_in_ms=12311.149
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/containerd/config.toml] at time 16:58:13.334237
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/containerd/config.toml]
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/containerd/config.toml'
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:58:13 ctl01 salt-minion[4526]: New file
Nov 16 16:58:13 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/containerd/config.toml] at time 16:58:13.448644 duration_in_ms=114.406
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Running state [containerd] at time 16:58:14.002066
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [containerd]
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'containerd.service', '-n', '0'] in directory '/root'
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'containerd.service'] in directory '/root'
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'containerd.service'] in directory '/root'
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] The service containerd is already running
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Completed state [containerd] at time 16:58:14.057981 duration_in_ms=55.915
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Running state [containerd] at time 16:58:14.058511
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing state service.mod_watch for [containerd]
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'containerd.service'] in directory '/root'
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'containerd.service'] in directory '/root'
Nov 16 16:58:14 ctl01 systemd[1]: Started /bin/systemctl restart containerd.service.
Nov 16 16:58:14 ctl01 systemd[1]: Stopping containerd container runtime...
Nov 16 16:58:14 ctl01 containerd[6427]: time="2019-11-16T16:58:14.135813841Z" level=info msg="Stop CRI service"
Nov 16 16:58:14 ctl01 systemd[1]: Stopped containerd container runtime.
Nov 16 16:58:14 ctl01 systemd[1]: Starting containerd container runtime...
Nov 16 16:58:14 ctl01 systemd[1]: Started containerd container runtime.
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] {'containerd': True}
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Completed state [containerd] at time 16:58:14.144243 duration_in_ms=85.733
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/calico-node.service] at time 16:58:14.145098
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/calico-node.service]
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/calico-node.service.ctr'
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.173637438Z" level=info msg="starting containerd" revision= version="1.2.6-0ubuntu1~18.04.2"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.174073087Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.174149237Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.174437829Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.174554731Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177023723Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177073800Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177158987Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177335792Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177359315Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177377396Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177388012Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177535759Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177558012Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177604661Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177626195Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177640094Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177654471Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177674502Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177691904Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177716895Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177736192Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177780217Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.177832401Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178292869Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178335218Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178390219Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178408812Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178422639Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178469647Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178484088Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178497612Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178510767Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178525878Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178539938Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178580730Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178601233Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178615861Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178628385Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178711858Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178802949Z" level=info msg="Connect containerd service"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.178964662Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.179462247Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.179703075Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.179724296Z" level=info msg="containerd successfully booted in 0.006546s"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.180956967Z" level=info msg="Start subscribing containerd event"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.181767700Z" level=info msg="Start recovering state"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.182060079Z" level=info msg="Start event monitor"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.182240600Z" level=info msg="Start snapshots syncer"
Nov 16 16:58:14 ctl01 containerd[6733]: time="2019-11-16T16:58:14.182538835Z" level=info msg="Start streaming server"
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:58:14 ctl01 salt-minion[4526]: New file
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/calico-node.service] at time 16:58:14.189856 duration_in_ms=44.757
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Running state [/var/lib/calico] at time 16:58:14.190144
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/var/lib/calico]
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] {'/var/lib/calico': 'New Dir'}
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Completed state [/var/lib/calico] at time 16:58:14.192409 duration_in_ms=2.265
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Running state [/var/log/calico] at time 16:58:14.192644
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/var/log/calico]
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] {'/var/log/calico': 'New Dir'}
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Completed state [/var/log/calico] at time 16:58:14.194226 duration_in_ms=1.582
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Running state [calico-node] at time 16:58:14.196500
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [calico-node]
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'calico-node.service', '-n', '0'] in directory '/root'
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'calico-node.service'] in directory '/root'
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:14 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'calico-node.service'] in directory '/root'
Nov 16 16:58:14 ctl01 systemd[1]: Started /bin/systemctl start calico-node.service.
Nov 16 16:58:14 ctl01 systemd[1]: Starting calico-node...
Nov 16 16:58:14 ctl01 ctr[6782]: ctr: container "calico-node" in namespace "default": not found
Nov 16 16:58:14 ctl01 ctr[6799]: time="2019-11-16T16:58:14Z" level=error msg="failed to delete container "calico-node"" error="container "calico-node" in namespace "default": not found"
Nov 16 16:58:14 ctl01 ctr[6799]: ctr: container "calico-node" in namespace "default": not found
Nov 16 16:58:14 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolving      |--------------------------------------|
Nov 16 16:58:14 ctl01 ctr[6807]: elapsed: 0.1 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:14 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolving      |--------------------------------------|
Nov 16 16:58:14 ctl01 ctr[6807]: elapsed: 0.2 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:14 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolving      |--------------------------------------|
Nov 16 16:58:14 ctl01 ctr[6807]: elapsed: 0.3 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:14 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolving      |--------------------------------------|
Nov 16 16:58:14 ctl01 ctr[6807]: elapsed: 0.4 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:14 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolving      |--------------------------------------|
Nov 16 16:58:14 ctl01 ctr[6807]: elapsed: 0.5 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolving      |--------------------------------------|
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 0.6 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolving      |--------------------------------------|
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 0.7 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     waiting        |--------------------------------------|
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 0.8 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     downloading    |--------------------------------------|    0.0 B/1.3 KiB
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 0.9 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     downloading    |--------------------------------------|    0.0 B/1.3 KiB
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 1.0 s                                                                        total:   0.0 B (0.0 B/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |--------------------------------------|    0.0 B/13.7 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        downloading    |--------------------------------------|    0.0 B/1.9 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        downloading    |--------------------------------------|    0.0 B/1.8 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        downloading    |--------------------------------------|    0.0 B/47.1 KiB
Nov 16 16:58:15 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       downloading    |--------------------------------------|    0.0 B/3.2 KiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        downloading    |--------------------------------------|    0.0 B/2.1 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 1.1 s                                                                        total:  1.3 Ki (1.2 KiB/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |--------------------------------------|    0.0 B/13.7 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        downloading    |--------------------------------------|    0.0 B/1.9 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        downloading    |--------------------------------------|    0.0 B/1.8 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        downloading    |--------------------------------------|    0.0 B/47.1 KiB
Nov 16 16:58:15 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       downloading    |--------------------------------------|    0.0 B/3.2 KiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        downloading    |--------------------------------------|    0.0 B/2.1 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 1.2 s                                                                        total:  1.3 Ki (1.1 KiB/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |--------------------------------------| 39.1 KiB/13.7 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        downloading    |--------------------------------------| 39.1 KiB/1.9 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        downloading    |--------------------------------------|    0.0 B/1.8 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        downloading    |+++++++++++++++++++++++++++++++-------| 39.1 KiB/47.1 KiB
Nov 16 16:58:15 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       downloading    |++++++++++++++++++++++++++++++++++++++|  3.2 KiB/3.2 KiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        downloading    |--------------------------------------|  7.1 KiB/2.1 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 1.3 s                                                                        total:  129.1  (99.2 KiB/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |--------------------------------------| 199.1 Ki/13.7 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        downloading    |++++----------------------------------| 207.1 Ki/1.9 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        downloading    |+++-----------------------------------| 191.2 Ki/1.8 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        downloading    |+++-----------------------------------| 199.1 Ki/2.1 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 1.4 s                                                                        total:  848.2  (605.2 KiB/s)
Nov 16 16:58:15 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |+++-----------------------------------|  1.2 MiB/13.7 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        downloading    |+++++++++++++++++++++++++-------------|  1.2 MiB/1.9 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        downloading    |++++++++++++++++----------------------| 815.3 Ki/1.8 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        downloading    |++++++++++++++------------------------| 839.3 Ki/2.1 MiB
Nov 16 16:58:15 ctl01 ctr[6807]: elapsed: 1.5 s                                                                        total:  4.1 Mi (2.8 MiB/s)
Nov 16 16:58:16 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |++++++++++----------------------------|  3.7 MiB/13.7 MiB
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        downloading    |++++++++++++++++++++++++++++++++++++++|  1.8 MiB/1.8 MiB
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        downloading    |++++++++++++++++++++++++++++++++++++++|  2.1 MiB/2.1 MiB
Nov 16 16:58:16 ctl01 ctr[6807]: elapsed: 1.6 s                                                                        total:  9.6 Mi (6.0 MiB/s)
Nov 16 16:58:16 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |++++++++++++++++++++------------------|  7.4 MiB/13.7 MiB
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: elapsed: 1.7 s                                                                        total:  13.3 M (7.8 MiB/s)
Nov 16 16:58:16 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |++++++++++++++++++++++++++++----------| 10.3 MiB/13.7 MiB
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: elapsed: 1.8 s                                                                        total:  16.1 M (9.0 MiB/s)
Nov 16 16:58:16 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |++++++++++++++++++++++++++++++++++++++| 13.7 MiB/13.7 MiB
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: elapsed: 1.9 s                                                                        total:  19.6 M (10.3 MiB/s)
Nov 16 16:58:16 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |++++++++++++++++++++++++++++++++++++++| 13.7 MiB/13.7 MiB
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: elapsed: 2.0 s                                                                        total:  19.6 M (9.8 MiB/s)
Nov 16 16:58:16 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        downloading    |++++++++++++++++++++++++++++++++++++++| 13.7 MiB/13.7 MiB
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: elapsed: 2.1 s                                                                        total:  19.6 M (9.3 MiB/s)
Nov 16 16:58:16 ctl01 ctr[6807]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 ctl01 ctr[6807]: elapsed: 2.2 s                                                                        total:  19.6 M (8.9 MiB/s)
Nov 16 16:58:16 ctl01 ctr[6807]: unpacking linux/amd64 sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18...
Nov 16 16:58:18 ctl01 etcd[5381]: proto: no coders for int
Nov 16 16:58:18 ctl01 etcd[5381]: proto: no encoder for ValueSize int [GetProperties]
Nov 16 16:58:18 ctl01 ctr[6807]: done
Nov 16 16:58:18 ctl01 systemd[1]: Started calico-node.
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'calico-node.service'] in directory '/root'
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'calico-node.service'] in directory '/root'
Nov 16 16:58:19 ctl01 containerd[6733]: time="2019-11-16T16:58:19.071162857Z" level=info msg="shim containerd-shim started" address="/containerd-shim/default/calico-node/shim.sock" debug=false pid=6873
Nov 16 16:58:19 ctl01 systemd[1]: Started /bin/systemctl enable calico-node.service.
Nov 16 16:58:19 ctl01 systemd[1]: Reloading.
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.204 [INFO][9] startup.go 264: Early log level set to info
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.204 [INFO][9] startup.go 280: Using NODENAME environment for node name
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.205 [INFO][9] startup.go 292: Determined node name: ctl01
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] {'calico-node': True}
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [calico-node] at time 16:58:19.248369 duration_in_ms=5051.868
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Running state [curl] at time 16:58:19.250156
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [curl]
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.256 [INFO][9] startup.go 105: Skipping datastore connection test
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.258 [INFO][9] startup.go 365: Building new node resource Name="ctl01"
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.258 [INFO][9] startup.go 380: Initialize BGP data
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.258 [INFO][9] startup.go 474: Using IPv4 address from environment: IP=172.16.10.36
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.258 [INFO][9] startup.go 507: IPv4 address 172.16.10.36 discovered on interface br-mgmt
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.258 [INFO][9] startup.go 450: Node IPv4 changed, will check for conflicts
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.259 [INFO][9] startup.go 640: Using AS number specified in environment (AS=64512)
Nov 16 16:58:19 ctl01 ctr[6848]: 2019-11-16 16:58:19.266 [INFO][9] startup.go 189: Using node name: ctl01
Nov 16 16:58:19 ctl01 ctr[6848]: Calico node started successfully
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [curl] at time 16:58:19.467497 duration_in_ms=217.341
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Running state [git] at time 16:58:19.467795
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [git]
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [git] at time 16:58:19.477037 duration_in_ms=9.242
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Running state [apt-transport-https] at time 16:58:19.477275
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [apt-transport-https]
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [apt-transport-https] at time 16:58:19.486156 duration_in_ms=8.881
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Running state [python-apt] at time 16:58:19.486412
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [python-apt]
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [python-apt] at time 16:58:19.496291 duration_in_ms=9.879
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Running state [socat] at time 16:58:19.496573
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [socat]
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:19 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'socat'] in directory '/root'
Nov 16 16:58:19 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install socat.
Nov 16 16:58:20 ctl01 kernel: [  159.394569] Netfilter messages via NETLINK v0.30.
Nov 16 16:58:20 ctl01 kernel: [  159.399168] ip_set: protocol 6
Nov 16 16:58:20 ctl01 kernel: [  159.608912] ip6_tables: (C) 2000-2006 Netfilter Core Team
Nov 16 16:58:23 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:23 ctl01 salt-minion[4526]: [INFO    ] Made the following changes:
Nov 16 16:58:23 ctl01 salt-minion[4526]: 'socat' changed from 'absent' to '1.7.3.2-2ubuntu2'
Nov 16 16:58:23 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:23 ctl01 salt-minion[4526]: [INFO    ] Completed state [socat] at time 16:58:23.956710 duration_in_ms=4460.135
Nov 16 16:58:23 ctl01 salt-minion[4526]: [INFO    ] Running state [openssl] at time 16:58:23.966059
Nov 16 16:58:23 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [openssl]
Nov 16 16:58:24 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:58:24 ctl01 salt-minion[4526]: [INFO    ] Completed state [openssl] at time 16:58:24.629163 duration_in_ms=663.104
Nov 16 16:58:24 ctl01 salt-minion[4526]: [INFO    ] Running state [conntrack] at time 16:58:24.629482
Nov 16 16:58:24 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [conntrack]
Nov 16 16:58:24 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:24 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'conntrack'] in directory '/root'
Nov 16 16:58:24 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install conntrack.
Nov 16 16:58:28 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:28 ctl01 salt-minion[4526]: [INFO    ] Made the following changes:
Nov 16 16:58:28 ctl01 salt-minion[4526]: 'conntrack' changed from 'absent' to '1:1.4.4+snapshot20161117-6ubuntu2'
Nov 16 16:58:28 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:28 ctl01 salt-minion[4526]: [INFO    ] Completed state [conntrack] at time 16:58:28.735215 duration_in_ms=4105.731
Nov 16 16:58:28 ctl01 salt-minion[4526]: [INFO    ] Running state [nfs-common] at time 16:58:28.741034
Nov 16 16:58:28 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [nfs-common]
Nov 16 16:58:29 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:29 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nfs-common'] in directory '/root'
Nov 16 16:58:29 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install nfs-common.
Nov 16 16:58:31 ctl01 systemd[1]: Reloading.
Nov 16 16:58:32 ctl01 systemd[1]: message repeated 4 times: [ Reloading.]
Nov 16 16:58:33 ctl01 systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 16 16:58:33 ctl01 systemd[1]: Starting RPC bind portmap service...
Nov 16 16:58:33 ctl01 systemd[1]: Started RPC bind portmap service.
Nov 16 16:58:33 ctl01 systemd[1]: Reached target RPC Port Mapper.
Nov 16 16:58:34 ctl01 systemd[1]: Reloading.
Nov 16 16:58:34 ctl01 systemd[1]: message repeated 4 times: [ Reloading.]
Nov 16 16:58:38 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:38 ctl01 salt-minion[4526]: [INFO    ] Made the following changes:
Nov 16 16:58:38 ctl01 salt-minion[4526]: 'keyutils' changed from 'absent' to '1.5.9-9.2ubuntu2'
Nov 16 16:58:38 ctl01 salt-minion[4526]: 'nfs-common' changed from 'absent' to '1:1.3.4-2.1ubuntu5.2'
Nov 16 16:58:38 ctl01 salt-minion[4526]: 'rpcbind' changed from 'absent' to '0.2.3-0.6'
Nov 16 16:58:38 ctl01 salt-minion[4526]: 'libtirpc1' changed from 'absent' to '0.2.5-1.2ubuntu0.1'
Nov 16 16:58:38 ctl01 salt-minion[4526]: 'nfs-client' changed from 'absent' to '1'
Nov 16 16:58:38 ctl01 salt-minion[4526]: 'libnfsidmap2' changed from 'absent' to '0.25-5.1'
Nov 16 16:58:38 ctl01 salt-minion[4526]: 'portmap' changed from 'absent' to '1'
Nov 16 16:58:38 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:38 ctl01 salt-minion[4526]: [INFO    ] Completed state [nfs-common] at time 16:58:38.490246 duration_in_ms=9749.211
Nov 16 16:58:38 ctl01 salt-minion[4526]: [INFO    ] Running state [cifs-utils] at time 16:58:38.498088
Nov 16 16:58:38 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [cifs-utils]
Nov 16 16:58:39 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command saltutil.find_job with jid 20191116165838849070
Nov 16 16:58:39 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 8752
Nov 16 16:58:39 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165838849070
Nov 16 16:58:39 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:39 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'cifs-utils'] in directory '/root'
Nov 16 16:58:39 ctl01 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install cifs-utils.
Nov 16 16:58:51 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:51 ctl01 salt-minion[4526]: [INFO    ] Made the following changes:
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'python2.7-ldb' changed from 'absent' to '1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'python-ldb' changed from 'absent' to '2:1.2.3-1ubuntu0.1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libtdb1' changed from 'absent' to '1.3.15-2'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libavahi-common3' changed from 'absent' to '0.7-3.1ubuntu1.2'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'python2.7-talloc' changed from 'absent' to '1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libavahi-client3' changed from 'absent' to '0.7-3.1ubuntu1.2'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libwbclient0' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libavahi-common-data' changed from 'absent' to '0.7-3.1ubuntu1.2'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libcups2' changed from 'absent' to '2.2.7-1ubuntu2.7'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'cifs-utils' changed from 'absent' to '2:6.8-1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'samba-common' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'python2.7-tdb' changed from 'absent' to '1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'samba-libs' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libldb1' changed from 'absent' to '2:1.2.3-1ubuntu0.1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libtevent0' changed from 'absent' to '0.9.34-1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'python-talloc' changed from 'absent' to '2.1.10-2ubuntu1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'samba-common-bin' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'python-samba' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libtalloc2' changed from 'absent' to '2.1.10-2ubuntu1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'python2.7-samba' changed from 'absent' to '1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'libjansson4' changed from 'absent' to '2.11-1'
Nov 16 16:58:51 ctl01 salt-minion[4526]: 'python-tdb' changed from 'absent' to '1.3.15-2'
Nov 16 16:58:52 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:52 ctl01 salt-minion[4526]: [INFO    ] Completed state [cifs-utils] at time 16:58:52.044020 duration_in_ms=13545.931
Nov 16 16:58:52 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/bin/hyperkube] at time 16:58:52.047937
Nov 16 16:58:52 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/bin/hyperkube]
Nov 16 16:58:56 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:58:56 ctl01 salt-minion[4526]: New file
Nov 16 16:58:56 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/bin/hyperkube] at time 16:58:56.881941 duration_in_ms=4834.002
Nov 16 16:58:56 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/bin/kubectl] at time 16:58:56.883342
Nov 16 16:58:56 ctl01 salt-minion[4526]: [INFO    ] Executing state file.symlink for [/usr/bin/kubectl]
Nov 16 16:58:56 ctl01 salt-minion[4526]: [INFO    ] {'new': '/usr/bin/kubectl'}
Nov 16 16:58:56 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:56 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/bin/kubectl] at time 16:58:56.941144 duration_in_ms=57.802
Nov 16 16:58:56 ctl01 salt-minion[4526]: [INFO    ] Running state [/tmp/crictl] at time 16:58:56.944174
Nov 16 16:58:56 ctl01 salt-minion[4526]: [INFO    ] Executing state archive.extracted for [/tmp/crictl]
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Executing command ['tar', 'xz', '-f', '/var/cache/salt/minion/extrn_files/base/github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz'] in directory '/tmp/crictl/'
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Executing command ['tar', '--version'] in directory '/root'
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] {'extracted_files': 'no tar output so far', 'directories_created': ['/tmp/crictl/']}
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/tmp/crictl] at time 16:58:59.373484 duration_in_ms=2429.297
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/local/bin/crictl] at time 16:58:59.376836
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/local/bin/crictl]
Nov 16 16:58:59 ctl01 salt-minion[4526]: [WARNING ] Use of argument owner found, "owner" is invalid, please use "user"
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:58:59 ctl01 salt-minion[4526]: New file
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/local/bin/crictl] at time 16:58:59.649210 duration_in_ms=272.378
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/crictl.yaml] at time 16:58:59.649508
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/crictl.yaml]
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:58:59 ctl01 salt-minion[4526]: New file
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/crictl.yaml] at time 16:58:59.686823 duration_in_ms=37.315
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/criproxy] at time 16:58:59.687182
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/criproxy]
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] File /etc/criproxy is not present
Nov 16 16:58:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/criproxy] at time 16:58:59.688307 duration_in_ms=1.126
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Running state [criproxy] at time 16:59:00.294968
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Executing state service.dead for [criproxy]
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'criproxy.service', '-n', '0'] in directory '/root'
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] The named service criproxy is not available
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Completed state [criproxy] at time 16:59:00.323635 duration_in_ms=28.667
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/kubelet.service] at time 16:59:00.324229
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kubelet.service]
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/systemd/kubelet.service'
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:59:00 ctl01 salt-minion[4526]: New file
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/kubelet.service] at time 16:59:00.672812 duration_in_ms=348.58
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/config] at time 16:59:00.673301
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/kubernetes/config]
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/config is not present
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/config] at time 16:59:00.674536 duration_in_ms=1.235
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/default/kubelet] at time 16:59:00.674868
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/default/kubelet]
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kubelet/default.master'
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:59:01 ctl01 salt-minion[4526]: New file
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/default/kubelet] at time 16:59:01.601727 duration_in_ms=926.858
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/kubelet.kubeconfig] at time 16:59:01.602058
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/kubelet.kubeconfig]
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kubelet/kubelet.kubeconfig.master'
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:59:02 ctl01 salt-minion[4526]: New file
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/kubelet.kubeconfig] at time 16:59:02.280615 duration_in_ms=678.555
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/manifests] at time 16:59:02.280967
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/etc/kubernetes/manifests]
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] {'/etc/kubernetes/manifests': 'New Dir'}
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/manifests] at time 16:59:02.282984 duration_in_ms=2.017
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Running state [kubelet] at time 16:59:02.285180
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [kubelet]
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'kubelet.service', '-n', '0'] in directory '/root'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kubelet.service'] in directory '/root'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'kubelet.service'] in directory '/root'
Nov 16 16:59:02 ctl01 systemd[1]: Started /bin/systemctl start kubelet.service.
Nov 16 16:59:02 ctl01 systemd[1]: Started Kubernetes Kubelet Server.
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kubelet.service'] in directory '/root'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'kubelet.service'] in directory '/root'
Nov 16 16:59:02 ctl01 systemd[1]: Started /bin/systemctl enable kubelet.service.
Nov 16 16:59:02 ctl01 systemd[1]: Reloading.
Nov 16 16:59:02 ctl01 kubelet[9716]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:02 ctl01 kubelet[9716]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:02 ctl01 kubelet[9716]: Flag --allow-privileged has been deprecated, will be removed in a future version
Nov 16 16:59:02 ctl01 kubelet[9716]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:02 ctl01 kubelet[9716]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:02 ctl01 kubelet[9716]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.509802    9716 flags.go:33] FLAG: --address="172.16.10.36"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.509936    9716 flags.go:33] FLAG: --allow-privileged="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510019    9716 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510112    9716 flags.go:33] FLAG: --alsologtostderr="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510214    9716 flags.go:33] FLAG: --anonymous-auth="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510307    9716 flags.go:33] FLAG: --application-metrics-count-limit="100"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510398    9716 flags.go:33] FLAG: --authentication-token-webhook="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510486    9716 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510575    9716 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510659    9716 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510746    9716 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510828    9716 flags.go:33] FLAG: --azure-container-registry-config=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510909    9716 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.510996    9716 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511073    9716 flags.go:33] FLAG: --bootstrap-kubeconfig=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511153    9716 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511230    9716 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511315    9716 flags.go:33] FLAG: --cgroup-root=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511397    9716 flags.go:33] FLAG: --cgroups-per-qos="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511482    9716 flags.go:33] FLAG: --chaos-chance="0"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511563    9716 flags.go:33] FLAG: --client-ca-file=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511639    9716 flags.go:33] FLAG: --cloud-config=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511718    9716 flags.go:33] FLAG: --cloud-provider=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511793    9716 flags.go:33] FLAG: --cluster-dns="[10.254.0.10]"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.511907    9716 flags.go:33] FLAG: --cluster-domain="cluster.local"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512006    9716 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512090    9716 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512168    9716 flags.go:33] FLAG: --config=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512249    9716 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512327    9716 flags.go:33] FLAG: --container-log-max-files="5"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512410    9716 flags.go:33] FLAG: --container-log-max-size="10Mi"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512492    9716 flags.go:33] FLAG: --container-runtime="remote"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512577    9716 flags.go:33] FLAG: --container-runtime-endpoint="unix:///run/containerd/containerd.sock"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512654    9716 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512740    9716 flags.go:33] FLAG: --containerized="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512818    9716 flags.go:33] FLAG: --contention-profiling="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512899    9716 flags.go:33] FLAG: --cpu-cfs-quota="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.512997    9716 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.513210    9716 flags.go:33] FLAG: --cpu-manager-policy="none"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.513299    9716 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.513377    9716 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.513455    9716 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.513536    9716 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.513613    9716 flags.go:33] FLAG: --docker-only="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.513698    9716 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.516530    9716 flags.go:33] FLAG: --docker-tls="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.516628    9716 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.516712    9716 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.516791    9716 flags.go:33] FLAG: --docker-tls-key="key.pem"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.516875    9716 flags.go:33] FLAG: --dynamic-config-dir=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.516957    9716 flags.go:33] FLAG: --enable-controller-attach-detach="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517071    9716 flags.go:33] FLAG: --enable-debugging-handlers="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517156    9716 flags.go:33] FLAG: --enable-load-reader="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517234    9716 flags.go:33] FLAG: --enable-server="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517316    9716 flags.go:33] FLAG: --enforce-node-allocatable="[pods]"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517403    9716 flags.go:33] FLAG: --event-burst="10"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517480    9716 flags.go:33] FLAG: --event-qps="5"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517557    9716 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517639    9716 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517724    9716 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517821    9716 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517900    9716 flags.go:33] FLAG: --eviction-minimum-reclaim=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.517985    9716 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518063    9716 flags.go:33] FLAG: --eviction-soft=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518145    9716 flags.go:33] FLAG: --eviction-soft-grace-period=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518242    9716 flags.go:33] FLAG: --exit-on-lock-contention="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518332    9716 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518458    9716 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518562    9716 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518671    9716 flags.go:33] FLAG: --experimental-dockershim="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518776    9716 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518882    9716 flags.go:33] FLAG: --experimental-fail-swap-on="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.518983    9716 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.519093    9716 flags.go:33] FLAG: --experimental-mounter-path=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.519192    9716 flags.go:33] FLAG: --fail-swap-on="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.519299    9716 flags.go:33] FLAG: --feature-gates=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.519403    9716 flags.go:33] FLAG: --file-check-frequency="20s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.519505    9716 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.519606    9716 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.519713    9716 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.519813    9716 flags.go:33] FLAG: --healthz-port="10248"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.519923    9716 flags.go:33] FLAG: --help="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.520026    9716 flags.go:33] FLAG: --host-ipc-sources="[*]"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.520166    9716 flags.go:33] FLAG: --host-network-sources="[*]"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.520287    9716 flags.go:33] FLAG: --host-pid-sources="[*]"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.520413    9716 flags.go:33] FLAG: --hostname-override="ctl01"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.520524    9716 flags.go:33] FLAG: --housekeeping-interval="10s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.520632    9716 flags.go:33] FLAG: --http-check-frequency="20s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.520744    9716 flags.go:33] FLAG: --image-gc-high-threshold="85"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.520851    9716 flags.go:33] FLAG: --image-gc-low-threshold="80"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.520951    9716 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521072    9716 flags.go:33] FLAG: --image-service-endpoint=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521172    9716 flags.go:33] FLAG: --iptables-drop-bit="15"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521272    9716 flags.go:33] FLAG: --iptables-masquerade-bit="14"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521371    9716 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521478    9716 flags.go:33] FLAG: --kube-api-burst="10"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521579    9716 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521680    9716 flags.go:33] FLAG: --kube-api-qps="5"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521780    9716 flags.go:33] FLAG: --kube-reserved=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521881    9716 flags.go:33] FLAG: --kube-reserved-cgroup=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.521982    9716 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.kubeconfig"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.522098    9716 flags.go:33] FLAG: --kubelet-cgroups=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.522220    9716 flags.go:33] FLAG: --lock-file=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.522344    9716 flags.go:33] FLAG: --log-backtrace-at=":0"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.522456    9716 flags.go:33] FLAG: --log-cadvisor-usage="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.522566    9716 flags.go:33] FLAG: --log-dir=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.522676    9716 flags.go:33] FLAG: --log-file=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.522790    9716 flags.go:33] FLAG: --log-flush-frequency="5s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.522900    9716 flags.go:33] FLAG: --logtostderr="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.523019    9716 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.523137    9716 flags.go:33] FLAG: --make-iptables-util-chains="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.523247    9716 flags.go:33] FLAG: --manifest-url=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.523390    9716 flags.go:33] FLAG: --manifest-url-header=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.523509    9716 flags.go:33] FLAG: --master-service-namespace="default"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.523620    9716 flags.go:33] FLAG: --max-open-files="1000000"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.523740    9716 flags.go:33] FLAG: --max-pods="110"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.523857    9716 flags.go:33] FLAG: --maximum-dead-containers="-1"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.523972    9716 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.524083    9716 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.524193    9716 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.524304    9716 flags.go:33] FLAG: --network-plugin="cni"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.524414    9716 flags.go:33] FLAG: --network-plugin-mtu="0"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.524524    9716 flags.go:33] FLAG: --node-ip="172.16.10.36"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.524634    9716 flags.go:33] FLAG: --node-labels="node-role.kubernetes.io/master=true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.524759    9716 flags.go:33] FLAG: --node-status-max-images="50"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.524874    9716 flags.go:33] FLAG: --node-status-update-frequency="10s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.524998    9716 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.525110    9716 flags.go:33] FLAG: --oom-score-adj="-999"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.525221    9716 flags.go:33] FLAG: --pod-cidr=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.525330    9716 flags.go:33] FLAG: --pod-infra-container-image="docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.525454    9716 flags.go:33] FLAG: --pod-manifest-path="/etc/kubernetes/manifests"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.525571    9716 flags.go:33] FLAG: --pod-max-pids="-1"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.525682    9716 flags.go:33] FLAG: --pods-per-core="0"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.525793    9716 flags.go:33] FLAG: --port="10250"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.525903    9716 flags.go:33] FLAG: --protect-kernel-defaults="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.526014    9716 flags.go:33] FLAG: --provider-id=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.526123    9716 flags.go:33] FLAG: --qos-reserved=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.526254    9716 flags.go:33] FLAG: --read-only-port="10255"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.526373    9716 flags.go:33] FLAG: --really-crash-for-testing="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.526489    9716 flags.go:33] FLAG: --redirect-container-streaming="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.526600    9716 flags.go:33] FLAG: --register-node="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.526720    9716 flags.go:33] FLAG: --register-schedulable="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.526831    9716 flags.go:33] FLAG: --register-with-taints=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.526942    9716 flags.go:33] FLAG: --registry-burst="10"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.527053    9716 flags.go:33] FLAG: --registry-qps="5"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.527163    9716 flags.go:33] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.527285    9716 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.527406    9716 flags.go:33] FLAG: --rotate-certificates="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.527520    9716 flags.go:33] FLAG: --rotate-server-certificates="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.527630    9716 flags.go:33] FLAG: --runonce="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.527740    9716 flags.go:33] FLAG: --runtime-cgroups=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.527855    9716 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.527970    9716 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.528088    9716 flags.go:33] FLAG: --serialize-image-pulls="true"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.528203    9716 flags.go:33] FLAG: --stderrthreshold="2"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.528314    9716 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.528429    9716 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.528545    9716 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.528655    9716 flags.go:33] FLAG: --storage-driver-password="root"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.528769    9716 flags.go:33] FLAG: --storage-driver-secure="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.528886    9716 flags.go:33] FLAG: --storage-driver-table="stats"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.529045    9716 flags.go:33] FLAG: --storage-driver-user="root"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.529161    9716 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.529272    9716 flags.go:33] FLAG: --sync-frequency="1m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.529387    9716 flags.go:33] FLAG: --system-cgroups=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.529497    9716 flags.go:33] FLAG: --system-reserved=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.529612    9716 flags.go:33] FLAG: --system-reserved-cgroup=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.529726    9716 flags.go:33] FLAG: --tls-cert-file=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.529844    9716 flags.go:33] FLAG: --tls-cipher-suites="[]"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.529962    9716 flags.go:33] FLAG: --tls-min-version=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.530078    9716 flags.go:33] FLAG: --tls-private-key-file=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.530212    9716 flags.go:33] FLAG: --v="2"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.530324    9716 flags.go:33] FLAG: --version="false"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.530438    9716 flags.go:33] FLAG: --vmodule=""
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.530549    9716 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.530706    9716 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.530862    9716 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:02 ctl01 kubelet[9716]: W1116 16:59:02.530993    9716 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
Nov 16 16:59:02 ctl01 kubelet[9716]: W1116 16:59:02.531114    9716 options.go:266] in 1.15, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
Nov 16 16:59:02 ctl01 kubelet[9716]: W1116 16:59:02.531234    9716 server.go:182] Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead
Nov 16 16:59:02 ctl01 kubelet[9716]: I1116 16:59:02.531413    9716 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:02 ctl01 systemd[1]: kubelet.service: Dependency Conflicts=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:02 ctl01 systemd[1]: kubelet.service: Dependency ConflictedBy=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] {'kubelet': True}
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Completed state [kubelet] at time 16:59:02.615748 duration_in_ms=330.567
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/logrotate.d/kubernetes] at time 16:59:02.616379
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/logrotate.d/kubernetes]
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/logrotate'
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:59:02 ctl01 salt-minion[4526]: New file
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/logrotate.d/kubernetes] at time 16:59:02.648678 duration_in_ms=32.297
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Running state [/opt/cni/bin] at time 16:59:02.649093
Nov 16 16:59:02 ctl01 salt-minion[4526]: [INFO    ] Executing state archive.extracted for [/opt/cni/bin]
Nov 16 16:59:03 ctl01 systemd[1]: Started Kubernetes systemd probe.
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.429686    9716 mount_linux.go:180] Detected OS with systemd
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.429794    9716 server.go:407] Version: v1.13.5-3+98374c02d2d8c1
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.429904    9716 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.430050    9716 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:03 ctl01 kubelet[9716]: W1116 16:59:03.430111    9716 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
Nov 16 16:59:03 ctl01 kubelet[9716]: W1116 16:59:03.430147    9716 options.go:266] in 1.15, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.430404    9716 plugins.go:103] No cloud provider specified.
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.430439    9716 server.go:523] No cloud provider specified: "" from the config file: ""
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.442769    9716 manager.go:155] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.444138    9716 fs.go:142] Filesystem UUIDs: map[2ff13334-aecb-43c4-82f2-b8bb5fa56dda:/dev/vda1 9E29-9F5A:/dev/vda15 2019-11-16-17-50-40-00:/dev/sr0]
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.444174    9716 fs.go:143] Filesystem partitions: map[tmpfs:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /dev/vda1:{mountpoint:/ major:252 minor:1 fsType:ext4 blockSize:0}]
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.449738    9716 manager.go:229] Machine: {NumCores:8 CpuFrequency:2799994 MemoryCapacity:14704979968 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:d108ca480c9a479aa6ed963179e5367f SystemUUID:D108CA48-0C9A-479A-A6ED-963179E5367F BootID:31a78d97-680e-4185-91ee-188069d3cfdc Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:24 Capacity:1470500864 Type:vfs Inodes:1795041 HasInodes:true} {Device:/dev/vda1 DeviceMajor:252 DeviceMinor:1 Capacity:103880232960 Type:vfs Inodes:12902400 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:107374182400 Scheduler:none}] NetworkDevices:[{Name:br-mgmt MacAddress:52:54:00:9d:e7:5c Speed:0 Mtu:9000} {Name:ens3 MacAddress:52:54:00:9c:aa:a4 Speed:-1 Mtu:9000} {Name:ens4 MacAddress:52:54:00:9d:e7:5c Speed:-1 Mtu:9000} {Name:ens5 MacAddress:52:54:00:08:c5:ff Speed:-1 Mtu:9000} {Name:ens5.1000 MacAddress:52:54:00:08:c5:ff Speed:-1 Mtu:9000} {Name:ens6 MacAddress:52:54:00:c0:ab:72 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:14704979968 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:1 Memory:0 Cores:[{Id:0 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:2 Memory:0 Cores:[{Id:0 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:3 Memory:0 Cores:[{Id:0 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:4 Memory:0 Cores:[{Id:0 Threads:[4] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:5 Memory:0 Cores:[{Id:0 Threads:[5] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:6 Memory:0 Cores:[{Id:0 Threads:[6] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:7 Memory:0 Cores:[{Id:0 Threads:[7] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.450127    9716 manager.go:235] Version: {KernelVersion:4.15.0-70-generic ContainerOsVersion:Ubuntu 18.04.3 LTS DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.450267    9716 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.450544    9716 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.450566    9716 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.450721    9716 container_manager_linux.go:272] Creating device plugin manager: true
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.450735    9716 manager.go:109] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.450834    9716 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.462397    9716 server.go:941] Using root directory: /var/lib/kubelet
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.462433    9716 kubelet.go:281] Adding pod path: /etc/kubernetes/manifests
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.462461    9716 file.go:68] Watching path "/etc/kubernetes/manifests"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.462478    9716 kubelet.go:306] Watching apiserver
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.463135    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.463165    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.463538    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.472779    9716 kuberuntime_manager.go:198] Container runtime containerd initialized, version: 1.2.6-0ubuntu1~18.04.2, apiVersion: v1alpha2
Nov 16 16:59:03 ctl01 kubelet[9716]: W1116 16:59:03.473168    9716 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473499    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/aws-ebs"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473527    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/empty-dir"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473539    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/gce-pd"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473549    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/git-repo"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473608    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/host-path"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473660    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/nfs"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473710    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/secret"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473759    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/iscsi"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473775    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/glusterfs"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473826    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/rbd"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473874    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/cinder"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473922    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/quobyte"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.473965    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/cephfs"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474017    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/downward-api"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474064    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/fc"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474120    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/flocker"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474167    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-file"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474233    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/configmap"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474247    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/vsphere-volume"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474294    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-disk"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474307    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/photon-pd"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474353    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/projected"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474444    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/portworx-volume"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474492    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/scaleio"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474541    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/local-volume"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474554    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/storageos"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.474604    9716 plugins.go:547] Loaded volume plugin "kubernetes.io/csi"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.475119    9716 server.go:999] Started kubelet
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.475770    9716 server.go:157] Starting to listen read-only on 172.16.10.36:10255
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.476090    9716 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.476130    9716 status_manager.go:152] Starting to sync pod status with apiserver
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.476147    9716 kubelet.go:1829] Starting kubelet main sync loop.
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.476160    9716 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.476422    9716 volume_manager.go:246] The desired_state_of_world populator starts
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.476432    9716 volume_manager.go:248] Starting Kubelet Volume Manager
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.476823    9716 server.go:137] Starting to listen on 172.16.10.36:10250
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.477178    9716 desired_state_of_world_populator.go:130] Desired state populator starts to run
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.477569    9716 server.go:333] Adding debug handlers to kubelet server.
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.481141    9716 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.481248    9716 cri_stats_provider.go:320] Failed to get the info of the filesystem with mountpoint "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.481263    9716 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.487044    9716 factory.go:136] Registering containerd factory
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.488816    9716 factory.go:54] Registering systemd factory
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.489275    9716 factory.go:97] Registering Raw factory
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.489529    9716 manager.go:1222] Started watching for new ooms in manager
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.491804    9716 manager.go:365] Starting recovery of all containers
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.551062    9716 manager.go:370] Recovery completed
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.577474    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.577535    9716 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.577548    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.578021    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.579234    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.579376    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.579485    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.579621    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.580153    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.610340    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.610674    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.611758    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.611882    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.611992    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.612177    9716 cpu_manager.go:155] [cpumanager] starting with none policy
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.612317    9716 cpu_manager.go:156] [cpumanager] reconciling every 10s
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.612462    9716 policy_none.go:42] [cpumanager] none policy: Start
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.613014    9716 container_manager_linux.go:376] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.613202    9716 container_manager_linux.go:376] Updating kernel flag: kernel/panic, expected value: 10, actual value: 60
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.613359    9716 container_manager_linux.go:376] Updating kernel flag: kernel/panic_on_oops, expected value: 1, actual value: 0
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.630611    9716 manager.go:196] Starting Device Plugin manager
Nov 16 16:59:03 ctl01 kubelet[9716]: W1116 16:59:03.630813    9716 manager.go:537] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.631107    9716 manager.go:231] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.631316    9716 plugin_watcher.go:90] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.631927    9716 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "ctl01" not found
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.677732    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.777784    9716 kubelet.go:1908] SyncLoop (ADD, "file"): ""
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.777890    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.780489    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.780926    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.782395    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.782451    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.782466    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: I1116 16:59:03.782507    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.783360    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.878068    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:03 ctl01 kubelet[9716]: E1116 16:59:03.978655    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.078831    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.179337    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.183629    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.184238    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.185632    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.185675    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.185689    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.185714    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.186118    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.279581    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.379790    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.463937    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.465289    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.466159    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.479940    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.580300    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 salt-minion[4526]: [INFO    ] Executing command ['tar', 'xz', '-f', '/var/cache/salt/minion/extrn_files/base/docker-prod-local.artifactory.mirantis.com/artifactory/binary-prod-local/mirantis/kubernetes/containernetworking-plugins/containernetworking-plugins_v0.7.2-173-g8db2808.tar.gz'] in directory '/opt/cni/bin/'
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.681511    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.781723    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.881932    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.982085    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.986281    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.986579    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.987835    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.987914    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.987953    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:04 ctl01 kubelet[9716]: I1116 16:59:04.987989    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:04 ctl01 kubelet[9716]: E1116 16:59:04.988713    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.082258    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.182422    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Executing command ['tar', '--version'] in directory '/root'
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.282637    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] {'extracted_files': 'no tar output so far'}
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/opt/cni/bin] at time 16:59:05.287021 duration_in_ms=2637.919
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/proxy.kubeconfig] at time 16:59:05.287659
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/proxy.kubeconfig]
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-proxy/proxy.kubeconfig'
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.383058    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.464932    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.465971    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.467003    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.483412    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.583566    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.683791    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.783953    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.884148    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:59:05 ctl01 salt-minion[4526]: New file
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/proxy.kubeconfig] at time 16:59:05.970351 duration_in_ms=682.691
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/kube-proxy.service] at time 16:59:05.970698
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kube-proxy.service]
Nov 16 16:59:05 ctl01 kubelet[9716]: E1116 16:59:05.984402    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:05 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/systemd/kube-proxy.service'
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:59:06 ctl01 salt-minion[4526]: New file
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/kube-proxy.service] at time 16:59:06.006934 duration_in_ms=36.236
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/default/kube-proxy] at time 16:59:06.007230
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/default/kube-proxy]
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 16:59:06 ctl01 salt-minion[4526]: New file
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/default/kube-proxy] at time 16:59:06.009808 duration_in_ms=2.578
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Running state [kube-proxy] at time 16:59:06.011593
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [kube-proxy]
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'kube-proxy.service', '-n', '0'] in directory '/root'
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.084616    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:06 ctl01 systemd[1]: Started /bin/systemctl start kube-proxy.service.
Nov 16 16:59:06 ctl01 systemd[1]: Started Kubernetes Kube-Proxy Server.
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.184823    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:06 ctl01 systemd[1]: Started /bin/systemctl enable kube-proxy.service.
Nov 16 16:59:06 ctl01 systemd[1]: Reloading.
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.237271    9926 flags.go:33] FLAG: --alsologtostderr="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238176    9926 flags.go:33] FLAG: --application-metrics-count-limit="100"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238249    9926 flags.go:33] FLAG: --azure-container-registry-config=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238259    9926 flags.go:33] FLAG: --bind-address="0.0.0.0"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238267    9926 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238274    9926 flags.go:33] FLAG: --cleanup="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238281    9926 flags.go:33] FLAG: --cleanup-iptables="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238287    9926 flags.go:33] FLAG: --cleanup-ipvs="true"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238292    9926 flags.go:33] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238302    9926 flags.go:33] FLAG: --cluster-cidr="192.168.0.0/16"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238308    9926 flags.go:33] FLAG: --config=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238314    9926 flags.go:33] FLAG: --config-sync-period="15m0s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238321    9926 flags.go:33] FLAG: --conntrack-max="0"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238328    9926 flags.go:33] FLAG: --conntrack-max-per-core="32768"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238335    9926 flags.go:33] FLAG: --conntrack-min="131072"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238340    9926 flags.go:33] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238346    9926 flags.go:33] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238352    9926 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238358    9926 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238364    9926 flags.go:33] FLAG: --default-not-ready-toleration-seconds="300"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238370    9926 flags.go:33] FLAG: --default-unreachable-toleration-seconds="300"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238375    9926 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238381    9926 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238386    9926 flags.go:33] FLAG: --docker-only="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238392    9926 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238397    9926 flags.go:33] FLAG: --docker-tls="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238403    9926 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238408    9926 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238413    9926 flags.go:33] FLAG: --docker-tls-key="key.pem"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238418    9926 flags.go:33] FLAG: --enable-load-reader="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238424    9926 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238429    9926 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238435    9926 flags.go:33] FLAG: --feature-gates=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238443    9926 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238449    9926 flags.go:33] FLAG: --healthz-bind-address="0.0.0.0:10256"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238455    9926 flags.go:33] FLAG: --healthz-port="10256"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238461    9926 flags.go:33] FLAG: --help="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238466    9926 flags.go:33] FLAG: --hostname-override=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238471    9926 flags.go:33] FLAG: --housekeeping-interval="10s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238489    9926 flags.go:33] FLAG: --iptables-masquerade-bit="14"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238497    9926 flags.go:33] FLAG: --iptables-min-sync-period="0s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238503    9926 flags.go:33] FLAG: --iptables-sync-period="30s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238510    9926 flags.go:33] FLAG: --ipvs-exclude-cidrs="[]"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238528    9926 flags.go:33] FLAG: --ipvs-min-sync-period="0s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238535    9926 flags.go:33] FLAG: --ipvs-scheduler=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238541    9926 flags.go:33] FLAG: --ipvs-sync-period="30s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238557    9926 flags.go:33] FLAG: --kube-api-burst="10"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238563    9926 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238570    9926 flags.go:33] FLAG: --kube-api-qps="5"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238585    9926 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/proxy.kubeconfig"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238593    9926 flags.go:33] FLAG: --log-backtrace-at=":0"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238602    9926 flags.go:33] FLAG: --log-cadvisor-usage="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238609    9926 flags.go:33] FLAG: --log-dir=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238616    9926 flags.go:33] FLAG: --log-file=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238622    9926 flags.go:33] FLAG: --log-flush-frequency="5s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238628    9926 flags.go:33] FLAG: --logtostderr="true"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238635    9926 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238642    9926 flags.go:33] FLAG: --masquerade-all="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238648    9926 flags.go:33] FLAG: --master=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238654    9926 flags.go:33] FLAG: --mesos-agent="127.0.0.1:5051"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238661    9926 flags.go:33] FLAG: --mesos-agent-timeout="10s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238667    9926 flags.go:33] FLAG: --metrics-bind-address="127.0.0.1:10249"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238674    9926 flags.go:33] FLAG: --metrics-port="10249"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238681    9926 flags.go:33] FLAG: --nodeport-addresses="[]"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238692    9926 flags.go:33] FLAG: --oom-score-adj="-999"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238699    9926 flags.go:33] FLAG: --profiling="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238705    9926 flags.go:33] FLAG: --proxy-mode="iptables"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238713    9926 flags.go:33] FLAG: --proxy-port-range=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238721    9926 flags.go:33] FLAG: --resource-container="/kube-proxy"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238728    9926 flags.go:33] FLAG: --skip-headers="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238734    9926 flags.go:33] FLAG: --stderrthreshold="2"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238740    9926 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238747    9926 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238753    9926 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238760    9926 flags.go:33] FLAG: --storage-driver-password="root"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238766    9926 flags.go:33] FLAG: --storage-driver-secure="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238772    9926 flags.go:33] FLAG: --storage-driver-table="stats"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238778    9926 flags.go:33] FLAG: --storage-driver-user="root"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238794    9926 flags.go:33] FLAG: --udp-timeout="250ms"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238801    9926 flags.go:33] FLAG: --v="2"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238807    9926 flags.go:33] FLAG: --version="false"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238817    9926 flags.go:33] FLAG: --vmodule=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238823    9926 flags.go:33] FLAG: --write-config-to=""
Nov 16 16:59:06 ctl01 kube-proxy[9926]: W1116 16:59:06.238833    9926 server.go:198] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.238879    9926 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:06 ctl01 kernel: [  205.073519] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Nov 16 16:59:06 ctl01 kernel: [  205.073568] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.286024    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:06 ctl01 systemd[1]: kubelet.service: Dependency Conflicts=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:06 ctl01 systemd[1]: kubelet.service: Dependency ConflictedBy=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.390308    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] {'kube-proxy': True}
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Completed state [kube-proxy] at time 16:59:06.404733 duration_in_ms=393.139
Nov 16 16:59:06 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165753766584
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.465671    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.466752    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.467843    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.490725    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:06 ctl01 kernel: [  205.368845] IPVS: ipvs loaded.
Nov 16 16:59:06 ctl01 kernel: [  205.376031] IPVS: [rr] scheduler registered.
Nov 16 16:59:06 ctl01 kernel: [  205.383219] IPVS: [wrr] scheduler registered.
Nov 16 16:59:06 ctl01 kernel: [  205.387439] IPVS: [sh] scheduler registered.
Nov 16 16:59:06 ctl01 kubelet[9716]: I1116 16:59:06.588951    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:06 ctl01 kubelet[9716]: I1116 16:59:06.589285    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:06 ctl01 kubelet[9716]: I1116 16:59:06.590073    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:06 ctl01 kubelet[9716]: I1116 16:59:06.590125    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:06 ctl01 kubelet[9716]: I1116 16:59:06.590144    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:06 ctl01 kubelet[9716]: I1116 16:59:06.590177    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.590832    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.590847    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:06 ctl01 kube-proxy[9926]: W1116 16:59:06.597395    9926 node.go:103] Failed to retrieve node info: Get https://172.16.10.36:443/api/v1/nodes/ctl01: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.597433    9926 server_others.go:148] Using iptables Proxier.
Nov 16 16:59:06 ctl01 kube-proxy[9926]: W1116 16:59:06.597543    9926 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.597602    9926 server_others.go:178] Tearing down inactive rules.
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.623978    9926 server.go:483] Version: v1.13.5-3+98374c02d2d8c1
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.632326    9926 server.go:509] Running in resource-only container "/kube-proxy"
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.633092    9926 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.633165    9926 conntrack.go:52] Setting nf_conntrack_max to 262144
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.633348    9926 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.633441    9926 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.633600    9926 config.go:102] Starting endpoints config controller
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.633624    9926 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.633650    9926 config.go:202] Starting service config controller
Nov 16 16:59:06 ctl01 kube-proxy[9926]: I1116 16:59:06.633657    9926 controller_utils.go:1027] Waiting for caches to sync for service config controller
Nov 16 16:59:06 ctl01 kube-proxy[9926]: E1116 16:59:06.633927    9926 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:06 ctl01 kube-proxy[9926]: E1116 16:59:06.634139    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 ctl01 kube-proxy[9926]: E1116 16:59:06.634236    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.691157    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.791402    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.891739    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:06 ctl01 kubelet[9716]: E1116 16:59:06.991965    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.092209    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.192548    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.292776    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.393055    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.466812    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.467723    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.468654    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.493714    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.594075    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kube-proxy[9926]: E1116 16:59:07.634973    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 ctl01 kube-proxy[9926]: E1116 16:59:07.636973    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.694411    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.794706    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.894962    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:07 ctl01 kubelet[9716]: E1116 16:59:07.995373    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.099320    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.199701    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.300009    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.400351    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.467879    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.469200    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.469728    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.500580    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.600830    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:08 ctl01 kube-proxy[9926]: E1116 16:59:08.635819    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 ctl01 kube-proxy[9926]: E1116 16:59:08.637897    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.701076    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.801427    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:08 ctl01 kubelet[9716]: E1116 16:59:08.902117    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.002493    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.102784    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.203236    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.303512    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.404117    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.469209    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.470160    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.471297    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.504768    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.605236    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kube-proxy[9926]: E1116 16:59:09.636970    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 ctl01 kube-proxy[9926]: E1116 16:59:09.638925    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.705581    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.805863    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:09 ctl01 kubelet[9716]: I1116 16:59:09.813442    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:09 ctl01 kubelet[9716]: I1116 16:59:09.813789    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:09 ctl01 kubelet[9716]: I1116 16:59:09.814720    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:09 ctl01 kubelet[9716]: I1116 16:59:09.814938    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:09 ctl01 kubelet[9716]: I1116 16:59:09.815124    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:09 ctl01 kubelet[9716]: I1116 16:59:09.815335    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.815850    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 ctl01 kubelet[9716]: E1116 16:59:09.906264    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.006520    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.107098    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.207723    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.252696    9716 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.308771    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.409320    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.470299    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.471620    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.472674    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.509986    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.610319    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kube-proxy[9926]: E1116 16:59:10.638014    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 ctl01 kube-proxy[9926]: E1116 16:59:10.639986    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.710678    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.810998    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:10 ctl01 kubelet[9716]: E1116 16:59:10.911387    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.011783    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.112183    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.212566    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.313002    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.413352    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.471466    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.472912    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.474084    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.513661    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.614315    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kube-proxy[9926]: E1116 16:59:11.638983    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 ctl01 kube-proxy[9926]: E1116 16:59:11.640786    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.714587    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.815222    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:11 ctl01 kubelet[9716]: E1116 16:59:11.915770    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.016328    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.116957    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.217569    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.318124    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.418797    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.472714    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.473859    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.474933    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.519406    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.619954    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kube-proxy[9926]: E1116 16:59:12.639952    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 ctl01 kube-proxy[9926]: E1116 16:59:12.641811    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.721326    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.821588    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:12 ctl01 kubelet[9716]: E1116 16:59:12.921863    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.022172    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.122561    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.222842    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.323104    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.423281    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.474002    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.474878    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.475867    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.523546    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.623790    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.632361    9716 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "ctl01" not found
Nov 16 16:59:13 ctl01 kube-proxy[9926]: E1116 16:59:13.640745    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 ctl01 kube-proxy[9926]: E1116 16:59:13.642660    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.723972    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.824365    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:13 ctl01 kubelet[9716]: E1116 16:59:13.924553    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.024840    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.125148    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.225584    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.325850    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.426121    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.475081    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.476014    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.477409    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.526409    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.626724    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kube-proxy[9926]: E1116 16:59:14.641854    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 ctl01 kube-proxy[9926]: E1116 16:59:14.643517    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.726955    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.827146    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:14 ctl01 kubelet[9716]: E1116 16:59:14.927329    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.027503    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.127762    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.228094    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kube-proxy[9926]: E1116 16:59:15.229684    9926 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.328357    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.428449    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.475904    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.477287    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.478366    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.528780    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.629100    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kube-proxy[9926]: E1116 16:59:15.642987    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 ctl01 kube-proxy[9926]: E1116 16:59:15.644301    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.729353    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.829635    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:15 ctl01 kubelet[9716]: E1116 16:59:15.929899    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.030239    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.130496    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kubelet[9716]: I1116 16:59:16.216264    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:16 ctl01 kubelet[9716]: I1116 16:59:16.216692    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:16 ctl01 kubelet[9716]: I1116 16:59:16.218012    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:16 ctl01 kubelet[9716]: I1116 16:59:16.218079    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:16 ctl01 kubelet[9716]: I1116 16:59:16.218102    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:16 ctl01 kubelet[9716]: I1116 16:59:16.218144    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.218712    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.230666    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.330851    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.431049    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.476828    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.477958    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.479210    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.531339    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.631688    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kube-proxy[9926]: E1116 16:59:16.643818    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 ctl01 kube-proxy[9926]: E1116 16:59:16.645196    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.731948    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.832198    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:16 ctl01 kubelet[9716]: E1116 16:59:16.932402    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.032625    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.132941    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.233110    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.333424    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.433651    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.477870    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.478938    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.480391    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.534010    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.634332    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kube-proxy[9926]: E1116 16:59:17.644960    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 ctl01 kube-proxy[9926]: E1116 16:59:17.646278    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.734567    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.834919    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:17 ctl01 kubelet[9716]: E1116 16:59:17.935207    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.035601    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.135922    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.236309    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.336621    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.436973    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.478875    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.479833    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.481257    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.537324    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.637542    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kube-proxy[9926]: E1116 16:59:18.645737    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 ctl01 kube-proxy[9926]: E1116 16:59:18.646948    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.737803    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.838004    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:18 ctl01 kubelet[9716]: E1116 16:59:18.938293    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.038590    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.138922    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.239153    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.339233    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.439414    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.479929    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.481003    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.482316    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.539762    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.639967    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kube-proxy[9926]: E1116 16:59:19.646339    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 ctl01 kube-proxy[9926]: E1116 16:59:19.647735    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.740857    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.841430    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:19 ctl01 kubelet[9716]: E1116 16:59:19.941646    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.041964    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.142242    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.242435    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.253356    9716 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.342632    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.442829    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.480514    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.481466    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.482771    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.544222    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.644609    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kube-proxy[9926]: E1116 16:59:20.646927    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 ctl01 kube-proxy[9926]: E1116 16:59:20.648173    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.744861    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.845108    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:20 ctl01 kubelet[9716]: E1116 16:59:20.945842    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.046263    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.146638    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.246942    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.347500    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.447760    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.481201    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.482058    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.483280    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.547953    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kube-proxy[9926]: E1116 16:59:21.647720    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 ctl01 kube-proxy[9926]: E1116 16:59:21.648631    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.648591    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.748768    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.848957    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:21 ctl01 kubelet[9716]: E1116 16:59:21.949568    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.049903    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.150169    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.250456    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.350751    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.451219    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.482876    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.482978    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.484191    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.551955    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kube-proxy[9926]: E1116 16:59:22.649266    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 ctl01 kube-proxy[9926]: E1116 16:59:22.649432    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.652447    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.753090    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.853813    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:22 ctl01 kubelet[9716]: E1116 16:59:22.954352    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.054734    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.155078    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: I1116 16:59:23.218944    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:23 ctl01 kubelet[9716]: I1116 16:59:23.219270    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:23 ctl01 kubelet[9716]: I1116 16:59:23.220902    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:23 ctl01 kubelet[9716]: I1116 16:59:23.221022    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:23 ctl01 kubelet[9716]: I1116 16:59:23.221075    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:23 ctl01 kubelet[9716]: I1116 16:59:23.221146    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.222243    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.255374    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.355632    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.455882    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.483813    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.484704    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.485756    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.556260    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.632795    9716 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "ctl01" not found
Nov 16 16:59:23 ctl01 kube-proxy[9926]: E1116 16:59:23.650403    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 ctl01 kube-proxy[9926]: E1116 16:59:23.651428    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.656625    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.756878    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.857174    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:23 ctl01 kubelet[9716]: E1116 16:59:23.957363    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.057727    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.158023    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.258208    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.358419    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.458594    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.484501    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.485527    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.486702    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.558860    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kube-proxy[9926]: E1116 16:59:24.651218    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 ctl01 kube-proxy[9926]: E1116 16:59:24.651919    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.659065    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.759261    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.859462    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:24 ctl01 kubelet[9716]: E1116 16:59:24.959762    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.060038    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.160271    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kube-proxy[9926]: E1116 16:59:25.230746    9926 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.260501    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.360698    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.460910    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.485277    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.486375    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.487189    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.561195    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kube-proxy[9926]: E1116 16:59:25.652286    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 ctl01 kube-proxy[9926]: E1116 16:59:25.653806    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.661468    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.761732    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.862112    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:25 ctl01 kubelet[9716]: E1116 16:59:25.962376    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.062626    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.162883    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.263144    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.363372    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.463633    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.486211    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.486990    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.488048    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.563845    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kube-proxy[9926]: E1116 16:59:26.653646    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 ctl01 kube-proxy[9926]: E1116 16:59:26.654627    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.664690    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.764940    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.865269    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:26 ctl01 kubelet[9716]: E1116 16:59:26.966249    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.066514    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.166853    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.267229    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.367493    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.467743    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.487396    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.488299    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.489420    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.568387    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kube-proxy[9926]: E1116 16:59:27.655012    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 ctl01 kube-proxy[9926]: E1116 16:59:27.655760    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.668745    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.768991    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.869340    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:27 ctl01 kubelet[9716]: E1116 16:59:27.969620    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.069956    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.170196    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.270475    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.370839    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.471111    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.488445    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.489595    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.490450    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.571371    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kube-proxy[9926]: E1116 16:59:28.656179    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 ctl01 kube-proxy[9926]: E1116 16:59:28.657051    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.671725    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.772747    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.873007    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:28 ctl01 kubelet[9716]: E1116 16:59:28.973392    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.073740    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.174000    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.274366    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.374597    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.474917    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.489796    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.490822    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.491196    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.575212    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kube-proxy[9926]: E1116 16:59:29.657325    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 ctl01 kube-proxy[9926]: E1116 16:59:29.658360    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.675818    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.775981    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.876150    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:29 ctl01 kubelet[9716]: E1116 16:59:29.976363    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.076526    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.176736    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kubelet[9716]: I1116 16:59:30.222519    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:30 ctl01 kubelet[9716]: I1116 16:59:30.222916    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:30 ctl01 kubelet[9716]: I1116 16:59:30.224071    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:30 ctl01 kubelet[9716]: I1116 16:59:30.224120    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:30 ctl01 kubelet[9716]: I1116 16:59:30.224139    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:30 ctl01 kubelet[9716]: I1116 16:59:30.224167    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.224659    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.254053    9716 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.276920    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.377166    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.477418    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.490425    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.491413    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.492735    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.577618    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kube-proxy[9926]: E1116 16:59:30.658148    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 ctl01 kube-proxy[9926]: E1116 16:59:30.658829    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.677787    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.777945    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.878133    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:30 ctl01 kubelet[9716]: E1116 16:59:30.978286    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.078439    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.178616    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.278792    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.379034    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.479205    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.490987    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.492006    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.493266    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.579416    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kube-proxy[9926]: E1116 16:59:31.659242    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 ctl01 kube-proxy[9926]: E1116 16:59:31.660134    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.679670    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.779884    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.880046    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:31 ctl01 kubelet[9716]: E1116 16:59:31.980260    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.080571    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.180961    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.281313    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.381579    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.481883    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.491939    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.492885    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.494596    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.582120    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kube-proxy[9926]: E1116 16:59:32.660329    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 ctl01 kube-proxy[9926]: E1116 16:59:32.661121    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.682411    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.783439    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.883758    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:32 ctl01 kubelet[9716]: E1116 16:59:32.984191    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.084415    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.184682    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.284943    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.385327    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.485801    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.492666    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.493577    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.495411    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.586079    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.633160    9716 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "ctl01" not found
Nov 16 16:59:33 ctl01 kube-proxy[9926]: E1116 16:59:33.661409    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 ctl01 kube-proxy[9926]: E1116 16:59:33.662470    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.686403    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command cmd.run with jid 20191116165933687300
Nov 16 16:59:33 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 10093
Nov 16 16:59:33 ctl01 salt-minion[4526]: [INFO    ] Executing command 'calicoctl node status' in directory '/root'
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.786624    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165933687300
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.886960    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:33 ctl01 kubelet[9716]: E1116 16:59:33.987205    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.087561    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.187774    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.288099    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.388459    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command cmd.run with jid 20191116165934415277
Nov 16 16:59:34 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 10112
Nov 16 16:59:34 ctl01 salt-minion[4526]: [INFO    ] Executing command 'calicoctl get ippool' in directory '/root'
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.488724    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.493666    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.495090    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.496318    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165934415277
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.589047    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 kube-proxy[9926]: E1116 16:59:34.662571    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 ctl01 kube-proxy[9926]: E1116 16:59:34.663387    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.689392    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.789616    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.889985    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:34 ctl01 kubelet[9716]: E1116 16:59:34.990151    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.090370    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.190604    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command state.sls with jid 20191116165935179717
Nov 16 16:59:35 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 10133
Nov 16 16:59:35 ctl01 kube-proxy[9926]: E1116 16:59:35.231766    9926 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.290951    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.391235    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.491497    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.494678    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.495617    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.496753    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.591792    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 kube-proxy[9926]: E1116 16:59:35.663360    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 ctl01 kube-proxy[9926]: E1116 16:59:35.664327    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.692070    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.792263    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.892450    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:35 ctl01 kubelet[9716]: E1116 16:59:35.992777    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.092950    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.193170    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.293473    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.393697    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.493886    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.495667    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.496502    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.497906    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.594153    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kube-proxy[9926]: I1116 16:59:36.633875    9926 proxier.go:645] Not syncing iptables until Services and Endpoints have been received from master
Nov 16 16:59:36 ctl01 kube-proxy[9926]: E1116 16:59:36.664390    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 ctl01 kube-proxy[9926]: E1116 16:59:36.665306    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.694486    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.794776    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.895121    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:36 ctl01 kubelet[9716]: E1116 16:59:36.995433    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.095767    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.196123    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kubelet[9716]: I1116 16:59:37.225082    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:37 ctl01 kubelet[9716]: I1116 16:59:37.225955    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:37 ctl01 kubelet[9716]: I1116 16:59:37.228052    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:37 ctl01 kubelet[9716]: I1116 16:59:37.228146    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:37 ctl01 kubelet[9716]: I1116 16:59:37.228176    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:37 ctl01 kubelet[9716]: I1116 16:59:37.228234    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.228981    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.296404    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.396749    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.496555    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.496973    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.497762    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.498368    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.597251    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kube-proxy[9926]: E1116 16:59:37.665233    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 ctl01 kube-proxy[9926]: E1116 16:59:37.666073    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.697561    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.797881    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.898196    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:37 ctl01 kubelet[9716]: E1116 16:59:37.998500    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.098689    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.198953    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.299136    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.399432    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.497363    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.498740    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.499655    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.499867    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.599934    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:38 ctl01 kube-proxy[9926]: E1116 16:59:38.665970    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 ctl01 kube-proxy[9926]: E1116 16:59:38.666851    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.700103    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.800334    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:38 ctl01 kubelet[9716]: E1116 16:59:38.900537    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.000804    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.101132    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.201367    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.301631    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.401993    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.498081    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.499179    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.500305    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.502154    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.602428    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kube-proxy[9926]: E1116 16:59:39.666646    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 ctl01 kube-proxy[9926]: E1116 16:59:39.667682    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.702715    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.803040    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:39 ctl01 kubelet[9716]: E1116 16:59:39.903221    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.003450    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.103659    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.203953    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.254934    9716 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.304215    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.404442    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.498643    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.499618    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.501013    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.504622    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.604822    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kube-proxy[9926]: E1116 16:59:40.667545    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 ctl01 kube-proxy[9926]: E1116 16:59:40.668343    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.705010    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.805261    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:40 ctl01 kubelet[9716]: E1116 16:59:40.905543    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.005732    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.108469    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.208690    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.308949    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.409241    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.499411    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.500525    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.501477    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.509400    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.609775    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'etcd/server/setup.sls'
Nov 16 16:59:41 ctl01 salt-minion[4526]: [INFO    ] Running state [/calico/ipam/v2/assignment/ipv4/block/192.168.0.0-16] at time 16:59:41.664468
Nov 16 16:59:41 ctl01 salt-minion[4526]: [INFO    ] Executing state etcd.set for [/calico/ipam/v2/assignment/ipv4/block/192.168.0.0-16]
Nov 16 16:59:41 ctl01 kube-proxy[9926]: E1116 16:59:41.668501    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 ctl01 kube-proxy[9926]: E1116 16:59:41.669480    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 ctl01 salt-minion[4526]: [ERROR   ] etcd: Key not found : /calico
Nov 16 16:59:41 ctl01 salt-minion[4526]: [INFO    ] {'/calico/ipam/v2/assignment/ipv4/block/192.168.0.0-16': '{"masquerade":true,"cidr":"192.168.0.0/16"}'}
Nov 16 16:59:41 ctl01 salt-minion[4526]: [INFO    ] Completed state [/calico/ipam/v2/assignment/ipv4/block/192.168.0.0-16] at time 16:59:41.672739 duration_in_ms=8.272
Nov 16 16:59:41 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165935179717
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.709996    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.810296    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:41 ctl01 kubelet[9716]: E1116 16:59:41.910550    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.010856    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.111107    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.211432    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command state.sls with jid 20191116165942225753
Nov 16 16:59:42 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 10148
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.311791    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.412046    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.500564    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.501405    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.502322    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.512250    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.612487    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 kube-proxy[9926]: E1116 16:59:42.669806    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 ctl01 kube-proxy[9926]: E1116 16:59:42.670443    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.712746    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.813028    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:42 ctl01 kubelet[9716]: E1116 16:59:42.913221    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.013392    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.113665    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.213938    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.314152    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.414423    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.501599    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.502550    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.503783    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/init.sls'
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.514739    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/master/init.sls'
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.615086    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.633569    9716 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "ctl01" not found
Nov 16 16:59:43 ctl01 kube-proxy[9926]: E1116 16:59:43.670573    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 ctl01 kube-proxy[9926]: E1116 16:59:43.671544    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.715285    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.815472    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 kubelet[9716]: E1116 16:59:43.915783    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:43 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/master/service.sls'
Nov 16 16:59:43 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.016080    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.116361    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:44 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.216633    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:44 ctl01 kubelet[9716]: I1116 16:59:44.229258    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:44 ctl01 kubelet[9716]: I1116 16:59:44.229592    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:44 ctl01 kubelet[9716]: I1116 16:59:44.230545    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:44 ctl01 kubelet[9716]: I1116 16:59:44.230582    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:44 ctl01 kubelet[9716]: I1116 16:59:44.230596    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:44 ctl01 kubelet[9716]: I1116 16:59:44.230618    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.231087    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.316912    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:44 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.417132    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.502924    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.503392    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.504590    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.517382    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.617574    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:44 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:44 ctl01 kube-proxy[9926]: E1116 16:59:44.671294    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 ctl01 kube-proxy[9926]: E1116 16:59:44.672286    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.717854    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:44 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.818134    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:44 ctl01 kubelet[9716]: E1116 16:59:44.918430    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.018712    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.118902    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.219126    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 kube-proxy[9926]: E1116 16:59:45.232641    9926 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:45 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.319435    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.419747    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.503828    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.504860    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.505966    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.520336    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.620563    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 kube-proxy[9926]: E1116 16:59:45.672226    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 ctl01 kube-proxy[9926]: E1116 16:59:45.673112    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.720792    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.821040    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:45 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:45 ctl01 kubelet[9716]: E1116 16:59:45.921349    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.021605    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.121951    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/master/controller.sls'
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.222214    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.322407    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.422675    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.504823    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.505589    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.506835    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.522887    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.623245    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 kube-proxy[9926]: E1116 16:59:46.673313    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 ctl01 kube-proxy[9926]: E1116 16:59:46.674053    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.723546    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.823847    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 kubelet[9716]: E1116 16:59:46.924090    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:46 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.024459    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:47 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.124773    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.224942    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:47 ctl01 salt-minion[4526]: [INFO    ] Executing command 'date "+%FT%TZ"' in directory '/root'
Nov 16 16:59:47 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/master/setup.sls'
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.325106    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:47 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.425406    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.505794    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.506955    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.507895    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.525695    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:47 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.625984    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:47 ctl01 kube-proxy[9926]: E1116 16:59:47.674099    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 ctl01 kube-proxy[9926]: E1116 16:59:47.675007    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.726180    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.826386    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:47 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:47 ctl01 kubelet[9716]: E1116 16:59:47.926601    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.026859    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.127062    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.227314    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.327524    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.428113    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.507150    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.507669    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.508799    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.528672    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.629045    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 kube-proxy[9926]: E1116 16:59:48.675429    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 ctl01 kube-proxy[9926]: E1116 16:59:48.676091    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.729224    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.829369    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 kubelet[9716]: E1116 16:59:48.929551    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:48 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.029908    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:49 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.130326    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.230510    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:49 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.330658    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.430806    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.508052    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.508731    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.510343    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.530959    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.631116    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:49 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:49 ctl01 kube-proxy[9926]: E1116 16:59:49.676279    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 ctl01 kube-proxy[9926]: E1116 16:59:49.677233    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.731270    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:49 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.831549    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:49 ctl01 kubelet[9716]: E1116 16:59:49.931798    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.032082    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.132363    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.232595    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.255894    9716 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.332810    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.434643    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.508946    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.509735    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.510964    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.534928    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/control/init.sls'
Nov 16 16:59:50 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.635108    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 kube-proxy[9926]: E1116 16:59:50.676803    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 ctl01 kube-proxy[9926]: E1116 16:59:50.677775    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.735349    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.835547    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:50 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/control/service.sls'
Nov 16 16:59:50 ctl01 kubelet[9716]: E1116 16:59:50.935718    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.035962    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.136222    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:51 ctl01 kubelet[9716]: I1116 16:59:51.231276    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:51 ctl01 kubelet[9716]: I1116 16:59:51.231701    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:51 ctl01 kubelet[9716]: I1116 16:59:51.232712    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:51 ctl01 kubelet[9716]: I1116 16:59:51.232763    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:51 ctl01 kubelet[9716]: I1116 16:59:51.232786    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:51 ctl01 kubelet[9716]: I1116 16:59:51.232820    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.233282    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.236364    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.336608    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/control/role.sls'
Nov 16 16:59:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.436884    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.509820    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.510494    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.511924    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.537063    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.637277    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 kube-proxy[9926]: E1116 16:59:51.677965    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 ctl01 kube-proxy[9926]: E1116 16:59:51.678458    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.737475    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.837671    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:51 ctl01 kubelet[9716]: E1116 16:59:51.937917    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.038136    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.138275    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.238435    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.338870    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.439238    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 salt-minion[4526]: [INFO    ] Running state [curl] at time 16:59:52.476697
Nov 16 16:59:52 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [curl]
Nov 16 16:59:52 ctl01 salt-minion[4526]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.511047    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.511909    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.512764    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.539847    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.640144    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 kube-proxy[9926]: E1116 16:59:52.678874    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 ctl01 kube-proxy[9926]: E1116 16:59:52.679694    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.740754    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.841345    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:52 ctl01 kubelet[9716]: E1116 16:59:52.941677    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.041892    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.142158    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.242806    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [curl] at time 16:59:53.334336 duration_in_ms=857.641
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Running state [git] at time 16:59:53.334686
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [git]
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.343412    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [git] at time 16:59:53.344180 duration_in_ms=9.494
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Running state [apt-transport-https] at time 16:59:53.344412
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [apt-transport-https]
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [apt-transport-https] at time 16:59:53.353315 duration_in_ms=8.902
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Running state [python-apt] at time 16:59:53.353556
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [python-apt]
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [python-apt] at time 16:59:53.362308 duration_in_ms=8.752
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Running state [socat] at time 16:59:53.362542
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [socat]
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [socat] at time 16:59:53.371696 duration_in_ms=9.154
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Running state [openssl] at time 16:59:53.371941
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [openssl]
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [openssl] at time 16:59:53.380915 duration_in_ms=8.973
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Running state [conntrack] at time 16:59:53.381149
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [conntrack]
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [conntrack] at time 16:59:53.389847 duration_in_ms=8.698
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Running state [nfs-common] at time 16:59:53.390090
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [nfs-common]
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [nfs-common] at time 16:59:53.398909 duration_in_ms=8.819
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Running state [cifs-utils] at time 16:59:53.399137
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [cifs-utils]
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Completed state [cifs-utils] at time 16:59:53.408019 duration_in_ms=8.882
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/bin/hyperkube] at time 16:59:53.411229
Nov 16 16:59:53 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/bin/hyperkube]
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.443697    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.512580    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.513504    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.515133    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.543982    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.633875    9716 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "ctl01" not found
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.644261    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 kube-proxy[9926]: E1116 16:59:53.679606    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 ctl01 kube-proxy[9926]: E1116 16:59:53.680453    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.744848    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.845667    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:53 ctl01 kubelet[9716]: E1116 16:59:53.946269    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.046917    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] File /usr/bin/hyperkube is in the correct state
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/bin/hyperkube] at time 16:59:54.139055 duration_in_ms=727.825
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/bin/kubectl] at time 16:59:54.140233
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing state file.symlink for [/usr/bin/kubectl]
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Symlink /usr/bin/kubectl is present and owned by root:root
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/bin/kubectl] at time 16:59:54.142345 duration_in_ms=2.112
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Running state [containerd] at time 16:59:54.142576
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing state pkg.installed for [containerd]
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.147498    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] All specified packages are already installed
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Completed state [containerd] at time 16:59:54.151801 duration_in_ms=9.225
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/containerd/config.toml] at time 16:59:54.152028
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/containerd/config.toml]
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.247838    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.348041    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.448297    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] File /etc/containerd/config.toml is in the correct state
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/containerd/config.toml] at time 16:59:54.506372 duration_in_ms=354.342
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Running state [containerd] at time 16:59:54.508892
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [containerd]
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'containerd.service', '-n', '0'] in directory '/root'
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.513811    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.514907    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.515861    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.548552    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'containerd.service'] in directory '/root'
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'containerd.service'] in directory '/root'
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] The service containerd is already running
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Completed state [containerd] at time 16:59:54.599045 duration_in_ms=90.151
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Running state [/tmp/crictl] at time 16:59:54.602859
Nov 16 16:59:54 ctl01 salt-minion[4526]: [INFO    ] Executing state archive.extracted for [/tmp/crictl]
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.648852    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 kube-proxy[9926]: E1116 16:59:54.680900    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 ctl01 kube-proxy[9926]: E1116 16:59:54.681528    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.749125    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.849442    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:54 ctl01 kubelet[9716]: E1116 16:59:54.949718    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] All files in archive are already present
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Completed state [/tmp/crictl] at time 16:59:55.047739 duration_in_ms=444.88
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/local/bin/crictl] at time 16:59:55.048858
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/local/bin/crictl]
Nov 16 16:59:55 ctl01 salt-minion[4526]: [WARNING ] Use of argument owner found, "owner" is invalid, please use "user"
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.049874    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.150073    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 kube-proxy[9926]: E1116 16:59:55.246160    9926 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.250253    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] File /usr/local/bin/crictl is in the correct state
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/local/bin/crictl] at time 16:59:55.273366 duration_in_ms=224.507
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/crictl.yaml] at time 16:59:55.273644
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/crictl.yaml]
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] File /etc/crictl.yaml is in the correct state
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/crictl.yaml] at time 16:59:55.275699 duration_in_ms=2.055
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/criproxy] at time 16:59:55.275933
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/criproxy]
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] File /etc/criproxy is not present
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/criproxy] at time 16:59:55.276692 duration_in_ms=0.759
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Running state [criproxy] at time 16:59:55.276914
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing state service.dead for [criproxy]
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'criproxy.service', '-n', '0'] in directory '/root'
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] The named service criproxy is not available
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Completed state [criproxy] at time 16:59:55.300230 duration_in_ms=23.315
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/kubelet.service] at time 16:59:55.300744
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kubelet.service]
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.350496    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.450662    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.514826    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.515615    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.516905    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.550829    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.651032    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] File /etc/systemd/system/kubelet.service is in the correct state
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/kubelet.service] at time 16:59:55.662019 duration_in_ms=361.274
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/config] at time 16:59:55.662399
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/kubernetes/config]
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/config is not present
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/config] at time 16:59:55.663391 duration_in_ms=0.992
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/default/kubelet] at time 16:59:55.663655
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/default/kubelet]
Nov 16 16:59:55 ctl01 kube-proxy[9926]: E1116 16:59:55.681597    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 ctl01 kube-proxy[9926]: E1116 16:59:55.682543    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.751333    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.851495    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:55 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:55 ctl01 kubelet[9716]: E1116 16:59:55.951692    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.051928    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.152231    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.252504    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.352801    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.453046    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.515956    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.516794    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.518012    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.553247    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.653513    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] File /etc/default/kubelet is in the correct state
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/default/kubelet] at time 16:59:56.667990 duration_in_ms=1004.333
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/kubelet.kubeconfig] at time 16:59:56.668471
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/kubelet.kubeconfig]
Nov 16 16:59:56 ctl01 kube-proxy[9926]: E1116 16:59:56.682341    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 ctl01 kube-proxy[9926]: E1116 16:59:56.683185    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.753708    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.853869    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:56 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:56 ctl01 kubelet[9716]: E1116 16:59:56.954166    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.054483    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.154779    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.254967    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/kubelet.kubeconfig is in the correct state
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/kubelet.kubeconfig] at time 16:59:57.291456 duration_in_ms=622.985
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/manifests] at time 16:59:57.291877
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/etc/kubernetes/manifests]
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Directory /etc/kubernetes/manifests is in the correct state
Nov 16 16:59:57 ctl01 salt-minion[4526]: Directory /etc/kubernetes/manifests updated
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/manifests] at time 16:59:57.293619 duration_in_ms=1.742
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Running state [kubelet] at time 16:59:57.297709
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [kubelet]
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'kubelet.service', '-n', '0'] in directory '/root'
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kubelet.service'] in directory '/root'
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command saltutil.find_job with jid 20191116165957322174
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.355238    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 11244
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] The service kubelet is already running
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Completed state [kubelet] at time 16:59:57.375320 duration_in_ms=77.608
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/logrotate.d/kubernetes] at time 16:59:57.376132
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/logrotate.d/kubernetes]
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165957322174
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] File /etc/logrotate.d/kubernetes is in the correct state
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/logrotate.d/kubernetes] at time 16:59:57.403802 duration_in_ms=27.669
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Running state [/opt/cni/bin] at time 16:59:57.404041
Nov 16 16:59:57 ctl01 salt-minion[4526]: [INFO    ] Executing state archive.extracted for [/opt/cni/bin]
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.455553    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.517068    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.518025    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.518675    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.555825    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.655984    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 kube-proxy[9926]: E1116 16:59:57.683267    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 ctl01 kube-proxy[9926]: E1116 16:59:57.684037    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.756227    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.856438    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:57 ctl01 kubelet[9716]: E1116 16:59:57.956633    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.056828    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.157092    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 kubelet[9716]: I1116 16:59:58.233466    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:58 ctl01 kubelet[9716]: I1116 16:59:58.233726    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 16:59:58 ctl01 kubelet[9716]: I1116 16:59:58.234801    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 16:59:58 ctl01 kubelet[9716]: I1116 16:59:58.234886    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 16:59:58 ctl01 kubelet[9716]: I1116 16:59:58.234914    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 16:59:58 ctl01 kubelet[9716]: I1116 16:59:58.234962    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.235516    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.257374    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.357663    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 salt-minion[4526]: [INFO    ] Executing command ['tar', 'xz', '-f', '/var/cache/salt/minion/extrn_files/base/docker-prod-local.artifactory.mirantis.com/artifactory/binary-prod-local/mirantis/kubernetes/containernetworking-plugins/containernetworking-plugins_v0.7.2-173-g8db2808.tar.gz'] in directory '/opt/cni/bin/'
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.457915    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.517740    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.518610    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.519797    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.558122    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.658291    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 kube-proxy[9926]: E1116 16:59:58.683933    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 ctl01 kube-proxy[9926]: E1116 16:59:58.685364    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.758478    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.858636    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:58 ctl01 kubelet[9716]: E1116 16:59:58.958859    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.059029    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing command ['tar', '--version'] in directory '/root'
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] {'extracted_files': 'no tar output so far'}
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/opt/cni/bin] at time 16:59:59.116920 duration_in_ms=1712.875
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons] at time 16:59:59.117430
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/etc/kubernetes/addons]
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Directory /etc/kubernetes/addons is in the correct state
Nov 16 16:59:59 ctl01 salt-minion[4526]: Directory /etc/kubernetes/addons updated
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons] at time 16:59:59.120120 duration_in_ms=2.691
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/calico/calico-kube-controllers.yml] at time 16:59:59.120509
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/calico/calico-kube-controllers.yml]
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.159243    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.259458    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.359682    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.459884    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/calico/calico-kube-controllers.yml is in the correct state
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/calico/calico-kube-controllers.yml] at time 16:59:59.502734 duration_in_ms=382.223
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/calico/calico-rbac.yml] at time 16:59:59.503093
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/calico/calico-rbac.yml]
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.518515    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.519321    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.520459    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/calico/calico-rbac.yml is in the correct state
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/calico/calico-rbac.yml] at time 16:59:59.528024 duration_in_ms=24.93
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-svc.yml] at time 16:59:59.528271
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-svc.yml]
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.560090    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.660333    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 kube-proxy[9926]: E1116 16:59:59.685048    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 ctl01 kube-proxy[9926]: E1116 16:59:59.685982    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.760614    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.860936    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/netchecker/netchecker-svc.yml is in the correct state
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-svc.yml] at time 16:59:59.886020 duration_in_ms=357.747
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-server.yml] at time 16:59:59.886528
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-server.yml]
Nov 16 16:59:59 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:59 ctl01 kubelet[9716]: E1116 16:59:59.961120    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.061395    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.161558    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.256860    9716 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.261857    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.362087    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.462329    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.519480    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.520354    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.521427    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.562499    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/netchecker/netchecker-server.yml is in the correct state
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-server.yml] at time 17:00:00.649720 duration_in_ms=763.19
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-agent.yml] at time 17:00:00.650109
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-agent.yml]
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.662822    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 kube-proxy[9926]: E1116 17:00:00.685898    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 ctl01 kube-proxy[9926]: E1116 17:00:00.686566    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.763074    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.863304    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 kubelet[9716]: E1116 17:00:00.963533    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/netchecker/netchecker-agent.yml is in the correct state
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-agent.yml] at time 17:00:00.994144 duration_in_ms=344.034
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-serviceaccount.yml] at time 17:00:00.994562
Nov 16 17:00:00 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-serviceaccount.yml]
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/netchecker/netchecker-serviceaccount.yml is in the correct state
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-serviceaccount.yml] at time 17:00:01.020763 duration_in_ms=26.2
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/netchecker/netchecker-roles.yml] at time 17:00:01.021068
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/netchecker/netchecker-roles.yml]
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/netchecker/netchecker-roles.yml is in the correct state
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/netchecker/netchecker-roles.yml] at time 17:00:01.049606 duration_in_ms=28.537
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/prometheus/prometheus-roles.yml] at time 17:00:01.049909
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/prometheus/prometheus-roles.yml]
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.063712    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/prometheus/prometheus-roles.yml is in the correct state
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/prometheus/prometheus-roles.yml] at time 17:00:01.073070 duration_in_ms=23.16
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/dns] at time 17:00:01.073414
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/kubernetes/addons/dns]
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] {'removed': '/etc/kubernetes/addons/dns'}
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/dns] at time 17:00:01.074749 duration_in_ms=1.336
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Running state [kubectl -n kube-system delete svc kube-dns > /dev/null || echo "kube-dns is absent. OK" && true] at time 17:00:01.076144
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing state cmd.run for [kubectl -n kube-system delete svc kube-dns > /dev/null || echo "kube-dns is absent. OK" && true]
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing command 'kubectl -n kube-system delete svc kube-dns > /dev/null || echo "kube-dns is absent. OK" && true' in directory '/root'
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.163976    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] {'pid': 11440, 'retcode': 0, 'stderr': 'The connection to the server localhost:8080 was refused - did you specify the right host or port?', 'stdout': 'kube-dns is absent. OK'}
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Completed state [kubectl -n kube-system delete svc kube-dns > /dev/null || echo "kube-dns is absent. OK" && true] at time 17:00:01.260184 duration_in_ms=184.04
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/coredns/coredns-cm.yml] at time 17:00:01.260595
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/coredns/coredns-cm.yml]
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.264256    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/coredns/coredns-cm.yml'
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.364497    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.464691    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.520351    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.521479    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.522607    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.564871    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.665173    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 kube-proxy[9926]: E1116 17:00:01.686612    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 ctl01 kube-proxy[9926]: E1116 17:00:01.687394    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.765501    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.865747    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:01 ctl01 salt-minion[4526]: New file
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/coredns/coredns-cm.yml] at time 17:00:01.907811 duration_in_ms=647.214
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/coredns/coredns-deploy.yml] at time 17:00:01.908232
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/coredns/coredns-deploy.yml]
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/coredns/coredns-deploy.yml'
Nov 16 17:00:01 ctl01 kubelet[9716]: E1116 17:00:01.966041    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:01 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.066279    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.166460    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.266727    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:02 ctl01 salt-minion[4526]: New file
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/coredns/coredns-deploy.yml] at time 17:00:02.312528 duration_in_ms=404.294
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/coredns/coredns-svc.yml] at time 17:00:02.312978
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/coredns/coredns-svc.yml]
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/coredns/coredns-svc.yml'
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.367087    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.467361    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.521258    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.522257    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.523138    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.567723    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.667976    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:02 ctl01 kube-proxy[9926]: E1116 17:00:02.687253    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 ctl01 kube-proxy[9926]: E1116 17:00:02.688109    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:02 ctl01 salt-minion[4526]: New file
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/coredns/coredns-svc.yml] at time 17:00:02.697692 duration_in_ms=384.711
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/coredns/coredns-rbac.yml] at time 17:00:02.698184
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/coredns/coredns-rbac.yml]
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addons/coredns/coredns-rbac.yml'
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.768225    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.868374    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:02 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:02 ctl01 kubelet[9716]: E1116 17:00:02.968577    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.068849    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:03 ctl01 salt-minion[4526]: New file
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/coredns/coredns-rbac.yml] at time 17:00:03.113480 duration_in_ms=415.294
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/metrics-server] at time 17:00:03.114145
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/kubernetes/addons/metrics-server]
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/addons/metrics-server is not present
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/metrics-server] at time 17:00:03.115721 duration_in_ms=1.575
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Running state [/srv/kubernetes/known_tokens.csv] at time 17:00:03.116208
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/srv/kubernetes/known_tokens.csv]
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/known_tokens.csv'
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.169107    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.269364    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.370337    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.470545    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.521899    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.522830    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.523964    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.570797    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.634266    9716 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "ctl01" not found
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.671091    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 kube-proxy[9926]: E1116 17:00:03.688263    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 ctl01 kube-proxy[9926]: E1116 17:00:03.689008    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.771343    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.871560    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:03 ctl01 salt-minion[4526]: New file
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Completed state [/srv/kubernetes/known_tokens.csv] at time 17:00:03.880232 duration_in_ms=764.013
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Running state [/srv/kubernetes/basic_auth.csv] at time 17:00:03.880986
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/srv/kubernetes/basic_auth.csv]
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/basic_auth.csv'
Nov 16 17:00:03 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:03 ctl01 kubelet[9716]: E1116 17:00:03.971802    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.072084    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.172337    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:04 ctl01 salt-minion[4526]: New file
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Completed state [/srv/kubernetes/basic_auth.csv] at time 17:00:04.243465 duration_in_ms=362.478
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/default/kube-apiserver] at time 17:00:04.244115
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/default/kube-apiserver]
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:04 ctl01 salt-minion[4526]: New file
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/default/kube-apiserver] at time 17:00:04.248297 duration_in_ms=4.183
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/scheduler.kubeconfig] at time 17:00:04.248683
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/scheduler.kubeconfig]
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.272619    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-scheduler/scheduler.kubeconfig'
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.372880    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.473231    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.522832    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.523755    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.524610    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.573432    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.673644    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 kube-proxy[9926]: E1116 17:00:04.689318    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 ctl01 kube-proxy[9926]: E1116 17:00:04.689953    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.773960    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.874270    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:04 ctl01 kubelet[9716]: E1116 17:00:04.974481    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/scheduler.kubeconfig] at time 17:00:05.030760 duration_in_ms=782.077
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/controller-manager.kubeconfig] at time 17:00:05.031092
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/controller-manager.kubeconfig]
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-controller-manager/controller-manager.kubeconfig'
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.074818    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.175102    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 kubelet[9716]: I1116 17:00:05.235848    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 17:00:05 ctl01 kubelet[9716]: I1116 17:00:05.236439    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:00:05 ctl01 kubelet[9716]: I1116 17:00:05.237955    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 17:00:05 ctl01 kubelet[9716]: I1116 17:00:05.238032    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 17:00:05 ctl01 kubelet[9716]: I1116 17:00:05.238063    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 17:00:05 ctl01 kubelet[9716]: I1116 17:00:05.238110    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.238903    9716 kubelet_node_status.go:94] Unable to register node "ctl01" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 ctl01 kube-proxy[9926]: E1116 17:00:05.247492    9926 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.275334    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.375522    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.475756    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.524005    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.524752    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.525996    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.575924    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.676189    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 kube-proxy[9926]: E1116 17:00:05.689932    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 ctl01 kube-proxy[9926]: E1116 17:00:05.690874    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/controller-manager.kubeconfig] at time 17:00:05.692423 duration_in_ms=661.33
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/default/kube-controller-manager] at time 17:00:05.692773
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/default/kube-controller-manager]
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/default/kube-controller-manager] at time 17:00:05.695876 duration_in_ms=3.102
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/default/kube-scheduler] at time 17:00:05.696216
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/default/kube-scheduler]
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/default/kube-scheduler] at time 17:00:05.699181 duration_in_ms=2.965
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/kube-apiserver.service] at time 17:00:05.699517
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kube-apiserver.service]
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/systemd/kube-apiserver.service'
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/kube-apiserver.service] at time 17:00:05.738681 duration_in_ms=39.163
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/kube-scheduler.service] at time 17:00:05.738924
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kube-scheduler.service]
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/systemd/kube-scheduler.service'
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/kube-scheduler.service] at time 17:00:05.773978 duration_in_ms=35.054
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/kube-controller-manager.service] at time 17:00:05.774265
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kube-controller-manager.service]
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.776443    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/systemd/kube-controller-manager.service'
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/kube-controller-manager.service] at time 17:00:05.806835 duration_in_ms=32.569
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/ssl/kubernetes-server.crt] at time 17:00:05.807110
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/ssl/kubernetes-server.crt]
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** '_certs/kubernetes/kubernetes-server.crt'
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/ssl/kubernetes-server.crt] at time 17:00:05.832973 duration_in_ms=25.863
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/ssl/kubernetes-server.key] at time 17:00:05.833317
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/ssl/kubernetes-server.key]
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** '_certs/kubernetes/kubernetes-server.key'
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/ssl/kubernetes-server.key] at time 17:00:05.857826 duration_in_ms=24.51
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/ssl/kubernetes-server.pem] at time 17:00:05.858268
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/ssl/kubernetes-server.pem]
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.876795    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** '_certs/kubernetes/kubernetes-server.pem'
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:05 ctl01 salt-minion[4526]: New file
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/ssl/kubernetes-server.pem] at time 17:00:05.884195 duration_in_ms=25.927
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Running state [kube-apiserver] at time 17:00:05.892070
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [kube-apiserver]
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'kube-apiserver.service', '-n', '0'] in directory '/root'
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-apiserver.service'] in directory '/root'
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-apiserver.service'] in directory '/root'
Nov 16 17:00:05 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'kube-apiserver.service'] in directory '/root'
Nov 16 17:00:05 ctl01 kubelet[9716]: E1116 17:00:05.977107    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:05 ctl01 systemd[1]: Started /bin/systemctl start kube-apiserver.service.
Nov 16 17:00:06 ctl01 systemd[1]: Starting Kubernetes API Server...
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.077383    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121267   11919 flags.go:33] FLAG: --address="0.0.0.0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121392   11919 flags.go:33] FLAG: --admission-control="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121433   11919 flags.go:33] FLAG: --admission-control-config-file=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121445   11919 flags.go:33] FLAG: --advertise-address="172.16.10.36"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121455   11919 flags.go:33] FLAG: --allow-privileged="true"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121466   11919 flags.go:33] FLAG: --alsologtostderr="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121477   11919 flags.go:33] FLAG: --anonymous-auth="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121486   11919 flags.go:33] FLAG: --api-audiences="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121497   11919 flags.go:33] FLAG: --apiserver-count="1"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121514   11919 flags.go:33] FLAG: --application-metrics-count-limit="100"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121524   11919 flags.go:33] FLAG: --audit-dynamic-configuration="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121533   11919 flags.go:33] FLAG: --audit-log-batch-buffer-size="10000"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121542   11919 flags.go:33] FLAG: --audit-log-batch-max-size="1"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121550   11919 flags.go:33] FLAG: --audit-log-batch-max-wait="0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121561   11919 flags.go:33] FLAG: --audit-log-batch-throttle-burst="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121578   11919 flags.go:33] FLAG: --audit-log-batch-throttle-enable="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121586   11919 flags.go:33] FLAG: --audit-log-batch-throttle-qps="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121598   11919 flags.go:33] FLAG: --audit-log-format="json"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121607   11919 flags.go:33] FLAG: --audit-log-maxage="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121616   11919 flags.go:33] FLAG: --audit-log-maxbackup="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121624   11919 flags.go:33] FLAG: --audit-log-maxsize="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121632   11919 flags.go:33] FLAG: --audit-log-mode="blocking"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121641   11919 flags.go:33] FLAG: --audit-log-path=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121649   11919 flags.go:33] FLAG: --audit-log-truncate-enabled="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121657   11919 flags.go:33] FLAG: --audit-log-truncate-max-batch-size="10485760"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121668   11919 flags.go:33] FLAG: --audit-log-truncate-max-event-size="102400"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121677   11919 flags.go:33] FLAG: --audit-log-version="audit.k8s.io/v1"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121686   11919 flags.go:33] FLAG: --audit-policy-file=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121694   11919 flags.go:33] FLAG: --audit-webhook-batch-buffer-size="10000"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121702   11919 flags.go:33] FLAG: --audit-webhook-batch-initial-backoff="10s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121711   11919 flags.go:33] FLAG: --audit-webhook-batch-max-size="400"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121720   11919 flags.go:33] FLAG: --audit-webhook-batch-max-wait="30s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121729   11919 flags.go:33] FLAG: --audit-webhook-batch-throttle-burst="15"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121751   11919 flags.go:33] FLAG: --audit-webhook-batch-throttle-enable="true"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121761   11919 flags.go:33] FLAG: --audit-webhook-batch-throttle-qps="10"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121771   11919 flags.go:33] FLAG: --audit-webhook-config-file=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121779   11919 flags.go:33] FLAG: --audit-webhook-initial-backoff="10s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121788   11919 flags.go:33] FLAG: --audit-webhook-mode="batch"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121797   11919 flags.go:33] FLAG: --audit-webhook-truncate-enabled="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121805   11919 flags.go:33] FLAG: --audit-webhook-truncate-max-batch-size="10485760"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121814   11919 flags.go:33] FLAG: --audit-webhook-truncate-max-event-size="102400"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121823   11919 flags.go:33] FLAG: --audit-webhook-version="audit.k8s.io/v1"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121832   11919 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121841   11919 flags.go:33] FLAG: --authentication-token-webhook-config-file=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121849   11919 flags.go:33] FLAG: --authorization-mode="[Node,RBAC]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121869   11919 flags.go:33] FLAG: --authorization-policy-file=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121888   11919 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121897   11919 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121906   11919 flags.go:33] FLAG: --authorization-webhook-config-file=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121914   11919 flags.go:33] FLAG: --azure-container-registry-config=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121921   11919 flags.go:33] FLAG: --basic-auth-file="/srv/kubernetes/basic_auth.csv"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121931   11919 flags.go:33] FLAG: --bind-address="0.0.0.0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121940   11919 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121950   11919 flags.go:33] FLAG: --cert-dir="/var/run/kubernetes"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121959   11919 flags.go:33] FLAG: --client-ca-file="/etc/kubernetes/ssl/ca-kubernetes.crt"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121968   11919 flags.go:33] FLAG: --cloud-config=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121976   11919 flags.go:33] FLAG: --cloud-provider=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121984   11919 flags.go:33] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.121999   11919 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122009   11919 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122018   11919 flags.go:33] FLAG: --contention-profiling="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122027   11919 flags.go:33] FLAG: --cors-allowed-origins="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122037   11919 flags.go:33] FLAG: --default-not-ready-toleration-seconds="300"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122046   11919 flags.go:33] FLAG: --default-unreachable-toleration-seconds="300"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122055   11919 flags.go:33] FLAG: --default-watch-cache-size="100"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122077   11919 flags.go:33] FLAG: --delete-collection-workers="1"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122086   11919 flags.go:33] FLAG: --deserialization-cache-size="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122094   11919 flags.go:33] FLAG: --disable-admission-plugins="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122107   11919 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122117   11919 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122125   11919 flags.go:33] FLAG: --docker-only="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122133   11919 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122180   11919 flags.go:33] FLAG: --docker-tls="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122217   11919 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122228   11919 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122237   11919 flags.go:33] FLAG: --docker-tls-key="key.pem"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122246   11919 flags.go:33] FLAG: --enable-admission-plugins="[NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,DefaultStorageClass]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122272   11919 flags.go:33] FLAG: --enable-aggregator-routing="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122281   11919 flags.go:33] FLAG: --enable-bootstrap-token-auth="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122290   11919 flags.go:33] FLAG: --enable-garbage-collector="true"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122298   11919 flags.go:33] FLAG: --enable-load-reader="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122306   11919 flags.go:33] FLAG: --enable-logs-handler="true"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122314   11919 flags.go:33] FLAG: --enable-swagger-ui="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122323   11919 flags.go:33] FLAG: --encryption-provider-config=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122331   11919 flags.go:33] FLAG: --endpoint-reconciler-type="lease"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122339   11919 flags.go:33] FLAG: --etcd-cafile="/var/lib/etcd/ca.pem"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122348   11919 flags.go:33] FLAG: --etcd-certfile="/var/lib/etcd/etcd-client.crt"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122357   11919 flags.go:33] FLAG: --etcd-compaction-interval="5m0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122366   11919 flags.go:33] FLAG: --etcd-count-metric-poll-period="1m0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122375   11919 flags.go:33] FLAG: --etcd-keyfile="/var/lib/etcd/etcd-client.key"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122384   11919 flags.go:33] FLAG: --etcd-prefix="/registry"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122393   11919 flags.go:33] FLAG: --etcd-servers="[https://172.16.10.36:4001]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122408   11919 flags.go:33] FLAG: --etcd-servers-overrides="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122418   11919 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122427   11919 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122435   11919 flags.go:33] FLAG: --event-ttl="1h0m0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122445   11919 flags.go:33] FLAG: --experimental-encryption-provider-config=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122468   11919 flags.go:33] FLAG: --external-hostname=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122477   11919 flags.go:33] FLAG: --feature-gates=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122491   11919 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122500   11919 flags.go:33] FLAG: --help="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122509   11919 flags.go:33] FLAG: --housekeeping-interval="10s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122517   11919 flags.go:33] FLAG: --http2-max-streams-per-connection="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122526   11919 flags.go:33] FLAG: --insecure-bind-address="0.0.0.0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122535   11919 flags.go:33] FLAG: --insecure-port="8080"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122544   11919 flags.go:33] FLAG: --kubelet-certificate-authority=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122553   11919 flags.go:33] FLAG: --kubelet-client-certificate=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122561   11919 flags.go:33] FLAG: --kubelet-client-key=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122569   11919 flags.go:33] FLAG: --kubelet-https="true"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122577   11919 flags.go:33] FLAG: --kubelet-port="10250"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122589   11919 flags.go:33] FLAG: --kubelet-preferred-address-types="[Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122607   11919 flags.go:33] FLAG: --kubelet-read-only-port="10255"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122616   11919 flags.go:33] FLAG: --kubelet-timeout="5s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122626   11919 flags.go:33] FLAG: --kubernetes-service-node-port="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122634   11919 flags.go:33] FLAG: --log-backtrace-at=":0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122647   11919 flags.go:33] FLAG: --log-cadvisor-usage="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122657   11919 flags.go:33] FLAG: --log-dir=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122665   11919 flags.go:33] FLAG: --log-file=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122673   11919 flags.go:33] FLAG: --log-flush-frequency="5s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122681   11919 flags.go:33] FLAG: --logtostderr="true"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122690   11919 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122700   11919 flags.go:33] FLAG: --master-service-namespace="default"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122708   11919 flags.go:33] FLAG: --max-connection-bytes-per-sec="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122717   11919 flags.go:33] FLAG: --max-mutating-requests-inflight="200"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122726   11919 flags.go:33] FLAG: --max-requests-inflight="400"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122734   11919 flags.go:33] FLAG: --mesos-agent="127.0.0.1:5051"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122743   11919 flags.go:33] FLAG: --mesos-agent-timeout="10s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122755   11919 flags.go:33] FLAG: --min-request-timeout="1800"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122764   11919 flags.go:33] FLAG: --oidc-ca-file=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122785   11919 flags.go:33] FLAG: --oidc-client-id=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122793   11919 flags.go:33] FLAG: --oidc-groups-claim=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122801   11919 flags.go:33] FLAG: --oidc-groups-prefix=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122809   11919 flags.go:33] FLAG: --oidc-issuer-url=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122817   11919 flags.go:33] FLAG: --oidc-required-claim=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122829   11919 flags.go:33] FLAG: --oidc-signing-algs="[RS256]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122841   11919 flags.go:33] FLAG: --oidc-username-claim="sub"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122849   11919 flags.go:33] FLAG: --oidc-username-prefix=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122857   11919 flags.go:33] FLAG: --port="8080"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122866   11919 flags.go:33] FLAG: --profiling="true"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122875   11919 flags.go:33] FLAG: --proxy-client-cert-file="/etc/kubernetes/ssl/kube-aggregator-proxy-client.crt"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122885   11919 flags.go:33] FLAG: --proxy-client-key-file="/etc/kubernetes/ssl/kube-aggregator-proxy-client.key"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122895   11919 flags.go:33] FLAG: --repair-malformed-updates="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122903   11919 flags.go:33] FLAG: --request-timeout="1m0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122912   11919 flags.go:33] FLAG: --requestheader-allowed-names="[system:kube-controller-manager]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122928   11919 flags.go:33] FLAG: --requestheader-client-ca-file="/etc/kubernetes/ssl/ca-kubernetes.crt"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122938   11919 flags.go:33] FLAG: --requestheader-extra-headers-prefix="[X-Remote-Extra-]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122949   11919 flags.go:33] FLAG: --requestheader-group-headers="[X-Remote-Group]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122962   11919 flags.go:33] FLAG: --requestheader-username-headers="[X-Remote-User]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122973   11919 flags.go:33] FLAG: --runtime-config=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122986   11919 flags.go:33] FLAG: --secure-port="443"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.122995   11919 flags.go:33] FLAG: --service-account-api-audiences="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123008   11919 flags.go:33] FLAG: --service-account-issuer=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123017   11919 flags.go:33] FLAG: --service-account-key-file="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123028   11919 flags.go:33] FLAG: --service-account-lookup="true"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123037   11919 flags.go:33] FLAG: --service-account-max-token-expiration="0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123045   11919 flags.go:33] FLAG: --service-account-signing-key-file=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123054   11919 flags.go:33] FLAG: --service-cluster-ip-range="10.254.0.0/16"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123066   11919 flags.go:33] FLAG: --service-node-port-range="30000-32767"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123079   11919 flags.go:33] FLAG: --skip-headers="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123088   11919 flags.go:33] FLAG: --ssh-keyfile=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123097   11919 flags.go:33] FLAG: --ssh-user=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123119   11919 flags.go:33] FLAG: --stderrthreshold="2"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123128   11919 flags.go:33] FLAG: --storage-backend=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123136   11919 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123145   11919 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123154   11919 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123162   11919 flags.go:33] FLAG: --storage-driver-password="root"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123171   11919 flags.go:33] FLAG: --storage-driver-secure="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123179   11919 flags.go:33] FLAG: --storage-driver-table="stats"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123187   11919 flags.go:33] FLAG: --storage-driver-user="root"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123195   11919 flags.go:33] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123215   11919 flags.go:33] FLAG: --storage-versions="admission.k8s.io/v1beta1,admissionregistration.k8s.io/v1beta1,apps/v1,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authorization.k8s.io/v1,autoscaling/v1,batch/v1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,events.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1,v1"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123309   11919 flags.go:33] FLAG: --target-ram-mb="0"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123324   11919 flags.go:33] FLAG: --tls-cert-file="/etc/kubernetes/ssl/kubernetes-server.crt"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123334   11919 flags.go:33] FLAG: --tls-cipher-suites="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123349   11919 flags.go:33] FLAG: --tls-min-version=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123357   11919 flags.go:33] FLAG: --tls-private-key-file="/etc/kubernetes/ssl/kubernetes-server.key"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123367   11919 flags.go:33] FLAG: --tls-sni-cert-key="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123379   11919 flags.go:33] FLAG: --token-auth-file="/srv/kubernetes/known_tokens.csv"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123388   11919 flags.go:33] FLAG: --v="2"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123397   11919 flags.go:33] FLAG: --version="false"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123409   11919 flags.go:33] FLAG: --vmodule=""
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123418   11919 flags.go:33] FLAG: --watch-cache="true"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123427   11919 flags.go:33] FLAG: --watch-cache-sizes="[]"
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123450   11919 server.go:557] external host was not specified, using 172.16.10.36
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123755   11919 server.go:600] Initializing cache sizes based on 0MB limit
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.123897   11919 server.go:146] Version: v1.13.5-3+98374c02d2d8c1
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.177601    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.277815    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.378069    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.478255    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.578383    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.593093   11919 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.593193   11919 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.594657   11919 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.594690   11919 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 16 17:00:06 ctl01 kube-proxy[9926]: I1116 17:00:06.634252    9926 proxier.go:645] Not syncing iptables until Services and Endpoints have been received from master
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.659597   11919 store.go:1414] Monitoring customresourcedefinitions.apiextensions.k8s.io count at <storage-prefix>//apiextensions.k8s.io/customresourcedefinitions
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.678545    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.684481   11919 master.go:228] Using reconciler: lease
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.778714    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.795001   11919 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.845724   11919 store.go:1414] Monitoring events count at <storage-prefix>//events
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.878894    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.898912   11919 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
Nov 16 17:00:06 ctl01 kube-apiserver[11919]: I1116 17:00:06.956409   11919 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
Nov 16 17:00:06 ctl01 kubelet[9716]: E1116 17:00:06.979121    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.012875   11919 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.065448   11919 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.079369    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.114834   11919 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.165613   11919 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.179613    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.214663   11919 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.271250   11919 store.go:1414] Monitoring endpoints count at <storage-prefix>//services/endpoints
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.279845    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.327191   11919 store.go:1414] Monitoring nodes count at <storage-prefix>//minions
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.373543   11919 store.go:1414] Monitoring pods count at <storage-prefix>//pods
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.380000    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.428005   11919 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.479447   11919 store.go:1414] Monitoring services count at <storage-prefix>//services/specs
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.480189    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.580327    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.649535   11919 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//controllers
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.680611    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.738612   11919 master.go:407] Skipping disabled API group "auditregistration.k8s.io".
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.738902   11919 master.go:415] Enabling API group "authentication.k8s.io".
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.739020   11919 master.go:415] Enabling API group "authorization.k8s.io".
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.780838    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.795499   11919 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.843388   11919 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.881144    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.895485   11919 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.895633   11919 master.go:415] Enabling API group "autoscaling".
Nov 16 17:00:07 ctl01 kube-apiserver[11919]: I1116 17:00:07.947644   11919 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
Nov 16 17:00:07 ctl01 kubelet[9716]: E1116 17:00:07.981378    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.004370   11919 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.004607   11919 master.go:415] Enabling API group "batch".
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.058291   11919 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.058318   11919 master.go:415] Enabling API group "certificates.k8s.io".
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.081626    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.112951   11919 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.112974   11919 master.go:415] Enabling API group "coordination.k8s.io".
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.161907   11919 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//controllers
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.181893    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.214489   11919 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.263933   11919 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.282132    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.316075   11919 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingress
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.367662   11919 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.382375    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.422016   11919 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.471624   11919 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.471654   11919 master.go:415] Enabling API group "extensions".
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.482547    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.534072   11919 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.534098   11919 master.go:415] Enabling API group "networking.k8s.io".
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.582755    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.583864   11919 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.638573   11919 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.638829   11919 master.go:415] Enabling API group "policy".
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.682886    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.697351   11919 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.745925   11919 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.783054    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.805558   11919 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.862253   11919 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.883323    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.914417   11919 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
Nov 16 17:00:08 ctl01 kube-apiserver[11919]: I1116 17:00:08.971992   11919 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
Nov 16 17:00:08 ctl01 kubelet[9716]: E1116 17:00:08.983606    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.030281   11919 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.083840    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.084860   11919 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.084938   11919 master.go:415] Enabling API group "rbac.authorization.k8s.io".
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.142036   11919 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.142079   11919 master.go:415] Enabling API group "scheduling.k8s.io".
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.142112   11919 master.go:407] Skipping disabled API group "settings.k8s.io".
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.184074    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.199864   11919 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.249895   11919 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.284264    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.301391   11919 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.356994   11919 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.357057   11919 master.go:415] Enabling API group "storage.k8s.io".
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.384524    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.409127   11919 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.459011   11919 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.484679    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.514766   11919 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.567368   11919 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.584839    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.621835   11919 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.681117   11919 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.685062    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.734058   11919 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.785239    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.796972   11919 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.851984   11919 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.885570    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.922918   11919 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
Nov 16 17:00:09 ctl01 kube-apiserver[11919]: I1116 17:00:09.975897   11919 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
Nov 16 17:00:09 ctl01 kubelet[9716]: E1116 17:00:09.985789    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: I1116 17:00:10.037315   11919 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.085936    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: I1116 17:00:10.088991   11919 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: I1116 17:00:10.089060   11919 master.go:415] Enabling API group "apps".
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: I1116 17:00:10.141298   11919 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.186181    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: I1116 17:00:10.194358   11919 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: I1116 17:00:10.194388   11919 master.go:415] Enabling API group "admissionregistration.k8s.io".
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: I1116 17:00:10.245980   11919 store.go:1414] Monitoring events count at <storage-prefix>//events
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: I1116 17:00:10.246016   11919 master.go:415] Enabling API group "events.k8s.io".
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.286407    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: W1116 17:00:10.371584   11919 genericapiserver.go:338] Skipping API batch/v2alpha1 because it has no resources.
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.386663    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.486868    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: W1116 17:00:10.518068   11919 genericapiserver.go:338] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: W1116 17:00:10.529179   11919 genericapiserver.go:338] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: W1116 17:00:10.564949   11919 genericapiserver.go:338] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.587163    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: W1116 17:00:10.655001   11919 genericapiserver.go:338] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.687430    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: [restful] 2019/11/16 17:00:10 log.go:33: [restful/swagger] listing is available at https://172.16.10.36:443/swaggerapi
Nov 16 17:00:10 ctl01 kube-apiserver[11919]: [restful] 2019/11/16 17:00:10 log.go:33: [restful/swagger] https://172.16.10.36:443/swaggerui/ is mapped to folder /swagger-ui/
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.787616    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.887880    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:10 ctl01 kubelet[9716]: E1116 17:00:10.988092    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.088376    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.188601    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.288922    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.389201    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.489381    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.589575    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.689856    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.790042    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.890293    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:11 ctl01 kubelet[9716]: E1116 17:00:11.990482    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.090660    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.190864    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kubelet[9716]: I1116 17:00:12.239155    9716 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 17:00:12 ctl01 kubelet[9716]: I1116 17:00:12.239527    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:00:12 ctl01 kubelet[9716]: I1116 17:00:12.240508    9716 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node ctl01
Nov 16 17:00:12 ctl01 kubelet[9716]: I1116 17:00:12.240545    9716 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node ctl01
Nov 16 17:00:12 ctl01 kubelet[9716]: I1116 17:00:12.240559    9716 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node ctl01
Nov 16 17:00:12 ctl01 kubelet[9716]: I1116 17:00:12.240582    9716 kubelet_node_status.go:72] Attempting to register node ctl01
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.291195    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.391489    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.491718    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.591899    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.692154    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.792516    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.892762    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:12 ctl01 kube-apiserver[11919]: [restful] 2019/11/16 17:00:12 log.go:33: [restful/swagger] listing is available at https://172.16.10.36:443/swaggerapi
Nov 16 17:00:12 ctl01 kube-apiserver[11919]: [restful] 2019/11/16 17:00:12 log.go:33: [restful/swagger] https://172.16.10.36:443/swaggerui/ is mapped to folder /swagger-ui/
Nov 16 17:00:12 ctl01 kubelet[9716]: E1116 17:00:12.992951    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kube-apiserver[11919]: I1116 17:00:13.035996   11919 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
Nov 16 17:00:13 ctl01 kube-apiserver[11919]: I1116 17:00:13.036033   11919 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 16 17:00:13 ctl01 kube-apiserver[11919]: I1116 17:00:13.088044   11919 store.go:1414] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.093110    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kube-apiserver[11919]: I1116 17:00:13.136217   11919 store.go:1414] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.193346    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.293548    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.393740    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.493939    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.594085    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.634443    9716 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "ctl01" not found
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.694259    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.794502    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.894868    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:13 ctl01 kubelet[9716]: E1116 17:00:13.995453    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.095701    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.195915    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.296147    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.396450    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.496759    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.597004    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.697244    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.797449    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.897706    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:14 ctl01 kubelet[9716]: E1116 17:00:14.997999    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:15 ctl01 kubelet[9716]: E1116 17:00:15.098252    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:15 ctl01 kubelet[9716]: E1116 17:00:15.198435    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:15 ctl01 kubelet[9716]: E1116 17:00:15.298584    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:15 ctl01 kubelet[9716]: E1116 17:00:15.406646    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:15 ctl01 kubelet[9716]: E1116 17:00:15.506993    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:15 ctl01 kubelet[9716]: E1116 17:00:15.607256    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:15 ctl01 kubelet[9716]: E1116 17:00:15.707443    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:15 ctl01 kubelet[9716]: E1116 17:00:15.807754    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:15 ctl01 kubelet[9716]: E1116 17:00:15.908062    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.008291    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.108529    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.208743    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.309015    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.409319    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.509569    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.525455    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dctl01&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.526170    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.527176    9716 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dctl01&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.609731    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kube-proxy[9926]: E1116 17:00:16.690976    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 ctl01 kube-proxy[9926]: E1116 17:00:16.691697    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.709905    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.810084    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:16 ctl01 kubelet[9716]: E1116 17:00:16.910293    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.010518    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.110706    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.210891    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.311085    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.411308    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.511500    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.611740    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.711901    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.812080    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:17 ctl01 kubelet[9716]: E1116 17:00:17.912286    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.010168   11919 deprecated_insecure_serving.go:51] Serving insecurely on [::]:8080
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.011546   11919 secure_serving.go:116] Serving securely on [::]:443
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.011700   11919 crd_finalizer.go:242] Starting CRDFinalizer
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.011734   11919 controller.go:84] Starting OpenAPI AggregationController
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.011789   11919 apiservice_controller.go:90] Starting APIServiceRegistrationController
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.011805   11919 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.011834   11919 available_controller.go:283] Starting AvailableConditionController
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.011844   11919 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.011875   11919 autoregister_controller.go:136] Starting autoregister controller
Nov 16 17:00:18 ctl01 systemd[1]: Started Kubernetes API Server.
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.012268   11919 customresource_discovery_controller.go:203] Starting DiscoveryController
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.012292   11919 naming_controller.go:284] Starting NamingConditionController
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.012308   11919 establishing_controller.go:73] Starting EstablishingController
Nov 16 17:00:18 ctl01 kubelet[9716]: E1116 17:00:18.012441    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.012783   11919 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.012999   11919 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.013168   11919 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-apiserver.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-apiserver.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-apiserver.service'] in directory '/root'
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.083585   11919 log.go:172] http: TLS handshake error from 172.16.10.55:39060: EOF
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'kube-apiserver.service'] in directory '/root'
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.095365   11919 log.go:172] http: TLS handshake error from 172.16.10.55:39062: EOF
Nov 16 17:00:18 ctl01 kube-proxy[9926]: E1116 17:00:18.096840    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
Nov 16 17:00:18 ctl01 systemd[1]: Started /bin/systemctl enable kube-apiserver.service.
Nov 16 17:00:18 ctl01 kubelet[9716]: E1116 17:00:18.124231    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:18 ctl01 systemd[1]: Reloading.
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.130694   11919 log.go:172] http: TLS handshake error from 172.16.10.55:39064: EOF
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.140670   11919 log.go:172] http: TLS handshake error from 172.16.10.55:39066: EOF
Nov 16 17:00:18 ctl01 kube-proxy[9926]: E1116 17:00:18.148735    9926 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ctl01.15d7b3206cb6e936", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ctl01", UID:"ctl01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kube-proxy.", Source:v1.EventSource{Component:"kube-proxy", Host:"ctl01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6c2896a5c2e536, ext:509804381, loc:(*time.Location)(0xaf74780)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6c2896a5c2e536, ext:509804381, loc:(*time.Location)(0xaf74780)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:kube-proxy" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.150579   11919 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.150620   11919 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.150645   11919 cache.go:39] Caches are synced for autoregister controller
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.150887   11919 controller_utils.go:1034] Caches are synced for crd-autoregister controller
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.155403   11919 log.go:172] http: TLS handshake error from 172.16.10.55:39068: EOF
Nov 16 17:00:18 ctl01 kube-proxy[9926]: E1116 17:00:18.158559    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.161432   11919 log.go:172] http: TLS handshake error from 172.16.10.56:55012: EOF
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.163203   11919 cacher.go:605] cacher (*apiregistration.APIService): 1 objects queued in incoming channel.
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.163229   11919 cacher.go:605] cacher (*apiregistration.APIService): 2 objects queued in incoming channel.
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.167944   11919 cacher.go:605] cacher (*apiregistration.APIService): 1 objects queued in incoming channel.
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.171515   11919 log.go:172] http: TLS handshake error from 172.16.10.56:55014: EOF
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.184422   11919 log.go:172] http: TLS handshake error from 172.16.10.56:55010: EOF
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.199057   11919 log.go:172] http: TLS handshake error from 172.16.10.56:55016: EOF
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: W1116 17:00:18.209831   11919 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.16.10.36]
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.210548   11919 log.go:172] http: TLS handshake error from 172.16.10.36:51944: EOF
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.211886   11919 controller.go:608] quota admission added evaluator for: endpoints
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.212372   11919 log.go:172] http: TLS handshake error from 172.16.10.36:51940: EOF
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.218070   11919 log.go:172] http: TLS handshake error from 172.16.10.36:51942: EOF
Nov 16 17:00:18 ctl01 kubelet[9716]: I1116 17:00:18.219606    9716 kubelet_node_status.go:75] Successfully registered node ctl01
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.223276   11919 cacher.go:605] cacher (*apiregistration.APIService): 3 objects queued in incoming channel.
Nov 16 17:00:18 ctl01 kubelet[9716]: E1116 17:00:18.224369    9716 kubelet.go:2266] node "ctl01" not found
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.235202   11919 log.go:172] http: TLS handshake error from 172.16.10.56:55018: EOF
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.237629   11919 log.go:172] http: TLS handshake error from 172.16.10.36:51950: EOF
Nov 16 17:00:18 ctl01 kubelet[9716]: I1116 17:00:18.239075    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:00:18 ctl01 kube-apiserver[11919]: I1116 17:00:18.244974   11919 log.go:172] http: TLS handshake error from 172.16.10.36:51952: EOF
Nov 16 17:00:18 ctl01 kubelet[9716]: I1116 17:00:18.254497    9716 kubelet.go:1908] SyncLoop (ADD, "api"): ""
Nov 16 17:00:18 ctl01 systemd[1]: kubelet.service: Dependency Conflicts=cadvisor.service dropped, merged into kubelet.service
Nov 16 17:00:18 ctl01 systemd[1]: kubelet.service: Dependency ConflictedBy=cadvisor.service dropped, merged into kubelet.service
Nov 16 17:00:18 ctl01 kubelet[9716]: I1116 17:00:18.290523    9716 reconciler.go:154] Reconciler: start to sync state
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-apiserver.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] {'kube-apiserver': True}
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Completed state [kube-apiserver] at time 17:00:18.329156 duration_in_ms=12437.084
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Running state [kube-scheduler] at time 17:00:18.335871
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [kube-scheduler]
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'kube-scheduler.service', '-n', '0'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-scheduler.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-scheduler.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'kube-scheduler.service'] in directory '/root'
Nov 16 17:00:18 ctl01 systemd[1]: Started /bin/systemctl start kube-scheduler.service.
Nov 16 17:00:18 ctl01 systemd[1]: Started Kubernetes Scheduler Plugin.
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-scheduler.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-scheduler.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-scheduler.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'kube-scheduler.service'] in directory '/root'
Nov 16 17:00:18 ctl01 systemd[1]: Started /bin/systemctl enable kube-scheduler.service.
Nov 16 17:00:18 ctl01 systemd[1]: Reloading.
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578140   12023 flags.go:33] FLAG: --address="0.0.0.0"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578407   12023 flags.go:33] FLAG: --algorithm-provider=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578458   12023 flags.go:33] FLAG: --alsologtostderr="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578470   12023 flags.go:33] FLAG: --application-metrics-count-limit="100"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578477   12023 flags.go:33] FLAG: --authentication-kubeconfig=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578482   12023 flags.go:33] FLAG: --authentication-skip-lookup="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578489   12023 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="10s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578497   12023 flags.go:33] FLAG: --authentication-tolerate-lookup-failure="true"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578502   12023 flags.go:33] FLAG: --authorization-always-allow-paths="[/healthz]"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578520   12023 flags.go:33] FLAG: --authorization-kubeconfig=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578526   12023 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578532   12023 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578538   12023 flags.go:33] FLAG: --azure-container-registry-config=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578543   12023 flags.go:33] FLAG: --bind-address="0.0.0.0"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578551   12023 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578557   12023 flags.go:33] FLAG: --cert-dir=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578562   12023 flags.go:33] FLAG: --client-ca-file=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578581   12023 flags.go:33] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578593   12023 flags.go:33] FLAG: --config=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578599   12023 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578606   12023 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578612   12023 flags.go:33] FLAG: --contention-profiling="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578618   12023 flags.go:33] FLAG: --default-not-ready-toleration-seconds="300"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578625   12023 flags.go:33] FLAG: --default-unreachable-toleration-seconds="300"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578630   12023 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578637   12023 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578642   12023 flags.go:33] FLAG: --docker-only="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578648   12023 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578654   12023 flags.go:33] FLAG: --docker-tls="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578660   12023 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578666   12023 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578671   12023 flags.go:33] FLAG: --docker-tls-key="key.pem"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578677   12023 flags.go:33] FLAG: --enable-load-reader="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578683   12023 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578697   12023 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578703   12023 flags.go:33] FLAG: --failure-domains="kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578713   12023 flags.go:33] FLAG: --feature-gates=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578722   12023 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578728   12023 flags.go:33] FLAG: --hard-pod-affinity-symmetric-weight="1"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578736   12023 flags.go:33] FLAG: --help="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578742   12023 flags.go:33] FLAG: --housekeeping-interval="10s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578748   12023 flags.go:33] FLAG: --http2-max-streams-per-connection="0"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578756   12023 flags.go:33] FLAG: --kube-api-burst="100"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578762   12023 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578769   12023 flags.go:33] FLAG: --kube-api-qps="50"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578779   12023 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/scheduler.kubeconfig"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578786   12023 flags.go:33] FLAG: --leader-elect="true"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578792   12023 flags.go:33] FLAG: --leader-elect-lease-duration="15s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578798   12023 flags.go:33] FLAG: --leader-elect-renew-deadline="10s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578804   12023 flags.go:33] FLAG: --leader-elect-resource-lock="endpoints"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578810   12023 flags.go:33] FLAG: --leader-elect-retry-period="2s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578816   12023 flags.go:33] FLAG: --lock-object-name="kube-scheduler"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578822   12023 flags.go:33] FLAG: --lock-object-namespace="kube-system"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578828   12023 flags.go:33] FLAG: --log-backtrace-at=":0"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578835   12023 flags.go:33] FLAG: --log-cadvisor-usage="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578842   12023 flags.go:33] FLAG: --log-dir=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578851   12023 flags.go:33] FLAG: --log-file=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578862   12023 flags.go:33] FLAG: --log-flush-frequency="5s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578873   12023 flags.go:33] FLAG: --logtostderr="true"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578883   12023 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578896   12023 flags.go:33] FLAG: --master=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578906   12023 flags.go:33] FLAG: --mesos-agent="127.0.0.1:5051"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578913   12023 flags.go:33] FLAG: --mesos-agent-timeout="10s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578920   12023 flags.go:33] FLAG: --policy-config-file=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578925   12023 flags.go:33] FLAG: --policy-configmap=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578931   12023 flags.go:33] FLAG: --policy-configmap-namespace="kube-system"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578946   12023 flags.go:33] FLAG: --port="10251"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578953   12023 flags.go:33] FLAG: --profiling="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578959   12023 flags.go:33] FLAG: --requestheader-allowed-names="[]"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578971   12023 flags.go:33] FLAG: --requestheader-client-ca-file=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578977   12023 flags.go:33] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.578991   12023 flags.go:33] FLAG: --requestheader-group-headers="[x-remote-group]"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579002   12023 flags.go:33] FLAG: --requestheader-username-headers="[x-remote-user]"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579021   12023 flags.go:33] FLAG: --scheduler-name="default-scheduler"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579032   12023 flags.go:33] FLAG: --secure-port="10259"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579039   12023 flags.go:33] FLAG: --skip-headers="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579045   12023 flags.go:33] FLAG: --stderrthreshold="2"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579051   12023 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579058   12023 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579064   12023 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579070   12023 flags.go:33] FLAG: --storage-driver-password="root"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579076   12023 flags.go:33] FLAG: --storage-driver-secure="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579082   12023 flags.go:33] FLAG: --storage-driver-table="stats"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579088   12023 flags.go:33] FLAG: --storage-driver-user="root"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579093   12023 flags.go:33] FLAG: --tls-cert-file=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579099   12023 flags.go:33] FLAG: --tls-cipher-suites="[]"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579112   12023 flags.go:33] FLAG: --tls-min-version=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579118   12023 flags.go:33] FLAG: --tls-private-key-file=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579123   12023 flags.go:33] FLAG: --tls-sni-cert-key="[]"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579133   12023 flags.go:33] FLAG: --use-legacy-policy-config="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579139   12023 flags.go:33] FLAG: --v="2"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579145   12023 flags.go:33] FLAG: --version="false"
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579154   12023 flags.go:33] FLAG: --vmodule=""
Nov 16 17:00:18 ctl01 kube-scheduler[12023]: I1116 17:00:18.579160   12023 flags.go:33] FLAG: --write-config-to=""
Nov 16 17:00:18 ctl01 systemd[1]: kubelet.service: Dependency Conflicts=cadvisor.service dropped, merged into kubelet.service
Nov 16 17:00:18 ctl01 systemd[1]: kubelet.service: Dependency ConflictedBy=cadvisor.service dropped, merged into kubelet.service
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-scheduler.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] {'kube-scheduler': True}
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Completed state [kube-scheduler] at time 17:00:18.717323 duration_in_ms=381.451
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Running state [kube-controller-manager] at time 17:00:18.722018
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [kube-controller-manager]
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'kube-controller-manager.service', '-n', '0'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-controller-manager.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-controller-manager.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'kube-controller-manager.service'] in directory '/root'
Nov 16 17:00:18 ctl01 systemd[1]: Started /bin/systemctl start kube-controller-manager.service.
Nov 16 17:00:18 ctl01 systemd[1]: Started Kubernetes Controller Manager.
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-controller-manager.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-controller-manager.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-controller-manager.service'] in directory '/root'
Nov 16 17:00:18 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'kube-controller-manager.service'] in directory '/root'
Nov 16 17:00:18 ctl01 systemd[1]: Started /bin/systemctl enable kube-controller-manager.service.
Nov 16 17:00:18 ctl01 systemd[1]: Reloading.
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.947436   12110 flags.go:33] FLAG: --address="0.0.0.0"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948295   12110 flags.go:33] FLAG: --allocate-node-cidrs="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948327   12110 flags.go:33] FLAG: --allow-untagged-cloud="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948336   12110 flags.go:33] FLAG: --alsologtostderr="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948344   12110 flags.go:33] FLAG: --application-metrics-count-limit="100"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948350   12110 flags.go:33] FLAG: --attach-detach-reconcile-sync-period="1m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948358   12110 flags.go:33] FLAG: --authentication-kubeconfig=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948365   12110 flags.go:33] FLAG: --authentication-skip-lookup="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948370   12110 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="10s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948376   12110 flags.go:33] FLAG: --authentication-tolerate-lookup-failure="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948382   12110 flags.go:33] FLAG: --authorization-always-allow-paths="[/healthz]"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948396   12110 flags.go:33] FLAG: --authorization-kubeconfig=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948402   12110 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948408   12110 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948413   12110 flags.go:33] FLAG: --azure-container-registry-config=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948419   12110 flags.go:33] FLAG: --bind-address="0.0.0.0"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948425   12110 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948431   12110 flags.go:33] FLAG: --cert-dir=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948437   12110 flags.go:33] FLAG: --cidr-allocator-type="RangeAllocator"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948442   12110 flags.go:33] FLAG: --client-ca-file=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948448   12110 flags.go:33] FLAG: --cloud-config=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948453   12110 flags.go:33] FLAG: --cloud-provider=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948458   12110 flags.go:33] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948469   12110 flags.go:33] FLAG: --cluster-cidr=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948478   12110 flags.go:33] FLAG: --cluster-name="kubernetes"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948484   12110 flags.go:33] FLAG: --cluster-signing-cert-file="/etc/kubernetes/ca/ca.pem"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948490   12110 flags.go:33] FLAG: --cluster-signing-key-file="/etc/kubernetes/ca/ca.key"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948496   12110 flags.go:33] FLAG: --concurrent-deployment-syncs="5"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948503   12110 flags.go:33] FLAG: --concurrent-endpoint-syncs="5"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948509   12110 flags.go:33] FLAG: --concurrent-gc-syncs="20"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948514   12110 flags.go:33] FLAG: --concurrent-namespace-syncs="10"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948520   12110 flags.go:33] FLAG: --concurrent-replicaset-syncs="5"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948525   12110 flags.go:33] FLAG: --concurrent-resource-quota-syncs="5"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948531   12110 flags.go:33] FLAG: --concurrent-service-syncs="1"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948536   12110 flags.go:33] FLAG: --concurrent-serviceaccount-token-syncs="5"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948541   12110 flags.go:33] FLAG: --concurrent-ttl-after-finished-syncs="5"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948546   12110 flags.go:33] FLAG: --concurrent_rc_syncs="5"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948552   12110 flags.go:33] FLAG: --configure-cloud-routes="true"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948557   12110 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948563   12110 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948569   12110 flags.go:33] FLAG: --contention-profiling="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948574   12110 flags.go:33] FLAG: --controller-start-interval="0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948580   12110 flags.go:33] FLAG: --controllers="[*]"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948593   12110 flags.go:33] FLAG: --default-not-ready-toleration-seconds="300"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948599   12110 flags.go:33] FLAG: --default-unreachable-toleration-seconds="300"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948605   12110 flags.go:33] FLAG: --deleting-pods-burst="0"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948610   12110 flags.go:33] FLAG: --deleting-pods-qps="0.1"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948619   12110 flags.go:33] FLAG: --deployment-controller-sync-period="30s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948625   12110 flags.go:33] FLAG: --disable-attach-detach-reconcile-sync="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948630   12110 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948636   12110 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948641   12110 flags.go:33] FLAG: --docker-only="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948647   12110 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948652   12110 flags.go:33] FLAG: --docker-tls="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948657   12110 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948663   12110 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948668   12110 flags.go:33] FLAG: --docker-tls-key="key.pem"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948673   12110 flags.go:33] FLAG: --enable-dynamic-provisioning="true"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948679   12110 flags.go:33] FLAG: --enable-garbage-collector="true"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948684   12110 flags.go:33] FLAG: --enable-hostpath-provisioner="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948699   12110 flags.go:33] FLAG: --enable-load-reader="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948704   12110 flags.go:33] FLAG: --enable-taint-manager="true"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948710   12110 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948716   12110 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948721   12110 flags.go:33] FLAG: --experimental-cluster-signing-duration="8760h0m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948727   12110 flags.go:33] FLAG: --external-cloud-volume-plugin=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948733   12110 flags.go:33] FLAG: --feature-gates=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948741   12110 flags.go:33] FLAG: --flex-volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948748   12110 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948754   12110 flags.go:33] FLAG: --help="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948759   12110 flags.go:33] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948765   12110 flags.go:33] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948771   12110 flags.go:33] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948776   12110 flags.go:33] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948782   12110 flags.go:33] FLAG: --horizontal-pod-autoscaler-sync-period="15s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948787   12110 flags.go:33] FLAG: --horizontal-pod-autoscaler-tolerance="0.1"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948795   12110 flags.go:33] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948801   12110 flags.go:33] FLAG: --horizontal-pod-autoscaler-use-rest-clients="true"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948806   12110 flags.go:33] FLAG: --housekeeping-interval="10s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948812   12110 flags.go:33] FLAG: --http2-max-streams-per-connection="0"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948819   12110 flags.go:33] FLAG: --kube-api-burst="30"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948824   12110 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948830   12110 flags.go:33] FLAG: --kube-api-qps="20"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948837   12110 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/controller-manager.kubeconfig"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948843   12110 flags.go:33] FLAG: --large-cluster-size-threshold="50"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948849   12110 flags.go:33] FLAG: --leader-elect="true"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948854   12110 flags.go:33] FLAG: --leader-elect-lease-duration="15s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948860   12110 flags.go:33] FLAG: --leader-elect-renew-deadline="10s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948865   12110 flags.go:33] FLAG: --leader-elect-resource-lock="endpoints"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948871   12110 flags.go:33] FLAG: --leader-elect-retry-period="2s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948876   12110 flags.go:33] FLAG: --log-backtrace-at=":0"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948885   12110 flags.go:33] FLAG: --log-cadvisor-usage="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948901   12110 flags.go:33] FLAG: --log-dir=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948907   12110 flags.go:33] FLAG: --log-file=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948912   12110 flags.go:33] FLAG: --log-flush-frequency="5s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948918   12110 flags.go:33] FLAG: --logtostderr="true"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948924   12110 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948930   12110 flags.go:33] FLAG: --master=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948936   12110 flags.go:33] FLAG: --mesos-agent="127.0.0.1:5051"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948942   12110 flags.go:33] FLAG: --mesos-agent-timeout="10s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948947   12110 flags.go:33] FLAG: --min-resync-period="12h0m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948953   12110 flags.go:33] FLAG: --namespace-sync-period="5m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948959   12110 flags.go:33] FLAG: --node-cidr-mask-size="24"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948964   12110 flags.go:33] FLAG: --node-eviction-rate="0.1"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948971   12110 flags.go:33] FLAG: --node-monitor-grace-period="40s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948977   12110 flags.go:33] FLAG: --node-monitor-period="5s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948982   12110 flags.go:33] FLAG: --node-startup-grace-period="1m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948988   12110 flags.go:33] FLAG: --node-sync-period="0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948994   12110 flags.go:33] FLAG: --pod-eviction-timeout="5m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.948999   12110 flags.go:33] FLAG: --port="10252"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949005   12110 flags.go:33] FLAG: --profiling="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949011   12110 flags.go:33] FLAG: --pv-recycler-increment-timeout-nfs="30"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949017   12110 flags.go:33] FLAG: --pv-recycler-minimum-timeout-hostpath="60"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949022   12110 flags.go:33] FLAG: --pv-recycler-minimum-timeout-nfs="300"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949028   12110 flags.go:33] FLAG: --pv-recycler-pod-template-filepath-hostpath=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949034   12110 flags.go:33] FLAG: --pv-recycler-pod-template-filepath-nfs=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949039   12110 flags.go:33] FLAG: --pv-recycler-timeout-increment-hostpath="30"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949045   12110 flags.go:33] FLAG: --pvclaimbinder-sync-period="15s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949050   12110 flags.go:33] FLAG: --register-retry-count="10"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949056   12110 flags.go:33] FLAG: --requestheader-allowed-names="[]"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949066   12110 flags.go:33] FLAG: --requestheader-client-ca-file=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949072   12110 flags.go:33] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949085   12110 flags.go:33] FLAG: --requestheader-group-headers="[x-remote-group]"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949095   12110 flags.go:33] FLAG: --requestheader-username-headers="[x-remote-user]"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949133   12110 flags.go:33] FLAG: --resource-quota-sync-period="5m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949139   12110 flags.go:33] FLAG: --root-ca-file="/etc/kubernetes/ssl/ca-kubernetes.crt"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949146   12110 flags.go:33] FLAG: --route-reconciliation-period="10s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949152   12110 flags.go:33] FLAG: --secondary-node-eviction-rate="0.01"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949159   12110 flags.go:33] FLAG: --secure-port="10257"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949165   12110 flags.go:33] FLAG: --service-account-private-key-file="/etc/kubernetes/ssl/kubernetes-server.key"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949171   12110 flags.go:33] FLAG: --service-cluster-ip-range=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949176   12110 flags.go:33] FLAG: --skip-headers="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949182   12110 flags.go:33] FLAG: --stderrthreshold="2"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949187   12110 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949193   12110 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949199   12110 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949205   12110 flags.go:33] FLAG: --storage-driver-password="root"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949210   12110 flags.go:33] FLAG: --storage-driver-secure="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949215   12110 flags.go:33] FLAG: --storage-driver-table="stats"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949221   12110 flags.go:33] FLAG: --storage-driver-user="root"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949226   12110 flags.go:33] FLAG: --terminated-pod-gc-threshold="12500"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949232   12110 flags.go:33] FLAG: --tls-cert-file=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949237   12110 flags.go:33] FLAG: --tls-cipher-suites="[]"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949246   12110 flags.go:33] FLAG: --tls-min-version=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949252   12110 flags.go:33] FLAG: --tls-private-key-file=""
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949257   12110 flags.go:33] FLAG: --tls-sni-cert-key="[]"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949265   12110 flags.go:33] FLAG: --unhealthy-zone-threshold="0.55"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949272   12110 flags.go:33] FLAG: --use-service-account-credentials="true"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949277   12110 flags.go:33] FLAG: --v="2"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949283   12110 flags.go:33] FLAG: --version="false"
Nov 16 17:00:18 ctl01 kube-controller-manager[12110]: I1116 17:00:18.949291   12110 flags.go:33] FLAG: --vmodule=""
Nov 16 17:00:19 ctl01 systemd[1]: kubelet.service: Dependency Conflicts=cadvisor.service dropped, merged into kubelet.service
Nov 16 17:00:19 ctl01 systemd[1]: kubelet.service: Dependency ConflictedBy=cadvisor.service dropped, merged into kubelet.service
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.028429   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.031405   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.034494   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.036602   11919 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.037991   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.041317   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.041620   11919 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.041636   11919 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.044370   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.047391   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.050370   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.053599   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.056373   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.060111   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.063452   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.066480   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.071727   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-controller-manager.service'] in directory '/root'
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.075227   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.084349   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.087381   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.090319   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: I1116 17:00:19.091089   12023 serving.go:318] Generated self-signed cert in-memory
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] {'kube-controller-manager': True}
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [kube-controller-manager] at time 17:00:19.093905 duration_in_ms=371.886
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Running state [kubectl create ns "netchecker"] at time 17:00:19.094336
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Executing state cmd.run for [kubectl create ns "netchecker"]
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Executing command 'kubectl get ns -o=custom-columns=NAME:.metadata.name | grep -v NAME | grep "netchecker"' in directory '/root'
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.095778   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
Nov 16 17:00:19 ctl01 kube-proxy[9926]: E1116 17:00:19.098278    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.098583   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.101868   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.105033   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.108734   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.111824   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.114806   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.119319   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.122741   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.125230   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.128061   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.130920   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.133962   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.139651   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.142507   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.145718   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.148436   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.152626   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.155464   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.158613   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.161272   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
Nov 16 17:00:19 ctl01 kube-proxy[9926]: E1116 17:00:19.161547    9926 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.164288   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.168407   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.171368   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.174336   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.177140   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.180661   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.183991   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.186935   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.189881   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.193064   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.195775   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.214275   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.257155   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.293914   11919 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.334410   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
Nov 16 17:00:19 ctl01 kube-controller-manager[12110]: I1116 17:00:19.335619   12110 serving.go:318] Generated self-signed cert in-memory
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Executing command 'kubectl create ns "netchecker"' in directory '/root'
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.375950   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.414275   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.453806   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.494262   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] {'pid': 12205, 'retcode': 0, 'stderr': '', 'stdout': 'namespace/netchecker created'}
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [kubectl create ns "netchecker"] at time 17:00:19.515972 duration_in_ms=421.635
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/bin/calicoctl] at time 17:00:19.516494
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/bin/calicoctl]
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.534333   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.574296   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.613845   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.654549   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.694336   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] File /usr/bin/calicoctl is in the correct state
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/bin/calicoctl] at time 17:00:19.718408 duration_in_ms=201.912
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/bin/birdcl] at time 17:00:19.718829
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/usr/bin/birdcl]
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] File /usr/bin/birdcl is in the correct state
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/bin/birdcl] at time 17:00:19.723452 duration_in_ms=4.624
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Running state [/opt/cni/bin/calico] at time 17:00:19.723810
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/opt/cni/bin/calico]
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.734294   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.773707   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.814041   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.854168   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] File /opt/cni/bin/calico is in the correct state
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Completed state [/opt/cni/bin/calico] at time 17:00:19.865408 duration_in_ms=141.597
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Running state [/opt/cni/bin/calico-ipam] at time 17:00:19.865717
Nov 16 17:00:19 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/opt/cni/bin/calico-ipam]
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: W1116 17:00:19.878318   12023 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: W1116 17:00:19.878439   12023 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: W1116 17:00:19.878465   12023 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: I1116 17:00:19.885725   12023 server.go:150] Version: v1.13.5-3+98374c02d2d8c1
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: I1116 17:00:19.885769   12023 defaults.go:210] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: I1116 17:00:19.886007   12023 factory.go:1157] Creating scheduler from algorithm provider 'DefaultProvider'
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: I1116 17:00:19.886063   12023 factory.go:1256] Creating scheduler with fit predicates 'map[NoVolumeZoneConflict:{} MaxAzureDiskVolumeCount:{} MatchInterPodAffinity:{} CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} CheckVolumeBinding:{} MaxGCEPDVolumeCount:{} MaxCSIVolumeCountPred:{} GeneralPredicates:{} MaxEBSVolumeCount:{} NoDiskConflict:{}]' and priority functions 'map[ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{}]'
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: W1116 17:00:19.886854   12023 authorization.go:47] Authorization is disabled
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: W1116 17:00:19.886880   12023 authentication.go:55] Authentication is disabled
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: I1116 17:00:19.886894   12023 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
Nov 16 17:00:19 ctl01 kube-scheduler[12023]: I1116 17:00:19.887708   12023 secure_serving.go:116] Serving securely on [::]:10259
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.922596   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.946361   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
Nov 16 17:00:19 ctl01 kube-apiserver[11919]: I1116 17:00:19.975353   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] File /opt/cni/bin/calico-ipam is in the correct state
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Completed state [/opt/cni/bin/calico-ipam] at time 17:00:20.001655 duration_in_ms=135.938
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/cni/net.d/10-calico.conf] at time 17:00:20.001954
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/cni/net.d/10-calico.conf]
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.013982   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.054652   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.096567   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
Nov 16 17:00:20 ctl01 kube-proxy[9926]: I1116 17:00:20.134340    9926 controller_utils.go:1034] Caches are synced for service config controller
Nov 16 17:00:20 ctl01 kube-proxy[9926]: I1116 17:00:20.134478    9926 proxier.go:645] Not syncing iptables until Services and Endpoints have been received from master
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.134976   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.173821   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
Nov 16 17:00:20 ctl01 kube-controller-manager[12110]: W1116 17:00:20.182558   12110 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
Nov 16 17:00:20 ctl01 kube-controller-manager[12110]: W1116 17:00:20.182671   12110 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
Nov 16 17:00:20 ctl01 kube-controller-manager[12110]: W1116 17:00:20.182695   12110 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
Nov 16 17:00:20 ctl01 kube-controller-manager[12110]: I1116 17:00:20.182724   12110 controllermanager.go:151] Version: v1.13.5-3+98374c02d2d8c1
Nov 16 17:00:20 ctl01 kube-controller-manager[12110]: I1116 17:00:20.183139   12110 secure_serving.go:116] Serving securely on [::]:10257
Nov 16 17:00:20 ctl01 kube-controller-manager[12110]: I1116 17:00:20.183524   12110 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
Nov 16 17:00:20 ctl01 kube-controller-manager[12110]: I1116 17:00:20.183671   12110 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.214709   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:20 ctl01 kube-controller-manager[12110]: I1116 17:00:20.232632   12110 leaderelection.go:214] successfully acquired lease kube-system/kube-controller-manager
Nov 16 17:00:20 ctl01 kube-proxy[9926]: I1116 17:00:20.233767    9926 controller_utils.go:1034] Caches are synced for endpoints config controller
Nov 16 17:00:20 ctl01 kube-proxy[9926]: I1116 17:00:20.233869    9926 service.go:309] Adding new service port "default/kubernetes:https" at 10.254.0.1:443/TCP
Nov 16 17:00:20 ctl01 kube-controller-manager[12110]: I1116 17:00:20.234129   12110 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"92611fa1-0892-11ea-a35a-5254009caaa4", APIVersion:"v1", ResourceVersion:"198", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ctl01_9259f004-0892-11ea-8d77-5254009caaa4 became leader
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.255025   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.294470   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.335279   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.374389   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.415700   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.455548   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.496232   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.534585   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.573956   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.615811   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.654883   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] File /etc/cni/net.d/10-calico.conf is in the correct state
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/cni/net.d/10-calico.conf] at time 17:00:20.670445 duration_in_ms=668.488
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/calico/network-environment] at time 17:00:20.673606
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/calico/network-environment]
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.695418   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.734445   11919 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.772590   11919 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.774136   11919 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
Nov 16 17:00:20 ctl01 kube-scheduler[12023]: I1116 17:00:20.793518   12023 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.815255   11919 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.854777   11919 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
Nov 16 17:00:20 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:20 ctl01 kube-scheduler[12023]: I1116 17:00:20.893778   12023 controller_utils.go:1034] Caches are synced for scheduler controller
Nov 16 17:00:20 ctl01 kube-scheduler[12023]: I1116 17:00:20.893843   12023 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.895193   11919 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
Nov 16 17:00:20 ctl01 kube-scheduler[12023]: I1116 17:00:20.902111   12023 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.934542   11919 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
Nov 16 17:00:20 ctl01 kube-apiserver[11919]: I1116 17:00:20.974127   11919 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
Nov 16 17:00:21 ctl01 kube-apiserver[11919]: I1116 17:00:21.018525   11919 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
Nov 16 17:00:21 ctl01 kube-apiserver[11919]: I1116 17:00:21.052879   11919 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] File /etc/calico/network-environment is in the correct state
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/calico/network-environment] at time 17:00:21.055087 duration_in_ms=381.48
Nov 16 17:00:21 ctl01 kube-apiserver[11919]: I1116 17:00:21.055382   11919 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/calico/calicoctl.cfg] at time 17:00:21.055561
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/calico/calicoctl.cfg]
Nov 16 17:00:21 ctl01 kube-apiserver[11919]: I1116 17:00:21.095899   11919 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:21 ctl01 kube-apiserver[11919]: I1116 17:00:21.137479   11919 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
Nov 16 17:00:21 ctl01 kube-apiserver[11919]: I1116 17:00:21.174568   11919 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
Nov 16 17:00:21 ctl01 kube-apiserver[11919]: I1116 17:00:21.215043   11919 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
Nov 16 17:00:21 ctl01 kube-apiserver[11919]: I1116 17:00:21.254944   11919 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] File /etc/calico/calicoctl.cfg is in the correct state
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/calico/calicoctl.cfg] at time 17:00:21.425814 duration_in_ms=370.251
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/calico-node.service] at time 17:00:21.427460
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/calico-node.service]
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] File /etc/systemd/system/calico-node.service is in the correct state
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/calico-node.service] at time 17:00:21.462147 duration_in_ms=34.687
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Running state [/var/lib/calico] at time 17:00:21.462441
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/var/lib/calico]
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Directory /var/lib/calico is in the correct state
Nov 16 17:00:21 ctl01 salt-minion[4526]: Directory /var/lib/calico updated
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Completed state [/var/lib/calico] at time 17:00:21.463720 duration_in_ms=1.279
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Running state [/var/log/calico] at time 17:00:21.463955
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/var/log/calico]
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Directory /var/log/calico is in the correct state
Nov 16 17:00:21 ctl01 salt-minion[4526]: Directory /var/log/calico updated
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Completed state [/var/log/calico] at time 17:00:21.465169 duration_in_ms=1.214
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Running state [calico-node] at time 17:00:21.468902
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [calico-node]
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'calico-node.service', '-n', '0'] in directory '/root'
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'calico-node.service'] in directory '/root'
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] The service calico-node is already running
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Completed state [calico-node] at time 17:00:21.535675 duration_in_ms=66.772
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/proxy.kubeconfig] at time 17:00:21.536120
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/proxy.kubeconfig]
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:21 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.267257   12110 plugins.go:103] No cloud provider specified.
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] File /etc/kubernetes/proxy.kubeconfig is in the correct state
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/proxy.kubeconfig] at time 17:00:22.267938 duration_in_ms=731.817
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/kube-proxy.service] at time 17:00:22.268293
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kube-proxy.service]
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.269721   12110 controllermanager.go:501] Starting "replicaset"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.270294   12110 controller_utils.go:1027] Waiting for caches to sync for tokens controller
Nov 16 17:00:22 ctl01 kube-apiserver[11919]: I1116 17:00:22.279620   11919 controller.go:608] quota admission added evaluator for: serviceaccounts
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] File /etc/systemd/system/kube-proxy.service is in the correct state
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/kube-proxy.service] at time 17:00:22.288616 duration_in_ms=20.323
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/default/kube-proxy] at time 17:00:22.288883
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/default/kube-proxy]
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] File /etc/default/kube-proxy is in the correct state
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/default/kube-proxy] at time 17:00:22.290791 duration_in_ms=1.909
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Running state [kube-proxy] at time 17:00:22.293446
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing state service.running for [kube-proxy]
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'kube-proxy.service', '-n', '0'] in directory '/root'
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-proxy.service'] in directory '/root'
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] The service kube-proxy is already running
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Completed state [kube-proxy] at time 17:00:22.346790 duration_in_ms=53.345
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Running state [/srv/kubernetes] at time 17:00:22.348709
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing state file.directory for [/srv/kubernetes]
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Directory /srv/kubernetes is in the correct state
Nov 16 17:00:22 ctl01 salt-minion[4526]: Directory /srv/kubernetes updated
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Completed state [/srv/kubernetes] at time 17:00:22.350141 duration_in_ms=1.432
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Running state [/srv/kubernetes/roles/cluster-admin/root-cluster-admin-binding-rolebinding.yml] at time 17:00:22.351389
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/srv/kubernetes/roles/cluster-admin/root-cluster-admin-binding-rolebinding.yml]
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.370900   12110 controller_utils.go:1034] Caches are synced for tokens controller
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/rolebinding.yml'
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:22 ctl01 salt-minion[4526]: New file
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Completed state [/srv/kubernetes/roles/cluster-admin/root-cluster-admin-binding-rolebinding.yml] at time 17:00:22.390946 duration_in_ms=39.557
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Running state [kubectl apply -f /srv/kubernetes/roles/cluster-admin/root-cluster-admin-binding-rolebinding.yml] at time 17:00:22.392241
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing state cmd.run for [kubectl apply -f /srv/kubernetes/roles/cluster-admin/root-cluster-admin-binding-rolebinding.yml]
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Executing command 'kubectl apply -f /srv/kubernetes/roles/cluster-admin/root-cluster-admin-binding-rolebinding.yml' in directory '/root'
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.412510   12110 controllermanager.go:516] Started "replicaset"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.412535   12110 controllermanager.go:501] Starting "route"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.412544   12110 core.go:151] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: W1116 17:00:22.412553   12110 controllermanager.go:508] Skipping "route"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.412559   12110 controllermanager.go:501] Starting "attachdetach"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.412630   12110 replica_set.go:182] Starting replicaset controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.412650   12110 controller_utils.go:1027] Waiting for caches to sync for ReplicaSet controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470414   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/aws-ebs"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470515   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/gce-pd"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470530   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/cinder"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470552   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/portworx-volume"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470570   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/vsphere-volume"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470580   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-disk"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470590   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/photon-pd"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470600   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/scaleio"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470618   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/storageos"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470629   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/fc"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470638   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/iscsi"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470654   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/rbd"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470666   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/csi"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470787   12110 controllermanager.go:516] Started "attachdetach"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470807   12110 controllermanager.go:501] Starting "clusterrole-aggregation"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470958   12110 attach_detach_controller.go:315] Starting attach detach controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.470980   12110 controller_utils.go:1027] Waiting for caches to sync for attach detach controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.517049   12110 controllermanager.go:516] Started "clusterrole-aggregation"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.517089   12110 controllermanager.go:501] Starting "endpoint"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.517216   12110 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.517235   12110 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.567285   12110 controllermanager.go:516] Started "endpoint"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.567311   12110 controllermanager.go:501] Starting "garbagecollector"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.567418   12110 endpoints_controller.go:160] Starting endpoint controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.567429   12110 controller_utils.go:1027] Waiting for caches to sync for endpoint controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: W1116 17:00:22.611068   12110 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.611533   12110 controllermanager.go:516] Started "garbagecollector"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.611556   12110 controllermanager.go:501] Starting "job"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.611539   12110 garbagecollector.go:133] Starting garbage collector controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.611674   12110 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.611716   12110 graph_builder.go:308] GraphBuilder running
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.661707   12110 garbagecollector.go:204] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1beta1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1beta1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v1, Resource=horizontalpodautoscalers batch/v1, Resource=jobs batch/v1beta1, Resource=cronjobs certificates.k8s.io/v1beta1, Resource=certificatesigningrequests coordination.k8s.io/v1beta1, Resource=leases events.k8s.io/v1beta1, Resource=events extensions/v1beta1, Resource=daemonsets extensions/v1beta1, Resource=deployments extensions/v1beta1, Resource=ingresses extensions/v1beta1, Resource=networkpolicies extensions/v1beta1, Resource=podsecuritypolicies extensions/v1beta1, Resource=replicasets networking.k8s.io/v1, Resource=networkpolicies policy/v1beta1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1beta1, Resource=priorityclasses storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments], removed: []
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.673911   12110 controllermanager.go:516] Started "job"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.673938   12110 controllermanager.go:501] Starting "deployment"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.673975   12110 job_controller.go:143] Starting job controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.673985   12110 controller_utils.go:1027] Waiting for caches to sync for job controller
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] {'pid': 12525, 'retcode': 0, 'stderr': '', 'stdout': 'clusterrolebinding.rbac.authorization.k8s.io/root-cluster-admin-binding created'}
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Completed state [kubectl apply -f /srv/kubernetes/roles/cluster-admin/root-cluster-admin-binding-rolebinding.yml] at time 17:00:22.775752 duration_in_ms=383.51
Nov 16 17:00:22 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116165942225753
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.827128   12110 controllermanager.go:516] Started "deployment"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.827164   12110 controllermanager.go:501] Starting "cronjob"
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.827197   12110 deployment_controller.go:152] Starting deployment controller
Nov 16 17:00:22 ctl01 kube-controller-manager[12110]: I1116 17:00:22.827232   12110 controller_utils.go:1027] Waiting for caches to sync for deployment controller
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.079696   12110 controllermanager.go:516] Started "cronjob"
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.079746   12110 controllermanager.go:501] Starting "serviceaccount"
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.079812   12110 cronjob_controller.go:92] Starting CronJob Manager
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.331952   12110 controllermanager.go:516] Started "serviceaccount"
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.331998   12110 controllermanager.go:501] Starting "disruption"
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.332067   12110 serviceaccounts_controller.go:115] Starting service account controller
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.332081   12110 controller_utils.go:1027] Waiting for caches to sync for service account controller
Nov 16 17:00:23 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command state.sls with jid 20191116170023465408
Nov 16 17:00:23 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 12550
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.586577   12110 controllermanager.go:516] Started "disruption"
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.586620   12110 controllermanager.go:501] Starting "csrapproving"
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.586664   12110 disruption.go:288] Starting disruption controller
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.586689   12110 controller_utils.go:1027] Waiting for caches to sync for disruption controller
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.828710   12110 controllermanager.go:516] Started "csrapproving"
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.828740   12110 controllermanager.go:501] Starting "pv-protection"
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.828798   12110 certificate_controller.go:113] Starting certificate controller
Nov 16 17:00:23 ctl01 kube-controller-manager[12110]: I1116 17:00:23.828819   12110 controller_utils.go:1027] Waiting for caches to sync for certificate controller
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.078243   12110 controllermanager.go:516] Started "pv-protection"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.078266   12110 pv_protection_controller.go:81] Starting PV protection controller
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.078276   12110 controllermanager.go:501] Starting "statefulset"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.078282   12110 controller_utils.go:1027] Waiting for caches to sync for PV protection controller
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.327227   12110 controllermanager.go:516] Started "statefulset"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.327279   12110 controllermanager.go:501] Starting "service"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.327303   12110 stateful_set.go:151] Starting stateful set controller
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.327324   12110 controller_utils.go:1027] Waiting for caches to sync for stateful set controller
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: E1116 17:00:24.579270   12110 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: W1116 17:00:24.579327   12110 controllermanager.go:508] Skipping "service"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.579351   12110 controllermanager.go:501] Starting "csrsigning"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: W1116 17:00:24.579399   12110 controllermanager.go:508] Skipping "csrsigning"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.579416   12110 controllermanager.go:501] Starting "csrcleaner"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.723828   12110 controllermanager.go:516] Started "csrcleaner"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.723854   12110 controllermanager.go:501] Starting "nodeipam"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: W1116 17:00:24.723863   12110 controllermanager.go:508] Skipping "nodeipam"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.723870   12110 controllermanager.go:501] Starting "nodelifecycle"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.723885   12110 cleaner.go:81] Starting CSR cleaner controller
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.977269   12110 node_lifecycle_controller.go:272] Sending events to api server.
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.977670   12110 node_lifecycle_controller.go:312] Controller is using taint based evictions.
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.977849   12110 taint_manager.go:175] Sending events to api server.
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.978063   12110 node_lifecycle_controller.go:378] Controller will taint node by condition.
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.978257   12110 controllermanager.go:516] Started "nodelifecycle"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.978287   12110 controllermanager.go:501] Starting "root-ca-cert-publisher"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: W1116 17:00:24.978303   12110 controllermanager.go:508] Skipping "root-ca-cert-publisher"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.978315   12110 controllermanager.go:501] Starting "podgc"
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.978523   12110 node_lifecycle_controller.go:423] Starting node controller
Nov 16 17:00:24 ctl01 kube-controller-manager[12110]: I1116 17:00:24.978544   12110 controller_utils.go:1027] Waiting for caches to sync for taint controller
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.230124   12110 controllermanager.go:516] Started "podgc"
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.230160   12110 controllermanager.go:501] Starting "resourcequota"
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.230278   12110 gc_controller.go:76] Starting GC controller
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.230308   12110 controller_utils.go:1027] Waiting for caches to sync for GC controller
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: W1116 17:00:25.507271   12110 shared_informer.go:311] resyncPeriod 59324403503323 is smaller than resyncCheckPeriod 86289044575102 and the informer has already started. Changing it to 86289044575102
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507410   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507493   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507521   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507553   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507628   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507653   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507689   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507719   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: W1116 17:00:25.507740   12110 shared_informer.go:311] resyncPeriod 79687194849418 is smaller than resyncCheckPeriod 86289044575102 and the informer has already started. Changing it to 86289044575102
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507789   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507850   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507880   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507909   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507934   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507963   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.507993   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508020   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508048   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508073   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508136   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508170   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508210   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508246   12110 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: E1116 17:00:25.508264   12110 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508286   12110 controllermanager.go:516] Started "resourcequota"
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508297   12110 controllermanager.go:501] Starting "namespace"
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508346   12110 resource_quota_controller.go:276] Starting resource quota controller
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508372   12110 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.508391   12110 resource_quota_monitor.go:301] QuotaMonitor running
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.739861   12110 controllermanager.go:516] Started "namespace"
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.739896   12110 controllermanager.go:501] Starting "horizontalpodautoscaling"
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.739935   12110 namespace_controller.go:186] Starting namespace controller
Nov 16 17:00:25 ctl01 kube-controller-manager[12110]: I1116 17:00:25.739944   12110 controller_utils.go:1027] Waiting for caches to sync for namespace controller
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.421445   12110 controllermanager.go:516] Started "horizontalpodautoscaling"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.421482   12110 controllermanager.go:501] Starting "persistentvolume-binder"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.421518   12110 horizontal.go:156] Starting HPA controller
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.421528   12110 controller_utils.go:1027] Waiting for caches to sync for HPA controller
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.676963   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/host-path"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.676995   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/nfs"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677007   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/glusterfs"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677016   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/rbd"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677025   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/quobyte"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677035   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-file"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677044   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/flocker"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677061   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/portworx-volume"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677074   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/scaleio"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677085   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/local-volume"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677095   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/storageos"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677104   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/aws-ebs"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677113   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/gce-pd"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677125   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/cinder"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677134   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/vsphere-volume"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677144   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-disk"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677153   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/photon-pd"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677209   12110 controllermanager.go:516] Started "persistentvolume-binder"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: W1116 17:00:26.677221   12110 controllermanager.go:495] "tokencleaner" is disabled
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677229   12110 controllermanager.go:501] Starting "persistentvolume-expander"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677239   12110 pv_controller_base.go:271] Starting persistent volume controller
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.677256   12110 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931150   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/aws-ebs"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931184   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/gce-pd"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931197   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/cinder"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931216   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/portworx-volume"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931228   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/vsphere-volume"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931239   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/glusterfs"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931250   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/rbd"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931260   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-disk"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931272   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-file"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931283   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/photon-pd"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931293   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/scaleio"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931304   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/storageos"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931315   12110 plugins.go:547] Loaded volume plugin "kubernetes.io/fc"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931457   12110 controllermanager.go:516] Started "persistentvolume-expander"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931471   12110 controllermanager.go:501] Starting "pvc-protection"
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931557   12110 expand_controller.go:153] Starting expand controller
Nov 16 17:00:26 ctl01 kube-controller-manager[12110]: I1116 17:00:26.931569   12110 controller_utils.go:1027] Waiting for caches to sync for expand controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.001584   12110 resource_quota_controller.go:427] syncing resource quota controller with updated resources from discovery: map[/v1, Resource=events:{} /v1, Resource=persistentvolumeclaims:{} /v1, Resource=services:{} apps/v1, Resource=daemonsets:{} policy/v1beta1, Resource=poddisruptionbudgets:{} /v1, Resource=pods:{} /v1, Resource=resourcequotas:{} extensions/v1beta1, Resource=replicasets:{} apps/v1, Resource=statefulsets:{} events.k8s.io/v1beta1, Resource=events:{} /v1, Resource=podtemplates:{} /v1, Resource=endpoints:{} /v1, Resource=secrets:{} /v1, Resource=limitranges:{} extensions/v1beta1, Resource=deployments:{} /v1, Resource=configmaps:{} /v1, Resource=serviceaccounts:{} apps/v1, Resource=deployments:{} autoscaling/v1, Resource=horizontalpodautoscalers:{} extensions/v1beta1, Resource=networkpolicies:{} networking.k8s.io/v1, Resource=networkpolicies:{} rbac.authorization.k8s.io/v1, Resource=roles:{} extensions/v1beta1, Resource=daemonsets:{} apps/v1, Resource=controllerrevisions:{} apps/v1, Resource=replicasets:{} rbac.authorization.k8s.io/v1, Resource=rolebindings:{} coordination.k8s.io/v1beta1, Resource=leases:{} /v1, Resource=replicationcontrollers:{} extensions/v1beta1, Resource=ingresses:{} batch/v1, Resource=jobs:{} batch/v1beta1, Resource=cronjobs:{}]
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.180248   12110 controllermanager.go:516] Started "pvc-protection"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.180287   12110 controllermanager.go:501] Starting "ttl-after-finished"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: W1116 17:00:27.180302   12110 controllermanager.go:508] Skipping "ttl-after-finished"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.180311   12110 controllermanager.go:501] Starting "replicationcontroller"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.180360   12110 pvc_protection_controller.go:99] Starting PVC protection controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.180383   12110 controller_utils.go:1027] Waiting for caches to sync for PVC protection controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.427380   12110 controllermanager.go:516] Started "replicationcontroller"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.427421   12110 controllermanager.go:501] Starting "daemonset"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.427492   12110 replica_set.go:182] Starting replicationcontroller controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.427814   12110 controller_utils.go:1027] Waiting for caches to sync for ReplicationController controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.681186   12110 controllermanager.go:516] Started "daemonset"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.681232   12110 daemon_controller.go:269] Starting daemon sets controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.681245   12110 controllermanager.go:501] Starting "ttl"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.681257   12110 controller_utils.go:1027] Waiting for caches to sync for daemon sets controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.926968   12110 controllermanager.go:516] Started "ttl"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: W1116 17:00:27.926992   12110 controllermanager.go:495] "bootstrapsigner" is disabled
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.927779   12110 ttl_controller.go:116] Starting TTL controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.927813   12110 controller_utils.go:1027] Waiting for caches to sync for TTL controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: E1116 17:00:27.930835   12110 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.933027   12110 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: W1116 17:00:27.935537   12110 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="cmp001" does not exist
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: W1116 17:00:27.935582   12110 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="cmp002" does not exist
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: W1116 17:00:27.935597   12110 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="ctl01" does not exist
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.967614   12110 controller_utils.go:1034] Caches are synced for endpoint controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.971142   12110 controller_utils.go:1034] Caches are synced for attach detach controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.974121   12110 controller_utils.go:1034] Caches are synced for job controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.977414   12110 controller_utils.go:1034] Caches are synced for persistent volume controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.978415   12110 controller_utils.go:1034] Caches are synced for PV protection controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.978672   12110 controller_utils.go:1034] Caches are synced for taint controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.978733   12110 node_lifecycle_controller.go:665] Controller observed a new Node: "cmp002"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.978766   12110 controller_utils.go:212] Recording Registered Node cmp002 in Controller event message for node cmp002
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.978802   12110 node_lifecycle_controller.go:1222] Initializing eviction metric for zone:
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.978823   12110 node_lifecycle_controller.go:665] Controller observed a new Node: "ctl01"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.978834   12110 controller_utils.go:212] Recording Registered Node ctl01 in Controller event message for node ctl01
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.978848   12110 node_lifecycle_controller.go:665] Controller observed a new Node: "cmp001"
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.978858   12110 controller_utils.go:212] Recording Registered Node cmp001 in Controller event message for node cmp001
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: W1116 17:00:27.978912   12110 node_lifecycle_controller.go:895] Missing timestamp for Node cmp002. Assuming now as a timestamp.
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: W1116 17:00:27.978955   12110 node_lifecycle_controller.go:895] Missing timestamp for Node ctl01. Assuming now as a timestamp.
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: W1116 17:00:27.978982   12110 node_lifecycle_controller.go:895] Missing timestamp for Node cmp001. Assuming now as a timestamp.
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.979020   12110 node_lifecycle_controller.go:1122] Controller detected that zone  is now in state Normal.
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.979461   12110 taint_manager.go:198] Starting NoExecuteTaintManager
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.980502   12110 controller_utils.go:1034] Caches are synced for PVC protection controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.980584   12110 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"cmp001", UID:"912bd657-0892-11ea-a35a-5254009caaa4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node cmp001 event: Registered Node cmp001 in Controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.980616   12110 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"cmp002", UID:"912beffc-0892-11ea-a35a-5254009caaa4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node cmp002 event: Registered Node cmp002 in Controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.980635   12110 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ctl01", UID:"912cd317-0892-11ea-a35a-5254009caaa4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ctl01 event: Registered Node ctl01 in Controller
Nov 16 17:00:27 ctl01 kube-controller-manager[12110]: I1116 17:00:27.981447   12110 controller_utils.go:1034] Caches are synced for daemon sets controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.012812   12110 controller_utils.go:1034] Caches are synced for ReplicaSet controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.023356   12110 controller_utils.go:1034] Caches are synced for HPA controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.027371   12110 controller_utils.go:1034] Caches are synced for deployment controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.027442   12110 controller_utils.go:1034] Caches are synced for stateful set controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.027935   12110 controller_utils.go:1034] Caches are synced for TTL controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.027937   12110 controller_utils.go:1034] Caches are synced for ReplicationController controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.028936   12110 controller_utils.go:1034] Caches are synced for certificate controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.030491   12110 controller_utils.go:1034] Caches are synced for GC controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.031696   12110 controller_utils.go:1034] Caches are synced for expand controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.032218   12110 controller_utils.go:1034] Caches are synced for service account controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.039528   12110 ttl_controller.go:271] Changed ttl annotation for node cmp001 to 0 seconds
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.039805   12110 ttl_controller.go:271] Changed ttl annotation for node ctl01 to 0 seconds
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.040090   12110 controller_utils.go:1034] Caches are synced for namespace controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.040306   12110 ttl_controller.go:271] Changed ttl annotation for node cmp002 to 0 seconds
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.086834   12110 controller_utils.go:1034] Caches are synced for disruption controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.086987   12110 disruption.go:296] Sending events to api server.
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.108536   12110 controller_utils.go:1034] Caches are synced for resource quota controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.217402   12110 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
Nov 16 17:00:28 ctl01 kubelet[9716]: I1116 17:00:28.261888    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.511910   12110 controller_utils.go:1034] Caches are synced for garbage collector controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.511949   12110 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.533664   12110 controller_utils.go:1034] Caches are synced for garbage collector controller
Nov 16 17:00:28 ctl01 kube-controller-manager[12110]: I1116 17:00:28.533694   12110 garbagecollector.go:245] synced garbage collector
Nov 16 17:00:29 ctl01 salt-minion[4526]: [INFO    ] Loading fresh modules for state activity
Nov 16 17:00:29 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/kubeconfig.sh] at time 17:00:30.493430
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/kubeconfig.sh]
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kubeconfig.sh'
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:30 ctl01 salt-minion[4526]: New file
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/kubeconfig.sh] at time 17:00:30.894338 duration_in_ms=400.907
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/kubeconfig.sh > /etc/kubernetes/admin-kube-config] at time 17:00:30.896231
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing state cmd.run for [/etc/kubernetes/kubeconfig.sh > /etc/kubernetes/admin-kube-config]
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing command '/etc/kubernetes/kubeconfig.sh > /etc/kubernetes/admin-kube-config' in directory '/root'
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] {'pid': 12693, 'retcode': 0, 'stderr': '', 'stdout': ''}
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/kubeconfig.sh > /etc/kubernetes/admin-kube-config] at time 17:00:30.955711 duration_in_ms=59.48
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/addons/namespace.yaml] at time 17:00:30.956295
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/addons/namespace.yaml]
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-addon-manager/namespace.yaml'
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:30 ctl01 salt-minion[4526]: New file
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/addons/namespace.yaml] at time 17:00:30.984790 duration_in_ms=28.495
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/kubernetes/manifests/kube-addon-manager.yml] at time 17:00:30.985108
Nov 16 17:00:30 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/etc/kubernetes/manifests/kube-addon-manager.yml]
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/manifest/kube-addon-manager.yml'
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:31 ctl01 salt-minion[4526]: New file
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/kubernetes/manifests/kube-addon-manager.yml] at time 17:00:31.384410 duration_in_ms=399.302
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/default/kube-addon-manager] at time 17:00:31.384744
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/default/kube-addon-manager]
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] File /etc/default/kube-addon-manager is not present
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/default/kube-addon-manager] at time 17:00:31.385647 duration_in_ms=0.902
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Running state [/usr/bin/kube-addons.sh] at time 17:00:31.385878
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/usr/bin/kube-addons.sh]
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] File /usr/bin/kube-addons.sh is not present
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Completed state [/usr/bin/kube-addons.sh] at time 17:00:31.386622 duration_in_ms=0.744
Nov 16 17:00:31 ctl01 kubelet[9716]: E1116 17:00:31.394059    9716 file.go:108] Unable to process watch event: can't process config file "/etc/kubernetes/manifests/kube-addon-manager.ymlAkr3gs": open /etc/kubernetes/manifests/kube-addon-manager.ymlAkr3gs: no such file or directory
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.394173    9716 kubelet.go:1908] SyncLoop (ADD, "file"): "kube-addon-manager-ctl01_kube-system(b6e9bf37167122649376fb470187a295)"
Nov 16 17:00:31 ctl01 kubelet[9716]: E1116 17:00:31.394689    9716 file.go:108] Unable to process watch event: can't process config file "/etc/kubernetes/manifests/kube-addon-manager.ymlAkr3gs": open /etc/kubernetes/manifests/kube-addon-manager.ymlAkr3gs: no such file or directory
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.405807    9716 kubelet.go:1918] SyncLoop (REMOVE, "file"): "kube-addon-manager-ctl01_kube-system(b6e9bf37167122649376fb470187a295)"
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.405852    9716 kubelet.go:2120] Failed to delete pod "kube-addon-manager-ctl01_kube-system(b6e9bf37167122649376fb470187a295)", err: pod not found
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.405870    9716 kubelet.go:1908] SyncLoop (ADD, "file"): "kube-addon-manager-ctl01_kube-system(b6e9bf37167122649376fb470187a295)"
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.416749    9716 kubelet.go:1908] SyncLoop (ADD, "api"): "kube-addon-manager-ctl01_kube-system(990bb037-0892-11ea-a35a-5254009caaa4)"
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.416934    9716 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/b6e9bf37167122649376fb470187a295-addons") pod "kube-addon-manager-ctl01" (UID: "b6e9bf37167122649376fb470187a295")
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.416963    9716 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varlog" (UniqueName: "kubernetes.io/host-path/b6e9bf37167122649376fb470187a295-varlog") pod "kube-addon-manager-ctl01" (UID: "b6e9bf37167122649376fb470187a295")
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.517275    9716 reconciler.go:252] operationExecutor.MountVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/b6e9bf37167122649376fb470187a295-addons") pod "kube-addon-manager-ctl01" (UID: "b6e9bf37167122649376fb470187a295")
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.517329    9716 reconciler.go:252] operationExecutor.MountVolume started for volume "varlog" (UniqueName: "kubernetes.io/host-path/b6e9bf37167122649376fb470187a295-varlog") pod "kube-addon-manager-ctl01" (UID: "b6e9bf37167122649376fb470187a295")
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.517394    9716 operation_generator.go:571] MountVolume.SetUp succeeded for volume "varlog" (UniqueName: "kubernetes.io/host-path/b6e9bf37167122649376fb470187a295-varlog") pod "kube-addon-manager-ctl01" (UID: "b6e9bf37167122649376fb470187a295")
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.517439    9716 operation_generator.go:571] MountVolume.SetUp succeeded for volume "addons" (UniqueName: "kubernetes.io/host-path/b6e9bf37167122649376fb470187a295-addons") pod "kube-addon-manager-ctl01" (UID: "b6e9bf37167122649376fb470187a295")
Nov 16 17:00:31 ctl01 kubelet[9716]: I1116 17:00:31.717197    9716 kuberuntime_manager.go:397] No sandbox for pod "kube-addon-manager-ctl01_kube-system(b6e9bf37167122649376fb470187a295)" can be found. Need to start a new one
Nov 16 17:00:31 ctl01 containerd[6733]: time="2019-11-16T17:00:31.717881683Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-addon-manager-ctl01,Uid:b6e9bf37167122649376fb470187a295,Namespace:kube-system,Attempt:0,}"
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Running state [kube-addon-manager] at time 17:00:31.855141
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Executing state service.dead for [kube-addon-manager]
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Executing command ['systemctl', 'status', 'kube-addon-manager.service', '-n', '0'] in directory '/root'
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] The named service kube-addon-manager is not available
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Completed state [kube-addon-manager] at time 17:00:31.889695 duration_in_ms=34.555
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Running state [/etc/systemd/system/kube-addon-manager.service] at time 17:00:31.890160
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Executing state file.absent for [/etc/systemd/system/kube-addon-manager.service]
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] File /etc/systemd/system/kube-addon-manager.service is not present
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Completed state [/etc/systemd/system/kube-addon-manager.service] at time 17:00:31.891888 duration_in_ms=1.728
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Running state [/srv/kubernetes/conformance.yml] at time 17:00:31.892256
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Executing state file.managed for [/srv/kubernetes/conformance.yml]
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/conformance/conformance.yml'
Nov 16 17:00:31 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:32 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:32 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 17:00:32 ctl01 salt-minion[4526]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 17:00:32 ctl01 salt-minion[4526]: [INFO    ] File changed:
Nov 16 17:00:32 ctl01 salt-minion[4526]: New file
Nov 16 17:00:32 ctl01 salt-minion[4526]: [INFO    ] Completed state [/srv/kubernetes/conformance.yml] at time 17:00:32.658245 duration_in_ms=765.988
Nov 16 17:00:32 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116170023465408
Nov 16 17:00:33 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command cp.push with jid 20191116170033198322
Nov 16 17:00:33 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 12823
Nov 16 17:00:33 ctl01 containerd[6733]: time="2019-11-16T17:00:33.232667440Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{},}"
Nov 16 17:00:33 ctl01 containerd[6733]: time="2019-11-16T17:00:33.249604582Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:33 ctl01 containerd[6733]: time="2019-11-16T17:00:33.250345596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:33 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116170033198322
Nov 16 17:00:33 ctl01 containerd[6733]: time="2019-11-16T17:00:33.299129641Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:33 ctl01 containerd[6733]: time="2019-11-16T17:00:33.300899237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:33 ctl01 containerd[6733]: time="2019-11-16T17:00:33.302477439Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64@sha256:9e2b9d9d64bab9f4d80790ec6c6fe09cdb5714d43bf23357e0ed0d0ab512fffd,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:33 ctl01 containerd[6733]: time="2019-11-16T17:00:33.312392569Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/f5d945308d2935087d1cb192371448f884f8becb150baa12071d89f228a83764/shim.sock" debug=false pid=12828
Nov 16 17:00:33 ctl01 containerd[6733]: time="2019-11-16T17:00:33.701747712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-addon-manager-ctl01,Uid:b6e9bf37167122649376fb470187a295,Namespace:kube-system,Attempt:0,} returns sandbox id "f5d945308d2935087d1cb192371448f884f8becb150baa12071d89f228a83764""
Nov 16 17:00:33 ctl01 kubelet[9716]: I1116 17:00:33.707254    9716 provider.go:116] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 16 17:00:33 ctl01 kubelet[9716]: I1116 17:00:33.707857    9716 kubelet.go:1953] SyncLoop (PLEG): "kube-addon-manager-ctl01_kube-system(b6e9bf37167122649376fb470187a295)", event: &pleg.PodLifecycleEvent{ID:"b6e9bf37167122649376fb470187a295", Type:"ContainerStarted", Data:"f5d945308d2935087d1cb192371448f884f8becb150baa12071d89f228a83764"}
Nov 16 17:00:33 ctl01 containerd[6733]: time="2019-11-16T17:00:33.707949413Z" level=info msg="PullImage "k8s.gcr.io/kube-addon-manager:v8.9""
Nov 16 17:00:33 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command cp.push_dir with jid 20191116170033971440
Nov 16 17:00:34 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 12890
Nov 16 17:00:34 ctl01 salt-minion[4526]: [INFO    ] Returning information for job: 20191116170033971440
Nov 16 17:00:35 ctl01 containerd[6733]: time="2019-11-16T17:00:35.157702376Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-addon-manager:v8.9,Labels:map[string]string{},}"
Nov 16 17:00:35 ctl01 containerd[6733]: time="2019-11-16T17:00:35.165995197Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:13da3ecfc10d338d91f3fecf494fac162c704fc284006ec289d660ec7cbba3e4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:35 ctl01 containerd[6733]: time="2019-11-16T17:00:35.166768802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-addon-manager:v8.9,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.026768465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:13da3ecfc10d338d91f3fecf494fac162c704fc284006ec289d660ec7cbba3e4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.028403755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-addon-manager:v8.9,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.030026674Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-addon-manager@sha256:2fd1daf3d3cf0e94a753f2263b60dbb0d42b107b5cde0c75ee3fc5c830e016e4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.030523693Z" level=info msg="PullImage "k8s.gcr.io/kube-addon-manager:v8.9" returns image reference "sha256:13da3ecfc10d338d91f3fecf494fac162c704fc284006ec289d660ec7cbba3e4""
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.033148596Z" level=info msg="CreateContainer within sandbox "f5d945308d2935087d1cb192371448f884f8becb150baa12071d89f228a83764" for container &ContainerMetadata{Name:kube-addon-manager,Attempt:0,}"
Nov 16 17:00:38 ctl01 kernel: [  296.995792] audit: type=1400 audit(1573923638.177:18): apparmor="STATUS" operation="profile_load" profile="unconfined" name="cri-containerd.apparmor.d" pid=12966 comm="apparmor_parser"
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.197907755Z" level=info msg="CreateContainer within sandbox "f5d945308d2935087d1cb192371448f884f8becb150baa12071d89f228a83764" for &ContainerMetadata{Name:kube-addon-manager,Attempt:0,} returns container id "4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e""
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.198653079Z" level=info msg="StartContainer for "4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e""
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.202888951Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e/shim.sock" debug=false pid=12968
Nov 16 17:00:38 ctl01 kubelet[9716]: I1116 17:00:38.269387    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.650706443Z" level=info msg="Finish piping stdout of container "4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e""
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.650705356Z" level=info msg="Finish piping stderr of container "4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e""
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.668749458Z" level=info msg="StartContainer for "4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e" returns successfully"
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.671346609Z" level=info msg="CreateContainer within sandbox "f5d945308d2935087d1cb192371448f884f8becb150baa12071d89f228a83764" for container &ContainerMetadata{Name:kube-addon-manager,Attempt:0,}"
Nov 16 17:00:38 ctl01 containerd[6733]: time="2019-11-16T17:00:38.680348120Z" level=error msg="CreateContainer within sandbox "f5d945308d2935087d1cb192371448f884f8becb150baa12071d89f228a83764" for &ContainerMetadata{Name:kube-addon-manager,Attempt:0,} failed" error="failed to reserve container name "kube-addon-manager_kube-addon-manager-ctl01_kube-system_b6e9bf37167122649376fb470187a295_0": name "kube-addon-manager_kube-addon-manager-ctl01_kube-system_b6e9bf37167122649376fb470187a295_0" is reserved for "4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e""
Nov 16 17:00:38 ctl01 kubelet[9716]: E1116 17:00:38.680513    9716 remote_runtime.go:191] CreateContainer in sandbox "f5d945308d2935087d1cb192371448f884f8becb150baa12071d89f228a83764" from runtime service failed: rpc error: code = Unknown desc = failed to reserve container name "kube-addon-manager_kube-addon-manager-ctl01_kube-system_b6e9bf37167122649376fb470187a295_0": name "kube-addon-manager_kube-addon-manager-ctl01_kube-system_b6e9bf37167122649376fb470187a295_0" is reserved for "4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e"
Nov 16 17:00:38 ctl01 kubelet[9716]: E1116 17:00:38.680588    9716 kuberuntime_manager.go:749] container start failed: CreateContainerError: failed to reserve container name "kube-addon-manager_kube-addon-manager-ctl01_kube-system_b6e9bf37167122649376fb470187a295_0": name "kube-addon-manager_kube-addon-manager-ctl01_kube-system_b6e9bf37167122649376fb470187a295_0" is reserved for "4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e"
Nov 16 17:00:38 ctl01 kubelet[9716]: E1116 17:00:38.680655    9716 pod_workers.go:190] Error syncing pod b6e9bf37167122649376fb470187a295 ("kube-addon-manager-ctl01_kube-system(b6e9bf37167122649376fb470187a295)"), skipping: failed to "StartContainer" for "kube-addon-manager" with CreateContainerError: "failed to reserve container name \"kube-addon-manager_kube-addon-manager-ctl01_kube-system_b6e9bf37167122649376fb470187a295_0\": name \"kube-addon-manager_kube-addon-manager-ctl01_kube-system_b6e9bf37167122649376fb470187a295_0\" is reserved for \"4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e\""
Nov 16 17:00:38 ctl01 kubelet[9716]: I1116 17:00:38.714496    9716 kubelet.go:1953] SyncLoop (PLEG): "kube-addon-manager-ctl01_kube-system(b6e9bf37167122649376fb470187a295)", event: &pleg.PodLifecycleEvent{ID:"b6e9bf37167122649376fb470187a295", Type:"ContainerStarted", Data:"4f4946972a9701580fc92c942179cc99b04d1da77f9980216546309f75cf874e"}
Nov 16 17:00:39 ctl01 kube-apiserver[11919]: I1116 17:00:39.392827   11919 controller.go:608] quota admission added evaluator for: namespaces
Nov 16 17:00:40 ctl01 kube-apiserver[11919]: I1116 17:00:40.202238   11919 controller.go:608] quota admission added evaluator for: deployments.extensions
Nov 16 17:00:40 ctl01 kube-apiserver[11919]: I1116 17:00:40.217653   11919 controller.go:608] quota admission added evaluator for: replicasets.apps
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.220262   12110 replica_set.go:477] Too few replicas for ReplicaSet kube-system/calico-kube-controllers-996f9b774, need 1, creating 1
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.222515   12110 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"calico-kube-controllers", UID:"9e487ca3-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"379", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set calico-kube-controllers-996f9b774 to 1
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.228895   12110 deployment_controller.go:484] Error syncing deployment kube-system/calico-kube-controllers: Operation cannot be fulfilled on deployments.apps "calico-kube-controllers": the object has been modified; please apply your changes to the latest version and try again
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.231238   12110 deployment_controller.go:484] Error syncing deployment kube-system/calico-kube-controllers: Operation cannot be fulfilled on deployments.apps "calico-kube-controllers": the object has been modified; please apply your changes to the latest version and try again
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.252298   12110 deployment_controller.go:484] Error syncing deployment kube-system/calico-kube-controllers: Operation cannot be fulfilled on deployments.apps "calico-kube-controllers": the object has been modified; please apply your changes to the latest version and try again
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.266066   12110 replica_set.go:477] Too few replicas for ReplicaSet kube-system/coredns-7f8f94c97b, need 2, creating 2
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.266683   12110 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9e519a6b-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7f8f94c97b to 2
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.269147   12110 replica_set.go:514] Slow-start failure. Skipping creation of 2 pods, decrementing expectations for ReplicaSet kube-system/coredns-7f8f94c97b
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.269291   12110 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7f8f94c97b", UID:"9e5216d4-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"393", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "coredns-7f8f94c97b-" is forbidden: error looking up service account kube-system/coredns: serviceaccount "coredns" not found
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: E1116 17:00:40.276236   12110 replica_set.go:450] Sync "kube-system/coredns-7f8f94c97b" failed with pods "coredns-7f8f94c97b-" is forbidden: error looking up service account kube-system/coredns: serviceaccount "coredns" not found
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.276336   12110 replica_set.go:477] Too few replicas for ReplicaSet kube-system/coredns-7f8f94c97b, need 2, creating 2
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.277664   12110 deployment_controller.go:484] Error syncing deployment kube-system/coredns: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
Nov 16 17:00:40 ctl01 kube-proxy[9926]: I1116 17:00:40.289132    9926 service.go:309] Adding new service port "kube-system/coredns:dns" at 10.254.0.10:53/UDP
Nov 16 17:00:40 ctl01 kube-proxy[9926]: I1116 17:00:40.289188    9926 service.go:309] Adding new service port "kube-system/coredns:dns-tcp" at 10.254.0.10:53/TCP
Nov 16 17:00:40 ctl01 kube-apiserver[11919]: I1116 17:00:40.303759   11919 controller.go:608] quota admission added evaluator for: daemonsets.extensions
Nov 16 17:00:40 ctl01 kube-apiserver[11919]: I1116 17:00:40.312388   11919 controller.go:608] quota admission added evaluator for: controllerrevisions.apps
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.321618   12110 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"netchecker", Name:"netchecker-agent", UID:"9e57ff8d-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: netchecker-agent-vs5pv
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.325403   12110 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"netchecker", Name:"netchecker-agent", UID:"9e57ff8d-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: netchecker-agent-w98nt
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.326416   12110 replica_set.go:477] Too few replicas for ReplicaSet netchecker/netchecker-server-7876fb46d4, need 1, creating 1
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.327191   12110 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"netchecker", Name:"netchecker-agent", UID:"9e57ff8d-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: netchecker-agent-2vjgr
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.328134   12110 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"netchecker", Name:"netchecker-server", UID:"9e5a87fb-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set netchecker-server-7876fb46d4 to 1
Nov 16 17:00:40 ctl01 kubelet[9716]: I1116 17:00:40.331523    9716 kubelet.go:1908] SyncLoop (ADD, "api"): "netchecker-agent-2vjgr_netchecker(9e5ae819-0892-11ea-a35a-5254009caaa4)"
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.339745   12110 deployment_controller.go:484] Error syncing deployment netchecker/netchecker-server: Operation cannot be fulfilled on deployments.apps "netchecker-server": the object has been modified; please apply your changes to the latest version and try again
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.342115   12110 replica_set.go:514] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet netchecker/netchecker-server-7876fb46d4
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.342141   12110 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"netchecker", Name:"netchecker-server-7876fb46d4", UID:"9e5b3858-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"417", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "netchecker-server-7876fb46d4-" is forbidden: error looking up service account netchecker/netchecker: serviceaccount "netchecker" not found
Nov 16 17:00:40 ctl01 kube-proxy[9926]: I1116 17:00:40.362488    9926 service.go:309] Adding new service port "netchecker/netchecker:" at 10.254.54.193:80/TCP
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: E1116 17:00:40.363444   12110 replica_set.go:450] Sync "netchecker/netchecker-server-7876fb46d4" failed with pods "netchecker-server-7876fb46d4-" is forbidden: error looking up service account netchecker/netchecker: serviceaccount "netchecker" not found
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.364227   12110 replica_set.go:477] Too few replicas for ReplicaSet netchecker/netchecker-server-7876fb46d4, need 1, creating 1
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.367582   12110 deployment_controller.go:484] Error syncing deployment netchecker/netchecker-server: Operation cannot be fulfilled on deployments.apps "netchecker-server": the object has been modified; please apply your changes to the latest version and try again
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.373867   12110 deployment_controller.go:484] Error syncing deployment netchecker/netchecker-server: Operation cannot be fulfilled on deployments.apps "netchecker-server": the object has been modified; please apply your changes to the latest version and try again
Nov 16 17:00:40 ctl01 kube-controller-manager[12110]: I1116 17:00:40.386411   12110 deployment_controller.go:484] Error syncing deployment netchecker/netchecker-server: Operation cannot be fulfilled on deployments.apps "netchecker-server": the object has been modified; please apply your changes to the latest version and try again
Nov 16 17:00:40 ctl01 kube-proxy[9926]: I1116 17:00:40.387816    9926 proxier.go:1427] Opened local port "nodePort for netchecker/netchecker:" (:30276/tcp)
Nov 16 17:00:40 ctl01 kubelet[9716]: I1116 17:00:40.435978    9716 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-czmvh" (UniqueName: "kubernetes.io/secret/9e5ae819-0892-11ea-a35a-5254009caaa4-default-token-czmvh") pod "netchecker-agent-2vjgr" (UID: "9e5ae819-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:40 ctl01 kubelet[9716]: I1116 17:00:40.536384    9716 reconciler.go:252] operationExecutor.MountVolume started for volume "default-token-czmvh" (UniqueName: "kubernetes.io/secret/9e5ae819-0892-11ea-a35a-5254009caaa4-default-token-czmvh") pod "netchecker-agent-2vjgr" (UID: "9e5ae819-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:40 ctl01 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/9e5ae819-0892-11ea-a35a-5254009caaa4/volumes/kubernetes.io~secret/default-token-czmvh.
Nov 16 17:00:40 ctl01 kubelet[9716]: I1116 17:00:40.555182    9716 operation_generator.go:571] MountVolume.SetUp succeeded for volume "default-token-czmvh" (UniqueName: "kubernetes.io/secret/9e5ae819-0892-11ea-a35a-5254009caaa4-default-token-czmvh") pod "netchecker-agent-2vjgr" (UID: "9e5ae819-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:40 ctl01 kubelet[9716]: I1116 17:00:40.638962    9716 kuberuntime_manager.go:397] No sandbox for pod "netchecker-agent-2vjgr_netchecker(9e5ae819-0892-11ea-a35a-5254009caaa4)" can be found. Need to start a new one
Nov 16 17:00:40 ctl01 containerd[6733]: time="2019-11-16T17:00:40.639828999Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:netchecker-agent-2vjgr,Uid:9e5ae819-0892-11ea-a35a-5254009caaa4,Namespace:netchecker,Attempt:0,}"
Nov 16 17:00:40 ctl01 containerd[6733]: 2019-11-16 17:00:40.934 [INFO][13227] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"netchecker", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"ctl01", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"netchecker-agent-2vjgr", ContainerID:"c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667"}}
Nov 16 17:00:40 ctl01 containerd[6733]: 2019-11-16 17:00:40.999 [INFO][13227] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" Namespace="netchecker" Pod="netchecker-agent-2vjgr" WorkloadEndpoint="ctl01-k8s-netchecker--agent--2vjgr-eth0"
Nov 16 17:00:41 ctl01 containerd[6733]: Calico CNI IPAM request count IPv4=1 IPv6=0
Nov 16 17:00:41 ctl01 containerd[6733]: Calico CNI IPAM handle=calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.090 [INFO][13241] calico-ipam.go 186: Auto assigning IP ContainerID="c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" HandleID="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" Workload="ctl01-k8s-netchecker--agent--2vjgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc42022f250), Attrs:map[string]string(nil), Hostname:"ctl01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}}
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.090 [INFO][13241] ipam.go 70: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ctl01'
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.091 [INFO][13241] ipam.go 254: Looking up existing affinities for host handle="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.092 [INFO][13241] ipam.go 265: Ran out of existing affine blocks for host handle="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.093 [INFO][13241] ipam.go 324: No more affine blocks, but need to allocate 1 more addresses - allocate another block handle="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.093 [INFO][13241] ipam.go 328: Looking for an unclaimed block handle="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.094 [INFO][13241] ipam_block_reader_writer.go 106: Found free block: 192.168.237.0/26
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.094 [INFO][13241] ipam.go 340: Found unclaimed block host="ctl01" subnet=192.168.237.0/26
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.094 [INFO][13241] ipam_block_reader_writer.go 122: Trying to create affinity in pending state host="ctl01" subnet=192.168.237.0/26
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.098 [INFO][13241] ipam_block_reader_writer.go 152: Successfully created pending affinity for block host="ctl01" subnet=192.168.237.0/26
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.098 [INFO][13241] ipam.go 118: Attempting to load block cidr=192.168.237.0/26 host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.100 [INFO][13241] ipam.go 123: The referenced block doesn't exist, trying to create it cidr=192.168.237.0/26 host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.101 [INFO][13241] ipam.go 130: Wrote affinity as pending cidr=192.168.237.0/26 host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.102 [INFO][13241] ipam.go 139: Attempting to claim the block cidr=192.168.237.0/26 host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.102 [INFO][13241] ipam_block_reader_writer.go 175: Attempting to create a new block host="ctl01" subnet=192.168.237.0/26
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.103 [INFO][13241] ipam_block_reader_writer.go 217: Successfully created block
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.103 [INFO][13241] ipam_block_reader_writer.go 228: Confirming affinity host="ctl01" subnet=192.168.237.0/26
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.105 [INFO][13241] ipam_block_reader_writer.go 243: Successfully confirmed affinity host="ctl01" subnet=192.168.237.0/26
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.105 [INFO][13241] ipam.go 372: Claimed new block &{BlockKey(cidr=192.168.237.0/26) 0xc420696bd0 448 0s} - assigning 1 addresses host="ctl01" subnet=192.168.237.0/26
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.105 [INFO][13241] ipam.go 677: Attempting to assign 1 addresses from block block=192.168.237.0/26 handle="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.106 [INFO][13241] ipam.go 1110: Creating new handle: calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.106 [INFO][13241] ipam.go 700: Writing block in order to claim IPs block=192.168.237.0/26 handle="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.108 [INFO][13241] ipam.go 710: Successfully claimed IPs: [192.168.237.0] block=192.168.237.0/26 handle="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.108 [INFO][13241] ipam.go 456: Auto-assigned 1 out of 1 IPv4s: [192.168.237.0] handle="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" host="ctl01"
Nov 16 17:00:41 ctl01 containerd[6733]: Calico CNI IPAM assigned addresses IPv4=[192.168.237.0] IPv6=[]
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.108 [INFO][13241] calico-ipam.go 214: IPAM Result ContainerID="c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" HandleID="calico-k8s-network.c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" Workload="ctl01-k8s-netchecker--agent--2vjgr-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc4200d0de0)}
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.110 [INFO][13227] k8s.go 365: Populated endpoint ContainerID="c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" Namespace="netchecker" Pod="netchecker-agent-2vjgr" WorkloadEndpoint="ctl01-k8s-netchecker--agent--2vjgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ctl01-k8s-netchecker--agent--2vjgr-eth0", GenerateName:"", Namespace:"netchecker", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ctl01", ContainerID:"", Pod:"netchecker-agent-2vjgr", Endpoint:"eth0", IPNetworks:[]string{"192.168.237.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"", MAC:"", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:41 ctl01 containerd[6733]: Calico CNI using IPs: [192.168.237.0/32]
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.111 [INFO][13227] network.go 75: Setting the host side veth name to calid435d8ab847 ContainerID="c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" Namespace="netchecker" Pod="netchecker-agent-2vjgr" WorkloadEndpoint="ctl01-k8s-netchecker--agent--2vjgr-eth0"
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.117 [INFO][13227] network.go 380: Disabling IPv4 forwarding ContainerID="c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" Namespace="netchecker" Pod="netchecker-agent-2vjgr" WorkloadEndpoint="ctl01-k8s-netchecker--agent--2vjgr-eth0"
Nov 16 17:00:41 ctl01 kernel: [  299.931879] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Nov 16 17:00:41 ctl01 kernel: [  299.932132] IPv6: ADDRCONF(NETDEV_UP): calid435d8ab847: link is not ready
Nov 16 17:00:41 ctl01 kernel: [  299.932146] IPv6: ADDRCONF(NETDEV_CHANGE): calid435d8ab847: link becomes ready
Nov 16 17:00:41 ctl01 kernel: [  299.932202] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 16 17:00:41 ctl01 systemd-udevd[13260]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.153 [INFO][13227] k8s.go 392: Added Mac, interface name, and active container ID to endpoint ContainerID="c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" Namespace="netchecker" Pod="netchecker-agent-2vjgr" WorkloadEndpoint="ctl01-k8s-netchecker--agent--2vjgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ctl01-k8s-netchecker--agent--2vjgr-eth0", GenerateName:"", Namespace:"netchecker", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ctl01", ContainerID:"c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667", Pod:"netchecker-agent-2vjgr", Endpoint:"eth0", IPNetworks:[]string{"192.168.237.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"calid435d8ab847", MAC:"ea:94:64:92:ff:60", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.159 [INFO][13227] k8s.go 424: Wrote updated endpoint to datastore ContainerID="c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" Namespace="netchecker" Pod="netchecker-agent-2vjgr" WorkloadEndpoint="ctl01-k8s-netchecker--agent--2vjgr-eth0"
Nov 16 17:00:41 ctl01 containerd[6733]: Calico CNI creating profile: calico-k8s-network
Nov 16 17:00:41 ctl01 containerd[6733]: 2019-11-16 17:00:41.162 [INFO][13227] calico.go 366: Creating profile ContainerID="c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" Namespace="netchecker" Pod="netchecker-agent-2vjgr" WorkloadEndpoint="ctl01-k8s-netchecker--agent--2vjgr-" profile=&v3.Profile{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-k8s-network", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.ProfileSpec{Ingress:[]v3.Rule{v3.Rule{Action:"Allow", IPVersion:(*int)(nil), Protocol:(*numorstring.Protocol)(nil), ICMP:(*v3.ICMPFields)(nil), NotProtocol:(*numorstring.Protocol)(nil), NotICMP:(*v3.ICMPFields)(nil), Source:v3.EntityRule{Nets:[]string(nil), Selector:"", NamespaceSelector:"", Ports:[]numorstring.Port(nil), NotNets:[]string(nil), NotSelector:"", NotPorts:[]numorstring.Port(nil), ServiceAccounts:(*v3.ServiceAccountMatch)(nil)}, Destination:v3.EntityRule{Nets:[]string(nil), Selector:"", NamespaceSelector:"", Ports:[]numorstring.Port(nil), NotNets:[]string(nil), NotSelector:"", NotPorts:[]numorstring.Port(nil), ServiceAccounts:(*v3.ServiceAccountMatch)(nil)}, HTTP:(*v3.HTTPMatch)(nil)}}, Egress:[]v3.Rule{v3.Rule{Action:"Allow", IPVersion:(*int)(nil), Protocol:(*numorstring.Protocol)(nil), ICMP:(*v3.ICMPFields)(nil), NotProtocol:(*numorstring.Protocol)(nil), NotICMP:(*v3.ICMPFields)(nil), Source:v3.EntityRule{Nets:[]string(nil), Selector:"", NamespaceSelector:"", Ports:[]numorstring.Port(nil), NotNets:[]string(nil), NotSelector:"", NotPorts:[]numorstring.Port(nil), ServiceAccounts:(*v3.ServiceAccountMatch)(nil)}, Destination:v3.EntityRule{Nets:[]string(nil), Selector:"", NamespaceSelector:"", Ports:[]numorstring.Port(nil), NotNets:[]string(nil), NotSelector:"", NotPorts:[]numorstring.Port(nil), ServiceAccounts:(*v3.ServiceAccountMatch)(nil)}, HTTP:(*v3.HTTPMatch)(nil)}}, LabelsToApply:map[string]string{"calico-k8s-network":""}}}
Nov 16 17:00:41 ctl01 containerd[6733]: time="2019-11-16T17:00:41.187617799Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667/shim.sock" debug=false pid=13322
Nov 16 17:00:41 ctl01 kube-controller-manager[12110]: I1116 17:00:41.233480   12110 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"calico-kube-controllers-996f9b774", UID:"9e4ad7f1-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: calico-kube-controllers-996f9b774-xqc8g
Nov 16 17:00:41 ctl01 kube-controller-manager[12110]: I1116 17:00:41.281995   12110 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7f8f94c97b", UID:"9e5216d4-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7f8f94c97b-v2ccc
Nov 16 17:00:41 ctl01 kube-controller-manager[12110]: I1116 17:00:41.286294   12110 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7f8f94c97b", UID:"9e5216d4-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7f8f94c97b-x7l26
Nov 16 17:00:41 ctl01 kube-controller-manager[12110]: I1116 17:00:41.370417   12110 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"netchecker", Name:"netchecker-server-7876fb46d4", UID:"9e5b3858-0892-11ea-a35a-5254009caaa4", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: netchecker-server-7876fb46d4-l6hmd
Nov 16 17:00:41 ctl01 kube-controller-manager[12110]: I1116 17:00:41.386983   12110 deployment_controller.go:484] Error syncing deployment netchecker/netchecker-server: Operation cannot be fulfilled on deployments.apps "netchecker-server": the object has been modified; please apply your changes to the latest version and try again
Nov 16 17:00:41 ctl01 containerd[6733]: time="2019-11-16T17:00:41.396921740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:netchecker-agent-2vjgr,Uid:9e5ae819-0892-11ea-a35a-5254009caaa4,Namespace:netchecker,Attempt:0,} returns sandbox id "c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667""
Nov 16 17:00:41 ctl01 containerd[6733]: time="2019-11-16T17:00:41.400312287Z" level=info msg="PullImage "mirantis/k8s-netchecker-agent:stable""
Nov 16 17:00:41 ctl01 kubelet[9716]: I1116 17:00:41.721048    9716 kubelet.go:1953] SyncLoop (PLEG): "netchecker-agent-2vjgr_netchecker(9e5ae819-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9e5ae819-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667"}
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.222036886Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/mirantis/k8s-netchecker-agent:stable,Labels:map[string]string{},}"
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.227304247Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d16c8d7f4d5bdaecda098c54c655027871da52f292683759f5a32c8ecd43b3f0,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.227777826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/mirantis/k8s-netchecker-agent:stable,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.541777370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:d16c8d7f4d5bdaecda098c54c655027871da52f292683759f5a32c8ecd43b3f0,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.544391200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/mirantis/k8s-netchecker-agent:stable,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.546718670Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/mirantis/k8s-netchecker-agent@sha256:4ac49ebef7eaeaa5a8a19f56faa73740bf4861979aa067d0125867a72846720a,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.547562713Z" level=info msg="PullImage "mirantis/k8s-netchecker-agent:stable" returns image reference "sha256:d16c8d7f4d5bdaecda098c54c655027871da52f292683759f5a32c8ecd43b3f0""
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.549929114Z" level=info msg="CreateContainer within sandbox "c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" for container &ContainerMetadata{Name:netchecker-agent,Attempt:0,}"
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.641665308Z" level=info msg="CreateContainer within sandbox "c44af751681bda57b24f4cc26e8b212b1081adb5e335fd515d52282e01e4d667" for &ContainerMetadata{Name:netchecker-agent,Attempt:0,} returns container id "b459eb91a0af2664ec2124352ae54f8a756ae412d2d9bc209e8ea48fd47bf139""
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.642260924Z" level=info msg="StartContainer for "b459eb91a0af2664ec2124352ae54f8a756ae412d2d9bc209e8ea48fd47bf139""
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.643535404Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/b459eb91a0af2664ec2124352ae54f8a756ae412d2d9bc209e8ea48fd47bf139/shim.sock" debug=false pid=13402
Nov 16 17:00:43 ctl01 containerd[6733]: time="2019-11-16T17:00:43.840271774Z" level=info msg="StartContainer for "b459eb91a0af2664ec2124352ae54f8a756ae412d2d9bc209e8ea48fd47bf139" returns successfully"
Nov 16 17:00:43 ctl01 kubelet[9716]: I1116 17:00:43.843031    9716 kubelet.go:1953] SyncLoop (PLEG): "netchecker-agent-2vjgr_netchecker(9e5ae819-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9e5ae819-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"b459eb91a0af2664ec2124352ae54f8a756ae412d2d9bc209e8ea48fd47bf139"}
Nov 16 17:00:48 ctl01 kubelet[9716]: I1116 17:00:48.279036    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:00:48 ctl01 kube-proxy[9926]: I1116 17:00:48.483046    9926 proxier.go:659] Stale udp service kube-system/coredns:dns -> 10.254.0.10
Nov 16 17:00:48 ctl01 kernel: [  307.342351] ctnetlink v0.93: registering with nfnetlink.
Nov 16 17:00:53 ctl01 etcd[5381]: read-only range request "key:\"/registry/secrets/kube-system/coredns-token-lkbkg\" " with result "range_response_count:1 size:3675" took too long (239.578537ms) to execute
Nov 16 17:00:53 ctl01 etcd[5381]: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:998" took too long (239.818663ms) to execute
Nov 16 17:00:53 ctl01 etcd[5381]: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:1 size:260" took too long (278.230765ms) to execute
Nov 16 17:00:58 ctl01 kubelet[9716]: I1116 17:00:58.288868    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:01:08 ctl01 kubelet[9716]: I1116 17:01:08.297763    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:01:18 ctl01 kubelet[9716]: I1116 17:01:18.308047    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:01:28 ctl01 kubelet[9716]: I1116 17:01:28.317923    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:01:38 ctl01 kubelet[9716]: I1116 17:01:38.327220    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:01:48 ctl01 kubelet[9716]: I1116 17:01:48.336515    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:01:58 ctl01 kubelet[9716]: I1116 17:01:58.346522    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:02:08 ctl01 kubelet[9716]: I1116 17:02:08.364298    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:02:18 ctl01 kubelet[9716]: I1116 17:02:18.376741    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:02:28 ctl01 kubelet[9716]: I1116 17:02:28.386433    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:02:38 ctl01 kubelet[9716]: I1116 17:02:38.397504    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:02:48 ctl01 kubelet[9716]: I1116 17:02:48.410583    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:02:58 ctl01 kubelet[9716]: I1116 17:02:58.422152    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:03:08 ctl01 kubelet[9716]: I1116 17:03:08.431939    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:03:18 ctl01 kubelet[9716]: I1116 17:03:18.443136    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:03:28 ctl01 kubelet[9716]: I1116 17:03:28.454598    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:03:35 ctl01 kube-apiserver[11919]: I1116 17:03:35.126433   11919 node_authorizer.go:176] NODE DENY: ctl01.mcp-k8s-calico-noha.local &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc00c974c80), Verb:"create", Namespace:"default", APIGroup:"", APIVersion:"v1", Resource:"configmaps", Subresource:"", Name:"", ResourceRequest:true, Path:"/api/v1/namespaces/default/configmaps"}
Nov 16 17:03:35 ctl01 kube-apiserver[11919]: I1116 17:03:35.203365   11919 node_authorizer.go:176] NODE DENY: ctl01.mcp-k8s-calico-noha.local &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc00c719fc0), Verb:"delete", Namespace:"default", APIGroup:"", APIVersion:"v1", Resource:"configmaps", Subresource:"", Name:"k8s-2b8c15aa-key", ResourceRequest:true, Path:"/api/v1/namespaces/default/configmaps/k8s-2b8c15aa-key"}
Nov 16 17:03:35 ctl01 kube-apiserver[11919]: I1116 17:03:35.363516   11919 node_authorizer.go:176] NODE DENY: ctl01.mcp-k8s-calico-noha.local &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc00c975e00), Verb:"delete", Namespace:"default", APIGroup:"", APIVersion:"v1", Resource:"configmaps", Subresource:"", Name:"k8s-2b8c15aa-key", ResourceRequest:true, Path:"/api/v1/namespaces/default/configmaps/k8s-2b8c15aa-key"}
Nov 16 17:03:38 ctl01 kubelet[9716]: I1116 17:03:38.468660    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:03:48 ctl01 kubelet[9716]: I1116 17:03:48.484256    9716 setters.go:72] Using node IP: "172.16.10.36"
Nov 16 17:03:48 ctl01 salt-minion[4526]: [INFO    ] User sudo_ubuntu Executing command cp.push_dir with jid 20191116170348759627
Nov 16 17:03:48 ctl01 salt-minion[4526]: [INFO    ] Starting a new job with PID 14224
