go tool pprof http://localhost:8001/debug/pprof/heap
go tool pprof http://localhost:8001/debug/pprof/profile?seconds=30
go tool pprof http://localhost:8001/debug/pprof/block
curl -s -o trace.out http://localhost:8001/debug/pprof/trace?seconds=5   # execution traces are viewed with go tool trace, not pprof
go tool trace trace.out
go tool pprof http://localhost:8001/debug/pprof/mutex
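Kubernetes components expose these /debug/pprof endpoints when profiling is enabled; for your own Go service the handlers come from net/http/pprof. A minimal sketch of wiring them up (the localhost:8001 address is only an assumption to match the commands above):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// In a real service this listener usually runs in its own goroutine next to
	// the actual workload; here it is the whole program for brevity.
	log.Fatal(http.ListenAndServe("localhost:8001", nil))
}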
The web UI workflow is a bit more involved: we first export the profile to a file, and then use go tool pprof to serve it for analysis.
For example, export the heap profile of kube-scheduler:
curl -s http://localhost:10251/debug/pprof/heap > heap.out
Then start a local web server with go tool pprof:
go tool pprof -http=0.0.0.0:8989 heap.out
You can then see the detailed call graph in the browser.
Note that graphviz must be installed on the machine running the pprof server; installation instructions for the various operating systems are at https://www.graphviz.org/download/#linux
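If the program being profiled is your own Go service rather than a Kubernetes component, the same heap snapshot can also be written directly from code instead of fetched with curl. A minimal sketch using runtime/pprof (the heap.out filename just matches the example above):

package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("heap.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	runtime.GC() // flush up-to-date allocation statistics before dumping
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
	// Analyze it the same way: go tool pprof -http=0.0.0.0:8989 heap.out
}

In interactive mode, the help command lists everything pprof can do: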
(pprof) help
Commands:
callgrind Outputs a graph in callgrind format
comments Output all profile comments
disasm Output assembly listings annotated with samples
dot Outputs a graph in DOT format
eog Visualize graph through eog
evince Visualize graph through evince
gif Outputs a graph image in GIF format
gv Visualize graph through gv
kcachegrind Visualize report in KCachegrind
list Output annotated source for functions matching regexp
pdf Outputs a graph in PDF format
peek Output callers/callees of functions matching regexp
png Outputs a graph image in PNG format
proto Outputs the profile in compressed protobuf format
ps Outputs a graph in PS format
raw Outputs a text representation of the raw profile
svg Outputs a graph in SVG format
tags Outputs all tags in the profile
text Outputs top entries in text form
top Outputs top entries in text form
topproto Outputs top entries in compressed protobuf format
traces Outputs all profile samples in text form
tree Outputs a text rendering of call graph
web Visualize graph through web browser
weblist Display annotated source in a web browser
o/options List options and their current values
quit/exit/^D Exit pprof
Options:
call_tree Create a context-sensitive call tree
compact_labels Show minimal headers
divide_by Ratio to divide all samples before visualization
drop_negative Ignore negative differences
edgefraction Hide edges below <f>*total
focus Restricts to samples going through a node matching regexp
hide Skips nodes matching regexp
ignore Skips paths going through any nodes matching regexp
mean Average sample value over first value (count)
nodecount Max number of nodes to show
nodefraction Hide nodes below <f>*total
noinlines Ignore inlines.
normalize Scales profile based on the base profile.
output Output filename for file-based outputs
prune_from Drops any functions below the matched frame.
relative_percentages Show percentages relative to focused subgraph
sample_index Sample value to report (0-based index or name)
show Only show nodes matching regexp
show_from Drops functions above the highest matched frame.
source_path Search path for source files
tagfocus Restricts to samples with tags in range or matched by regexp
taghide Skip tags matching this regexp
tagignore Discard samples with tags in range or matched by regexp
tagshow Only consider tags matching this regexp
trim Honor nodefraction/edgefraction/nodecount defaults
trim_path Path to trim from source paths before search
unit Measurement units to display
Option groups (only set one per group):
cumulative
cum Sort entries based on cumulative weight
flat Sort entries based on own weight
granularity
addresses Aggregate at the address level.
filefunctions Aggregate at the function level.
files Aggregate at the file level.
functions Aggregate at the function level.
lines Aggregate at the source code line level.
: Clear focus/ignore/hide/tagfocus/tagignore
type "help " for more information
Enter the top 10 command to see the ten functions with the highest resource cost (for a CPU profile that is execution time; for a memory profile it is memory usage):
(pprof) top 10
Showing nodes accounting for 10.82s, 79.97% of 13.53s total
Dropped 64 nodes (cum <= 0.07s)
Showing top 10 nodes out of 61
flat flat% sum% cum cum%
7.50s 55.43% 55.43% 7.56s 55.88% runtime.stdcall1
1.88s 13.90% 69.33% 1.90s 14.04% runtime.cgocall
0.31s 2.29% 71.62% 2.25s 16.63% runtime.timerproc
0.26s 1.92% 73.54% 0.26s 1.92% runtime.(*mcache).prepareForSweep
0.20s 1.48% 75.02% 0.53s 3.92% runtime.acquirep
0.18s 1.33% 76.35% 0.18s 1.33% runtime.casgstatus
0.15s 1.11% 77.46% 1.18s 8.72% runtime.exitsyscall
0.12s 0.89% 78.34% 0.52s 3.84% runtime.goroutineReady
0.11s 0.81% 79.16% 8.79s 64.97% runtime.systemstack
0.11s 0.81% 79.97% 0.11s 0.81% runtime.unlock
flat: CPU time spent in the function itself
flat%: the function's own CPU time as a percentage of the total
sum%: the running sum of flat% for this row and all rows above it
cum: CPU time spent in the function plus everything it calls
cum%: that cumulative time as a percentage of the total
The last column is the function name.
In most cases this is enough to see how an application is behaving: which functions are hot, what they are doing, and how much of each resource they consume.
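To make the flat/cum distinction concrete, here is a small self-contained sketch (the function names are made up for the example): busyChild does all the real work, so it shows high flat time, while parent has almost no flat time but a cum time close to the whole run, because the time is spent in the function it calls.

package main

import (
	"log"
	"os"
	"runtime/pprof"
)

// busyChild burns CPU; in the profile it has high flat time.
func busyChild(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i * i
	}
	return sum
}

// parent does almost nothing itself (low flat) but calls busyChild in a loop,
// so its cum time covers nearly the whole run.
func parent() int {
	total := 0
	for i := 0; i < 2000; i++ {
		total += busyChild(1_000_000)
	}
	return total
}

func main() {
	f, err := os.Create("cpu.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	log.Println(parent())
}

Running go tool pprof cpu.out and typing top or tree should show busyChild near the top by flat% and parent near the top by cum%.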
# Troubleshooting abnormally high CPU usage of kube-controller-manager (over 100%)
# 1. First edit the kube-controller-manager manifest to enable profiling, i.e. set --profiling=true
# 2. Use pprof to collect a profile from port 10252
[root]# go tool pprof http://localhost:10252/debug/pprof/profile?seconds=60
Fetching profile over HTTP from http://localhost:10252/debug/pprof/profile?seconds=60
Saved profile in /root/pprof/pprof.kube-controller-manager.samples.cpu.002.pb.gz
File: kube-controller-manager
Type: cpu
Time: Jul 27, 2027 at 5:19am (CST)
Duration: 1mins, Total samples = 1.07mins (106.85%)
Entering interactive mode (type "help" for commands, "o" for options)
# top gives a fairly brief summary
(pprof) top
Showing nodes accounting for 27.60s, 42.92% of 64.31s total
Dropped 319 nodes (cum <= 0.32s)
Showing top 10 nodes out of 101
flat flat% sum% cum cum%
5.18s 8.05% 8.05% 5.62s 8.74% runtime.nanotime
4.03s 6.27% 14.32% 4.42s 6.87% runtime.walltime
3.99s 6.20% 20.53% 3.99s 6.20% aeshashbody
2.41s 3.75% 24.27% 9.15s 14.23% runtime.nilinterhash
2.34s 3.64% 27.91% 5.09s 7.91% runtime.mallocgc
2.17s 3.37% 31.29% 6.64s 10.32% runtime.typehash
2.11s 3.28% 34.57% 2.11s 3.28% k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).observe
1.96s 3.05% 37.61% 7.61s 11.83% runtime.mapdelete
1.71s 2.66% 40.27% 5.78s 8.99% runtime.mapassign
1.70s 2.64% 42.92% 6.42s 9.98% runtime.mapaccess2
(pprof) call_tree
# tree gives more detailed output, including each function's callers and callees
(pprof) tree
Showing nodes accounting for 58.49s, 90.95% of 64.31s total
Dropped 319 nodes (cum <= 0.32s)
Showing top 80 nodes out of 101
----------------------------------------------------------+-------------
flat flat% sum% cum cum% calls calls% + context
----------------------------------------------------------+-------------
3.73s 66.37% | time.now
1.85s 32.92% | time.Since
5.18s 8.05% 8.05% 5.62s 8.74% | runtime.nanotime
0.44s 7.83% | runtime.nanotime1
----------------------------------------------------------+-------------
4.42s 100% | time.now (inline)
4.03s 6.27% 14.32% 4.42s 6.87% | runtime.walltime
0.39s 8.82% | runtime.walltime1
----------------------------------------------------------+-------------
3.99s 100% | runtime.typehash
3.99s 6.20% 20.53% 3.99s 6.20% | aeshashbody
----------------------------------------------------------+-------------
3.27s 35.74% | runtime.mapaccess2
3.09s 33.77% | runtime.mapassign
2.79s 30.49% | runtime.mapdelete
2.41s 3.75% 24.27% 9.15s 14.23% | runtime.nilinterhash
6.64s 72.57% | runtime.typehash
----------------------------------------------------------+-------------
2.47s 48.53% | runtime.convTstring
1.43s 28.09% | runtime.growslice
1.16s 22.79% | runtime.rawstring
2.34s 3.64% 27.91% 5.09s 7.91% | runtime.mallocgc
0.69s 13.56% | runtime.(*mcentral).cacheSpan
0.65s 12.77% | runtime.heapBitsSetType
0.61s 11.98% | runtime.nextFreeFast (inline)
0.09s 1.77% | runtime.scanobject
0.07s 1.38% | runtime.gentraceback
----------------------------------------------------------+-------------
6.64s 100% | runtime.nilinterhash
2.17s 3.37% 31.29% 6.64s 10.32% | runtime.typehash
3.99s 60.09% | aeshashbody
0.48s 7.23% | runtime.strhash
----------------------------------------------------------+-------------
2.11s 100% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).Observe
2.11s 3.28% 34.57% 2.11s 3.28% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).observe
----------------------------------------------------------+-------------
3.96s 52.04% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.delete
1.48s 19.45% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).done
1.41s 18.53% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).get
0.76s 9.99% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*ItemExponentialFailureRateLimiter).Forget
1.96s 3.05% 37.61% 7.61s 11.83% | runtime.mapdelete
2.79s 36.66% | runtime.nilinterhash
1.38s 18.13% | runtime.nilinterequal
1.22s 16.03% | runtime.memclrHasPointers
----------------------------------------------------------+-------------
3.01s 52.08% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.insert
1.41s 24.39% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).add
1.36s 23.53% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).get
1.71s 2.66% 40.27% 5.78s 8.99% | runtime.mapassign
3.09s 53.46% | runtime.nilinterhash
0.54s 9.34% | runtime.typedmemmove
----------------------------------------------------------+-------------
3.44s 53.58% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.has
1.40s 21.81% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).get
1.15s 17.91% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).done
0.43s 6.70% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).add
1.70s 2.64% 42.92% 6.42s 9.98% | runtime.mapaccess2
3.27s 50.93% | runtime.nilinterhash
1.35s 21.03% | runtime.nilinterequal
----------------------------------------------------------+-------------
0.48s 28.57% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).ShuttingDown
0.36s 21.43% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Add
0.34s 20.24% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get
0.27s 16.07% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Done
0.23s 13.69% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*ItemExponentialFailureRateLimiter).Forget
1.68s 2.61% 45.53% 1.68s 2.61% | sync.(*Mutex).Unlock
----------------------------------------------------------+-------------
2.59s 96.64% | runtime.gcDrain
0.09s 3.36% | runtime.mallocgc
1.65s 2.57% 48.10% 2.68s 4.17% | runtime.scanobject
0.41s 15.30% | runtime.greyobject
0.39s 14.55% | runtime.findObject
----------------------------------------------------------+-------------
0.45s 28.66% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Done
0.41s 26.11% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get
0.37s 23.57% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).ShuttingDown
0.33s 21.02% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Add
1.57s 2.44% 50.54% 1.57s 2.44% | sync.(*Mutex).Lock
----------------------------------------------------------+-------------
1.19s 100% | runtime.efaceeq
0.90s 1.40% 51.94% 1.19s 1.85% | runtime.strequal
0.29s 24.37% | runtime.memequal
----------------------------------------------------------+-------------
2.53s 100% | runtime.concatstring3
0.86s 1.34% 53.27% 2.53s 3.93% | runtime.concatstrings
1.30s 51.38% | runtime.rawstringtmp
0.37s 14.62% | runtime.memmove
----------------------------------------------------------+-------------
1.38s 50.55% | runtime.mapdelete
1.35s 49.45% | runtime.mapaccess2
0.86s 1.34% 54.61% 2.73s 4.25% | runtime.nilinterequal
1.87s 68.50% | runtime.efaceeq
----------------------------------------------------------+-------------
26.82s 100% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).doUpdateClusterPhasePass
0.83s 1.29% 55.90% 26.82s 41.70% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).syncClusterPhaseWithLease
11.42s 42.58% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).AddAfter
4.94s 18.42% | k8s.io/kubernetes/vendor/k8s.io/client-go/listers/coordination/v1.leaseNamespaceLister.Get
4.83s 18.01% | time.Now
1.64s 6.11% | k8s.io/kubernetes/vendor/k8s.io/client-go/listers/mizarcore/v1alpha1.(*clusterLister).Get
1.49s 5.56% | runtime.convTstring
0.89s 3.32% | time.Time.Sub
0.51s 1.90% | time.Time.Add
----------------------------------------------------------+-------------
0.40s 50.63% | runtime.memclrHasPointers
0.31s 39.24% | runtime.(*mcentral).cacheSpan
0.79s 1.23% 57.13% 0.79s 1.23% | runtime.memclrNoHeapPointers
----------------------------------------------------------+-------------
1.42s 100% | sort.SearchFloat64s
0.79s 1.23% 58.36% 1.42s 2.21% | sort.Search
0.63s 44.37% | sort.SearchFloat64s.func1
----------------------------------------------------------+-------------
0.75s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*threadSafeMap).Get
0.75s 1.17% 59.52% 0.75s 1.17% | sync.(*RWMutex).RUnlock
----------------------------------------------------------+-------------
3.04s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*cache).GetByKey
0.71s 1.10% 60.63% 3.04s 4.73% | k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*threadSafeMap).Get
0.98s 32.24% | runtime.mapaccess2_faststr
0.75s 24.67% | sync.(*RWMutex).RUnlock
0.60s 19.74% | sync.(*RWMutex).RLock (inline)
----------------------------------------------------------+-------------
13.91s 100% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).doUpdateClusterPhasePass
0.71s 1.10% 61.73% 13.91s 21.63% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Done
7.02s 50.47% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).done
2.06s 14.81% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.delete
1.74s 12.51% | runtime.growslice
1.53s 11.00% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.has (inline)
0.45s 3.24% | sync.(*Mutex).Lock
0.27s 1.94% | sync.(*Mutex).Unlock
----------------------------------------------------------+-------------
0.51s 50.00% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).syncClusterPhaseWithLease
0.51s 50.00% | time.Time.Sub
0.71s 1.10% 62.84% 1.02s 1.59% | time.Time.Add
----------------------------------------------------------+-------------
0.37s 55.22% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*gauge).Inc
0.30s 44.78% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*gauge).Dec
0.66s 1.03% 63.86% 0.67s 1.04% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*gauge).Add
----------------------------------------------------------+-------------
0.98s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*threadSafeMap).Get
0.66s 1.03% 64.89% 0.98s 1.52% | runtime.mapaccess2_faststr
----------------------------------------------------------+-------------
0.33s 51.56% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).add
0.31s 48.44% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultRetryMetrics).retry
0.64s 1% 65.88% 0.64s 1% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*counter).Inc
----------------------------------------------------------+-------------
# Here doUpdateClusterPhasePass itself accounts for only 0.64s flat, but its downstream calls accumulate 26.82s in syncClusterPhaseWithLease alone (59.53s cum in total), a very large share of the CPU, which suggests something is wrong here, such as a busy loop (a hypothetical sketch of this requeue pattern follows the tree output)
# A later code review confirmed it: a faulty condition kept pushing update events back onto the work queue for processing, and that is what was burning the CPU
0.64s 1% 66.88% 59.53s 92.57% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).doUpdateClusterPhasePass
26.82s 45.05% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).syncClusterPhaseWithLease
15.15s 25.45% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get
13.91s 23.37% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Done
1.73s 2.91% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).handleLeaseErr
1.28s 2.15% | runtime.convTstring
----------------------------------------------------------+-------------
0.63s 100% | sort.Search
0.63s 0.98% 67.86% 0.63s 0.98% | sort.SearchFloat64s.func1
----------------------------------------------------------+-------------
0.61s 100% | runtime.mallocgc (inline)
0.61s 0.95% 68.81% 0.61s 0.95% | runtime.nextFreeFast
----------------------------------------------------------+-------------
0.60s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*threadSafeMap).Get (inline)
0.60s 0.93% 69.74% 0.60s 0.93% | sync.(*RWMutex).RLock
----------------------------------------------------------+-------------
0.59s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).sinceInSeconds (inline)
0.59s 0.92% 70.66% 0.59s 0.92% | time.Duration.Seconds
----------------------------------------------------------+-------------
4.83s 52.39% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).syncClusterPhaseWithLease
4.39s 47.61% | k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/clock.RealClock.Now
0.55s 0.86% 71.51% 9.22s 14.34% | time.Now
8.67s 94.03% | time.now
----------------------------------------------------------+-------------
1.87s 100% | runtime.nilinterequal
0.53s 0.82% 72.34% 1.87s 2.91% | runtime.efaceeq
1.19s 63.64% | runtime.strequal
----------------------------------------------------------+-------------
8.67s 100% | time.Now
0.52s 0.81% 73.15% 8.67s 13.48% | time.now
4.42s 50.98% | runtime.walltime (inline)
3.73s 43.02% | runtime.nanotime
----------------------------------------------------------+-------------
0.37s 75.51% | runtime.concatstrings
0.10s 20.41% | runtime.typedmemmove
0.49s 0.76% 73.91% 0.49s 0.76% | runtime.memmove
----------------------------------------------------------+-------------
0.48s 100% | runtime.typehash
0.48s 0.75% 74.65% 0.48s 0.75% | runtime.strhash
----------------------------------------------------------+-------------
1.79s 52.65% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).done
1.61s 47.35% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).get
0.47s 0.73% 75.38% 3.40s 5.29% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).sinceInSeconds
2.33s 68.53% | time.Since
0.59s 17.35% | time.Duration.Seconds (inline)
----------------------------------------------------------+-------------
0.48s 94.12% | runtime.memclrHasPointers
0.46s 0.72% 76.10% 0.51s 0.79% | runtime.bulkBarrierPreWrite
----------------------------------------------------------+-------------
10.32s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get
0.45s 0.7% 76.80% 10.32s 16.05% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).get
2.08s 20.16% | k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/clock.RealClock.Now
1.63s 15.79% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).Observe
1.61s 15.60% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).sinceInSeconds
1.41s 13.66% | runtime.mapdelete
1.40s 13.57% | runtime.mapaccess2
1.36s 13.18% | runtime.mapassign
0.38s 3.68% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*gauge).Dec
----------------------------------------------------------+-------------
0.65s 100% | runtime.mallocgc
0.45s 0.7% 77.50% 0.65s 1.01% | runtime.heapBitsSetType
----------------------------------------------------------+-------------
1.44s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*MaxOfRateLimiter).Forget
0.44s 0.68% 78.18% 1.44s 2.24% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*ItemExponentialFailureRateLimiter).Forget
0.76s 52.78% | runtime.mapdelete
0.23s 15.97% | sync.(*Mutex).Unlock
----------------------------------------------------------+-------------
0.44s 100% | runtime.nanotime
0.44s 0.68% 78.87% 0.44s 0.68% | runtime.nanotime1
----------------------------------------------------------+-------------
15.15s 100% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).doUpdateClusterPhasePass
0.41s 0.64% 79.51% 15.15s 23.56% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get
10.32s 68.12% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).get
1.99s 13.14% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.delete (inline)
1.68s 11.09% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.insert (inline)
0.41s 2.71% | sync.(*Mutex).Lock
0.34s 2.24% | sync.(*Mutex).Unlock
----------------------------------------------------------+-------------
0.54s 100% | runtime.mapassign
0.41s 0.64% 80.14% 0.54s 0.84% | runtime.typedmemmove
0.10s 18.52% | runtime.memmove
----------------------------------------------------------+-------------
1.97s 57.27% | k8s.io/kubernetes/vendor/k8s.io/client-go/listers/coordination/v1.leaseNamespaceLister.Get
1.47s 42.73% | k8s.io/kubernetes/vendor/k8s.io/client-go/listers/mizarcore/v1alpha1.(*clusterLister).Get
0.40s 0.62% 80.77% 3.44s 5.35% | k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*cache).GetByKey
3.04s 88.37% | k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*threadSafeMap).Get
----------------------------------------------------------+-------------
0.89s 82.41% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).syncClusterPhaseWithLease
0.19s 17.59% | time.Since
0.40s 0.62% 81.39% 1.08s 1.68% | time.Time.Sub
0.51s 47.22% | time.Time.Add
----------------------------------------------------------+-------------
0.39s 100% | runtime.walltime
0.39s 0.61% 81.99% 0.39s 0.61% | runtime.walltime1
----------------------------------------------------------+-------------
0.39s 84.78% | runtime.scanobject
0.38s 0.59% 82.58% 0.46s 0.72% | runtime.findObject
----------------------------------------------------------+-------------
0.29s 78.38% | runtime.strequal
0.37s 0.58% 83.16% 0.37s 0.58% | runtime.memequal
----------------------------------------------------------+-------------
1.22s 100% | runtime.mapdelete
0.34s 0.53% 83.69% 1.22s 1.90% | runtime.memclrHasPointers
0.48s 39.34% | runtime.bulkBarrierPreWrite
0.40s 32.79% | runtime.memclrNoHeapPointers
----------------------------------------------------------+-------------
4.94s 100% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).syncClusterPhaseWithLease
0.31s 0.48% 84.17% 4.94s 7.68% | k8s.io/kubernetes/vendor/k8s.io/client-go/listers/coordination/v1.leaseNamespaceLister.Get
2.66s 53.85% | runtime.concatstring3
1.97s 39.88% | k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*cache).GetByKey
----------------------------------------------------------+-------------
1.16s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).AddAfter
0.31s 0.48% 84.65% 1.16s 1.80% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).ShuttingDown
0.48s 41.38% | sync.(*Mutex).Unlock
0.37s 31.90% | sync.(*Mutex).Lock
----------------------------------------------------------+-------------
5.29s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Add
0.30s 0.47% 85.12% 5.29s 8.23% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).add
2.37s 44.80% | k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/clock.RealClock.Now
1.41s 26.65% | runtime.mapassign
0.44s 8.32% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*gauge).Inc
0.43s 8.13% | runtime.mapaccess2
0.33s 6.24% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*counter).Inc
----------------------------------------------------------+-------------
1.49s 53.79% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).syncClusterPhaseWithLease
1.28s 46.21% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).doUpdateClusterPhasePass
0.30s 0.47% 85.59% 2.77s 4.31% | runtime.convTstring
2.47s 89.17% | runtime.mallocgc
----------------------------------------------------------+-------------
2.33s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).sinceInSeconds
0.29s 0.45% 86.04% 2.33s 3.62% | time.Since
1.85s 79.40% | runtime.nanotime
0.19s 8.15% | time.Time.Sub
----------------------------------------------------------+-------------
7.02s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Done
0.26s 0.4% 86.44% 7.02s 10.92% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).done
2.34s 33.33% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).Observe
1.79s 25.50% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).sinceInSeconds
1.48s 21.08% | runtime.mapdelete
1.15s 16.38% | runtime.mapaccess2
----------------------------------------------------------+-------------
11.42s 100% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).syncClusterPhaseWithLease
0.25s 0.39% 86.83% 11.42s 17.76% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).AddAfter
9.66s 84.59% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Add
1.16s 10.16% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).ShuttingDown
0.35s 3.06% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultRetryMetrics).retry
----------------------------------------------------------+-------------
9.66s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).AddAfter
0.22s 0.34% 87.17% 9.66s 15.02% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Add
5.29s 54.76% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).add
2.03s 21.01% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.has (inline)
1.43s 14.80% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.insert (inline)
0.36s 3.73% | sync.(*Mutex).Unlock
0.33s 3.42% | sync.(*Mutex).Lock
----------------------------------------------------------+-------------
0.80s 91.95% | runtime.gcDrain
0.07s 8.05% | runtime.mallocgc
0.22s 0.34% 87.51% 0.87s 1.35% | runtime.gentraceback
0.07s 8.05% | runtime.greyobject
----------------------------------------------------------+-------------
1.74s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Done
0.22s 0.34% 87.86% 1.74s 2.71% | runtime.growslice
1.43s 82.18% | runtime.mallocgc
----------------------------------------------------------+-------------
2.34s 58.94% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).done
1.63s 41.06% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).get
0.17s 0.26% 88.12% 3.97s 6.17% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).Observe
2.11s 53.15% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).observe
1.69s 42.57% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).findBucket
----------------------------------------------------------+-------------
1.64s 100% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).syncClusterPhaseWithLease
0.17s 0.26% 88.38% 1.64s 2.55% | k8s.io/kubernetes/vendor/k8s.io/client-go/listers/mizarcore/v1alpha1.(*clusterLister).Get
1.47s 89.63% | k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*cache).GetByKey
----------------------------------------------------------+-------------
1.61s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*rateLimitingType).Forget
0.17s 0.26% 88.65% 1.61s 2.50% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*MaxOfRateLimiter).Forget
1.44s 89.44% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*ItemExponentialFailureRateLimiter).Forget
----------------------------------------------------------+-------------
1.59s 100% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).findBucket
0.17s 0.26% 88.91% 1.59s 2.47% | sort.SearchFloat64s
1.42s 89.31% | sort.Search
----------------------------------------------------------+-------------
2.66s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/listers/coordination/v1.leaseNamespaceLister.Get
0.13s 0.2% 89.12% 2.66s 4.14% | runtime.concatstring3
2.53s 95.11% | runtime.concatstrings
----------------------------------------------------------+-------------
2.03s 57.02% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Add (inline)
1.53s 42.98% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Done (inline)
0.12s 0.19% 89.30% 3.56s 5.54% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.has
3.44s 96.63% | runtime.mapaccess2
----------------------------------------------------------+-------------
0.12s 0.19% 89.49% 3.70s 5.75% | runtime.gcDrain
2.59s 70.00% | runtime.scanobject
0.80s 21.62% | runtime.gentraceback
----------------------------------------------------------+-------------
1.69s 100% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).Observe
0.10s 0.16% 89.64% 1.69s 2.63% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*histogram).findBucket
1.59s 94.08% | sort.SearchFloat64s
----------------------------------------------------------+-------------
1.68s 54.02% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get (inline)
1.43s 45.98% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Add (inline)
0.10s 0.16% 89.80% 3.11s 4.84% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.insert
3.01s 96.78% | runtime.mapassign
----------------------------------------------------------+-------------
2.06s 50.86% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Done
1.99s 49.14% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get (inline)
0.09s 0.14% 89.94% 4.05s 6.30% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.set.delete
3.96s 97.78% | runtime.mapdelete
----------------------------------------------------------+-------------
0.38s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).get
0.08s 0.12% 90.06% 0.38s 0.59% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*gauge).Dec
0.30s 78.95% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*gauge).Add
----------------------------------------------------------+-------------
1.69s 100% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).handleLeaseErr
0.08s 0.12% 90.19% 1.69s 2.63% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*rateLimitingType).Forget
1.61s 95.27% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*MaxOfRateLimiter).Forget
----------------------------------------------------------+-------------
0.41s 83.67% | runtime.scanobject
0.07s 14.29% | runtime.gentraceback
0.08s 0.12% 90.31% 0.49s 0.76% | runtime.greyobject
----------------------------------------------------------+-------------
1.24s 100% | runtime.rawstringtmp
0.08s 0.12% 90.44% 1.24s 1.93% | runtime.rawstring
1.16s 93.55% | runtime.mallocgc
----------------------------------------------------------+-------------
0.44s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).add
0.07s 0.11% 90.55% 0.44s 0.68% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*gauge).Inc
0.37s 84.09% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*gauge).Add
----------------------------------------------------------+-------------
2.37s 53.26% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).add
2.08s 46.74% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultQueueMetrics).get
0.06s 0.093% 90.64% 4.45s 6.92% | k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/clock.RealClock.Now
4.39s 98.65% | time.Now
----------------------------------------------------------+-------------
0.69s 100% | runtime.mallocgc
0.06s 0.093% 90.73% 0.69s 1.07% | runtime.(*mcentral).cacheSpan
0.31s 44.93% | runtime.memclrNoHeapPointers
----------------------------------------------------------+-------------
1.30s 100% | runtime.concatstrings
0.06s 0.093% 90.83% 1.30s 2.02% | runtime.rawstringtmp
1.24s 95.38% | runtime.rawstring
----------------------------------------------------------+-------------
0.35s 100% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).AddAfter
0.04s 0.062% 90.89% 0.35s 0.54% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*defaultRetryMetrics).retry
0.31s 88.57% | k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*counter).Inc
----------------------------------------------------------+-------------
1.73s 100% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).doUpdateClusterPhasePass
0.04s 0.062% 90.95% 1.73s 2.69% | k8s.io/kubernetes/vendor/k8s.io/kubernetes/pkg/controller/clusterstatusupdate.(*Controller).handleLeaseErr
1.69s 97.69% | k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*rateLimitingType).Forget
----------------------------------------------------------+-------------
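The controller source is not shown here, but the shape of the bug the profile points to can be sketched with a hypothetical worker loop (plain-Go stand-ins for the client-go workqueue; the toy queue type and shouldRequeue are illustrative, not the actual controller code): when the requeue condition never becomes false, every pass re-adds the same key with no delay, so the worker never goes idle and Get/Done/AddAfter plus the queue metrics dominate the CPU profile, much like the tree output above.

package main

import (
	"fmt"
	"time"
)

// queue is a toy stand-in for client-go's delaying workqueue: keys are
// delivered over a channel, and ranging over it plays the role of Get/Done.
type queue chan string

func (q queue) Add(key string)                       { q <- key }
func (q queue) AddAfter(key string, d time.Duration) { time.AfterFunc(d, func() { q <- key }) }

// shouldRequeue models the faulty condition: because of a logic bug it never
// becomes false, so the same key is re-added on every single pass.
func shouldRequeue(key string) bool { return true }

func worker(q queue) {
	for key := range q {
		// ... the sync logic for the cluster object would run here ...
		if shouldRequeue(key) {
			// Bug pattern: requeued with zero (or a tiny) delay on every pass,
			// so the loop spins continuously instead of waiting for real events.
			q.AddAfter(key, 0)
		}
	}
}

func main() {
	q := make(queue, 1)
	q.Add("cluster-a")
	go worker(q)
	time.Sleep(2 * time.Second) // let the hot loop run briefly, then exit
	fmt.Println("while this runs, one core stays busy just cycling the queue")
}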