
CPU usage displayed by the PodMetrics and NodeMetrics tables does not seem intuitive #1669

@googs1025

Description

The CPU column is shown in nanocores, like this:

~ kubectl get PodMetrics
NAME                               CPU         MEMORY    WINDOW
test-deployment-6d56d679c5-hc8lr   67131864n   71252Ki   15.009s
test-deployment-6d56d679c5-w2pvq   75122454n   77316Ki   15.01s

~ kubectl get NodeMetrics
NAME       CPU          MEMORY      WINDOW
minikube   401910564n   1762588Ki   20.162s

Normally we would use millicores instead of nanocores. This seems to be the relevant code:

func resourceUsage(last, prev MetricsPoint) (corev1.ResourceList, api.TimeInfo, error) {
	if last.StartTime.Before(prev.StartTime) {
		return corev1.ResourceList{}, api.TimeInfo{}, fmt.Errorf("unexpected decrease in startTime of node/container")
	}
	if last.CumulativeCpuUsed < prev.CumulativeCpuUsed {
		return corev1.ResourceList{}, api.TimeInfo{}, fmt.Errorf("unexpected decrease in cumulative CPU usage value")
	}
	window := last.Timestamp.Sub(prev.Timestamp)
	cpuUsage := float64(last.CumulativeCpuUsed-prev.CumulativeCpuUsed) / window.Seconds()
	return corev1.ResourceList{
		corev1.ResourceCPU:    uint64Quantity(uint64(cpuUsage), resource.DecimalSI, -9),
		corev1.ResourceMemory: uint64Quantity(last.MemoryUsage, resource.BinarySI, 0),
	}, api.TimeInfo{
		Timestamp: last.Timestamp,
		Window:    window,
	}, nil
}

// uint64Quantity converts a uint64 into a Quantity, which only has constructors
// that work with int64 (except for parse, which requires costly round-trips to string).
// We lose precision until we fit in an int64 if greater than the max int64 value.
func uint64Quantity(val uint64, format resource.Format, scale resource.Scale) resource.Quantity {
	q := *resource.NewScaledQuantity(int64(val), scale)
	if val > math.MaxInt64 {
		// lose an decimal order-of-magnitude precision,
		// so we can fit into a scaled quantity
		klog.V(2).InfoS("Found unexpectedly large resource value, losing precision to fit in scaled resource.Quantity", "value", val)
		q = *resource.NewScaledQuantity(int64(val/10), resource.Scale(1)+scale)
	}
	q.Format = format
	return q
}
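For reference, here is a small standalone sketch (my own example, using only k8s.io/apimachinery and the first CPU value from the output above) showing why the table prints nanocores, and what the same quantity looks like rescaled to millicores:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// 67131864 nanocores, as produced by uint64Quantity(cpuUsage, resource.DecimalSI, -9)
	// for the first pod in the example output above.
	q := *resource.NewScaledQuantity(67131864, resource.Nano)

	// The canonical string of a scale -9 DecimalSI quantity keeps the "n" suffix,
	// which is exactly what the PodMetrics table shows today.
	fmt.Println(q.String()) // 67131864n

	// ScaledValue(resource.Milli) rescales the same quantity to whole millicores, rounding up.
	fmt.Printf("%dm\n", q.ScaledValue(resource.Milli)) // 68m
}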

But I am not sure whether modifying the storage code would cause problems, since many consumers depend on PodMetrics and NodeMetrics. A better approach might be to hard-code the conversion in the table printer, changing the CPU unit from nanocores to millicores:

row := make([]interface{}, 0, len(names)+1)
row = append(row, pod.Name)
for _, name := range names {
	v := usage[v1.ResourceName(name)]
	row = append(row, v.String())
}
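As a rough sketch of that idea (a hypothetical helper, not actual metrics-server code), the loop could special-case CPU and render it in millicores while leaving memory and other resources untouched:

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cellForResource is a hypothetical helper: render the CPU cell in millicores
// instead of the Quantity's canonical nanocore string, and keep every other
// resource (e.g. memory) as-is.
func cellForResource(name v1.ResourceName, v resource.Quantity) string {
	if name == v1.ResourceCPU {
		// ScaledValue(resource.Milli) rounds up to whole millicores, e.g. 67131864n -> 68m.
		return fmt.Sprintf("%dm", v.ScaledValue(resource.Milli))
	}
	return v.String()
}

The loop body above would then become row = append(row, cellForResource(v1.ResourceName(name), usage[v1.ResourceName(name)])).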

Metadata

Labels: triage/accepted (indicates an issue or PR is ready to be actively worked on)
