Display | Metric | Threshold | Explanation |
--- | --- | --- | --- |
CPU | %RDY | 10 | Overprovisioning of vCPUs, excessive usage of vSMP, or a limit has been set (check %MLMTD). See Jason’s explanation for vSMP VMs.
CPU | %CSTP | 3 | Excessive usage of vSMP. Decrease the number of vCPUs for this particular VM; this should lead to increased scheduling opportunities.
CPU | %SYS | 20 | The percentage of time spent by system services on behalf of the world. Most likely caused by a VM with heavy I/O. Check other metrics and the VM for the possible root cause.
CPU | %MLMTD | 0 | The percentage of time the vCPU was ready to run but deliberately wasn’t scheduled because that would violate the “CPU limit” settings. If larger than 0 the world is being throttled due to the limit on CPU. |
CPU | %SWPWT | 5 | VM waiting on swapped pages to be read from disk. Possible cause: Memory overcommitment. |
MEM | MCTLSZ | 1 | If larger than 0, the host is forcing VMs to inflate the balloon driver to reclaim memory because the host is overcommitted.
MEM | SWCUR | 1 | If larger than 0, the host has swapped memory pages in the past. Possible cause: Overcommitment.
MEM | SWR/s | 1 | If larger than 0, the host is actively reading from swap (.vswp). Possible cause: Excessive memory overcommitment.
MEM | SWW/s | 1 | If larger than 0, the host is actively writing to swap (.vswp). Possible cause: Excessive memory overcommitment.
MEM | CACHEUSD | 0 | If larger than 0, the host has compressed memory. Possible cause: Memory overcommitment.
MEM | ZIP/s | 0 | If larger than 0, the host is actively compressing memory. Possible cause: Memory overcommitment.
MEM | UNZIP/s | 0 | If larger than 0, the host is accessing compressed memory. Possible cause: Previously the host was overcommitted on memory.
MEM | N%L | 80 | If less than 80, the VM experiences poor NUMA locality. If a VM has a memory size greater than the amount of memory local to each processor, the ESX scheduler does not attempt to use NUMA optimizations for that VM and the VM “remotely” uses memory via the interconnect. Check “GST_ND(X)” to find out which NUMA nodes are used.
NETWORK | %DRPTX | 1 | Dropped packets transmitted; hardware overworked. Possible cause: very high network utilization.
NETWORK | %DRPRX | 1 | Dropped packets received; hardware overworked. Possible cause: very high network utilization.
DISK | GAVG | 25 | Look at “DAVG” and “KAVG”, as GAVG = DAVG + KAVG.
DISK | DAVG | 25 | Disk latency most likely caused by the array.
DISK | KAVG | 2 | Disk latency caused by the VMkernel, high KAVG usually means queuing. Check “QUED”. |
DISK | QUED | 1 | Queue maxed out. Possibly the queue depth is set too low. Check with the array vendor for the optimal queue depth value.
DISK | ABRTS/s | 1 | Aborts issued by the guest (VM) because storage is not responding. For Windows VMs this happens after 60 seconds by default. Can be caused, for instance, when paths have failed or the array is not accepting any I/O for whatever reason.
DISK | RESETS/s | 1 | The number of commands reset per second. |
DISK | CONS/s | 20 | SCSI Reservation Conflicts per second. If many SCSI Reservation Conflicts occur, performance could be degraded due to the lock on the VMFS.
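
As a quick way to apply these thresholds, below is a minimal sketch in Python that scans esxtop batch-mode output (collected, for example, with `esxtop -b -d 5 -n 12 > esxtop.csv`) and flags samples that cross a few of the CPU thresholds from the table above. The column-name substrings (“% Ready”, “% CoStop”, and so on), the script name, and the CSV file name are assumptions on my part; verify them against the header row of your own capture before trusting the results.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag esxtop batch-mode counters that exceed the CPU
thresholds from the table above. The column-name substrings below are
assumptions; check them against the header row of your own esxtop CSV."""

import csv
import sys

# Metric substring -> threshold, taken from the table above.
THRESHOLDS = {
    "% Ready": 10.0,        # %RDY
    "% CoStop": 3.0,        # %CSTP
    "% System": 20.0,       # %SYS
    "% Max Limited": 0.0,   # %MLMTD
    "% Swap Wait": 5.0,     # %SWPWT
}

def check(csv_path):
    """Print every sample/counter combination that exceeds its threshold."""
    with open(csv_path, newline="") as fh:
        reader = csv.DictReader(fh)
        for sample_no, row in enumerate(reader, start=1):
            for column, raw in row.items():
                for metric, limit in THRESHOLDS.items():
                    if column and metric in column:
                        try:
                            value = float(raw)
                        except (TypeError, ValueError):
                            continue  # non-numeric cell, skip it
                        if value > limit:
                            print(f"sample {sample_no}: {column} = {value} "
                                  f"(threshold {limit})")

if __name__ == "__main__":
    check(sys.argv[1] if len(sys.argv) > 1 else "esxtop.csv")
```

Run it as `python3 check_esxtop.py esxtop.csv`; extending THRESHOLDS to the memory, network, and disk counters in the table is just a matter of adding entries once the matching column names are confirmed.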