KVM performance tuning

Set vm.swappiness to 10 so that swap is used only when system RAM usage reaches about 90%; on most Linux systems the default value is 60.

Checking the current swappiness value

 cat /proc/sys/vm/swappiness
To change the value at runtime, use
#sysctl -w vm.swappiness=10
vm.swappiness = 10

To make the changes permanent, add these lines to /etc/sysctl.conf:

kernel.sysrq = 1
vm.swappiness = 10
vm.vfs_cache_pressure = 50

Then load them with:

#sysctl -p

vm.vfs_cache_pressure controls how aggressively the kernel reclaims the memory used for directory and inode caches.
By default this value is 100; on my host it is set to 50.
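As with swappiness, the value can be inspected and changed through the same sysctl interface (a quick sketch; the runtime change needs root, so it is shown commented out):

```shell
# Read the current value (world-readable)
cat /proc/sys/vm/vfs_cache_pressure

# Lower it at runtime (requires root):
#   sysctl -w vm.vfs_cache_pressure=50
```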

Transparent Huge Pages (THP), a feature enabled by default on RHEL 7 systems, lets the kernel back memory with pages larger than the default 4 KB; these can be 2 MB or 1 GB pages, known as huge pages.
KVM guests provisioned to use THP can see improved performance through better hit rates in the Translation Lookaside Buffer (TLB).
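The page sizes involved can be checked from userspace. A small sketch (the /proc/meminfo field names can vary slightly by kernel build):

```shell
# Base page size in bytes (typically 4096 on x86_64)
getconf PAGE_SIZE

# Huge page size, and how much anonymous memory is currently backed by THP
grep -E 'Hugepagesize|AnonHugePages' /proc/meminfo \
    || echo "THP fields not present on this kernel"
```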

Transparent huge pages automatically tune memory management for performance by allowing all free memory to be used as cache.

Transparent huge pages usually provide only a small performance improvement, and for I/O-heavy databases such as Oracle or MongoDB they can cause considerable performance problems due to swapping of huge pages.

It is therefore recommended to disable THP for such workloads.
Check whether THP is enabled:

#cat /sys/kernel/mm/transparent_hugepage/enabled
#cat /sys/kernel/mm/transparent_hugepage/defrag

To disable it at runtime, use

#echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
#echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag

We can create a systemd service to disable THP at boot.

#here is the systemd unit file
#cat /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target


##reload the systemd daemon and enable the service
#sudo systemctl daemon-reload
#sudo systemctl enable --now disable-thp

Another way of disabling THP is to add a kernel parameter to the GRUB command line:
#GRUB_CMDLINE_LINUX="rhgb quiet video=SVIDEO-1:d console=tty0 console=ttyS0,115200n8 elevator=deadline transparent_hugepage=never"

##rebuild the grub config with grub2-mkconfig
#grub2-mkconfig -o /boot/grub2/grub.cfg

My server is mainly used for KVM performance testing and quite often hits its memory limits. For memory optimization we can use Kernel Same-page Merging (KSM), which combines identical memory pages from multiple running processes into one memory region. Because KVM guests run as ordinary Linux processes, KSM gives the hypervisor a memory overcommit capability for more efficient use of memory.

KVM allows overcommitting resources for the guests provisioned on a host. KSM has been observed to cause performance issues for some workloads, but if the server's main use is running KVM guests then KSM should be enabled, accepting a small performance compromise.

By default KSM is turned off on RHEL 6/7; it can be turned on with the ksm and ksmtuned services.

Checking if ksm is enabled on running kernel.

# grep KSM /boot/config-`uname -r`

If it is enabled, you will find the control files under /sys/kernel/mm/ksm.

Checking if the feature is turned on

#cat /sys/kernel/mm/ksm/run

As we can see it is disabled; to turn it on, use

#echo 1 > /sys/kernel/mm/ksm/run

# cat /sys/kernel/mm/ksm/run

Shared-page statistics can be checked with:

#ll /sys/kernel/mm/ksm/pages_* | awk '{print $9}' && cat /sys/kernel/mm/ksm/pages_*

KSM counters explained:
pages_shared - how many shared pages are in use
pages_sharing - how many additional sites are sharing them, i.e. how much is saved
pages_unshared - how many pages are unique but repeatedly checked for merging
pages_volatile - how many pages are changing too fast to be placed in a tree
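From these counters, the memory saved can be estimated as pages_sharing times the page size. A rough sketch, using a made-up example value (on a live host, read the real count from /sys/kernel/mm/ksm/pages_sharing):

```shell
# Example value; on a real host use:
#   pages_sharing=$(cat /sys/kernel/mm/ksm/pages_sharing)
pages_sharing=25000
page_size=4096   # typical x86_64 page size; confirm with `getconf PAGE_SIZE`

# Each entry in pages_sharing is a duplicate page that no longer needs its own copy
saved_kib=$(( pages_sharing * page_size / 1024 ))
echo "approx. ${saved_kib} KiB saved by KSM"
# -> approx. 100000 KiB saved by KSM
```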

If you plan to use KSM with a custom configuration, install and start the services.

#dnf install ksm

#systemctl enable ksm 

#systemctl status ksm 
● ksm.service - Kernel Samepage Merging
   Loaded: loaded (/usr/lib/systemd/system/ksm.service; enabled; vendor preset: enabled)
   Active: active (exited) since Sun 2018-04-08 01:01:02 IST; 8s ago
 Main PID: 22081 (code=exited, status=0/SUCCESS)
      CPU: 11ms

#systemctl start  ksmtuned

#systemctl enable   ksmtuned

#systemctl status   ksmtuned 
● ksmtuned.service - Kernel Samepage Merging (KSM) Tuning Daemon
   Loaded: loaded (/usr/lib/systemd/system/ksmtuned.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2018-04-08 01:01:31 IST; 13s ago
 Main PID: 22247 (ksmtuned)
   CGroup: /system.slice/ksmtuned.service
           ├─22247 /bin/bash /usr/sbin/ksmtuned
           └─22248 sleep 60

KSM can be tuned through the configuration file /etc/ksmtuned.conf.

KSM_SLEEP_MSEC controls how long ksmd sleeps between scans; smaller servers should use a longer sleep time.

 #grep -v '#' /etc/ksmtuned.conf
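For reference, a minimal /etc/ksmtuned.conf might look like the following; the values are illustrative (close to the shipped defaults on RHEL-family systems), not tuned recommendations:

```shell
# /etc/ksmtuned.conf -- illustrative values
KSM_MONITOR_INTERVAL=60
KSM_SLEEP_MSEC=10       # raise this on small servers so ksmd scans less often
KSM_NPAGES_BOOST=300
KSM_NPAGES_DECAY=-50
KSM_NPAGES_MIN=64
KSM_NPAGES_MAX=1250
KSM_THRES_COEF=20
KSM_THRES_CONST=2048
```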

##restart the ksmtuned service
# systemctl restart ksmtuned

If KSM is in use, Red Hat recommends setting the /sys/kernel/mm/ksm/merge_across_nodes tunable to 0 to avoid merging pages across NUMA nodes.

#echo '0' > /sys/kernel/mm/ksm/merge_across_nodes   
bash: echo: write error: Device or resource busy

#to fix the above error, first unmerge all pages (run=2), wait for the unmerge to finish, change the tunable, then re-enable KSM; see https://www.centos.org/forums/viewtopic.php?f=47&t=59749

# echo 2 > /sys/kernel/mm/ksm/run && sleep 300 && cat /sys/kernel/mm/ksm/pages_shared &&  echo 0 > /sys/kernel/mm/ksm/merge_across_nodes && echo 1 > /sys/kernel/mm/ksm/run

Non-Uniform Memory Access (NUMA) support is enabled by default on RHEL 7. The numad daemon can be used to automatically manage NUMA locality:

#systemctl start numad

#systemctl enable numad

#systemctl status numad
● numad.service - numad - The NUMA daemon that manages application locality.
   Loaded: loaded (/usr/lib/systemd/system/numad.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2018-04-08 01:44:11 IST; 14s ago
     Docs: man:numad
 Main PID: 2477 (numad)
   CGroup: /system.slice/numad.service
           └─2477 /usr/bin/numad -i 15

Automatic NUMA balancing improves the performance of applications running on NUMA hardware. To enable it and check its status:

##enable automatic numa balancing
#echo 1 > /proc/sys/kernel/numa_balancing
#cat /proc/sys/kernel/numa_balancing

Checking the current NUMA status of the qemu-kvm processes (none were running at the time, hence the warning):

#numastat -cm  qemu-kvm
Found no processes containing pattern: "qemu-kvm"

Per-node system memory usage (in MBs):
Token Node not in hash table.
Token Node not in hash table.
                 Node 0 Total
                 ------ -----
MemTotal           3935  3935
MemFree              29    29
MemUsed            3906  3906
Active             1689  1689
Inactive           1540  1540
Active(anon)       1241  1241
Inactive(anon)     1049  1049
Active(file)        448   448
Inactive(file)      491   491
Unevictable          38    38
Mlocked              38    38
Dirty                 0     0
Writeback             0     0
FilePages          1084  1084
Mapped              169   169
AnonPages          2197  2197
Shmem                69    69
KernelStack          12    12
PageTables           66    66
NFS_Unstable          0     0
Bounce                0     0
WritebackTmp          0     0
Slab                378   378
SReclaimable        265   265
SUnreclaim          113   113
AnonHugePages         0     0
HugePages_Total       0     0
HugePages_Free        0     0
HugePages_Surp        0     0

The NUMA topology of the host can be checked with lstopo (from the hwloc-gui package):

#dnf install hwloc-gui -y

# lstopo
Machine (3935MB)
  Package L#0 + L2 L#0 (2048KB)
    L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
    L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
  HostBridge L#0
    PCI 8086:2a02
      GPU L#0 "renderD128"
      GPU L#1 "card0"
      GPU L#2 "controlD64"
    PCI 8086:2a03
      PCI 14e4:4311
      PCI 14e4:4312
      PCI 14e4:1673
        Net L#3 "enp9s0"
    PCI 8086:2850
      Block(Disk) L#4 "sda"
    PCI 8086:2828
      Block(Disk) L#5 "sdb"
References:
https://blacksaildivision.com/how-to-disable-transparent-huge-pages-on-centos
https://blog.nelhage.com/post/transparent-hugepages/