

Saturday, January 5, 2013



In high-performance computing it is common for applications to hold all of their data in physical memory to meet performance requirements. When multiple processes communicate, shared memory is, as we know, the fastest form of IPC. A typical application of this kind initializes the shared memory by loading the data from disk into it. A question then arises - what is the maximum data size an application can hold in physical memory, without swapping of course? For the sake of discussion, given 100 GB of RAM, how many gigabytes of it can I allocate to my data at most, keeping in mind that the less I allocate, the more boxes I would need to split the data gallery across?

As a test, I wrote a small app to do what I described above. By the time it had loaded around 30 GB of data into RAM, the kernel started using swap. After loading 40 GB, swap usage grew rapidly, and at one point the box stopped responding and I had to physically bounce it (plug off and plug in again). This didn't make sense to me at the beginning. Why would the kernel swap when there is plenty of free RAM? I did a man proc and searched for "swap", and happened to read about /proc/sys/vm/swappiness - a parameter which defines the kernel's tendency to swap. The default value of swappiness on Ubuntu 8.04 is 60; roughly speaking, as the "used" RAM size reaches 60% of the total RAM size, the kernel begins to swap. In my case, 60% of 100 GB is 60 GB. But my data size was only 30 GB when the kernel started to swap. Where did the remaining 30 GB go, eventually leaving my box in a non-responsive state?! This intrigued me to search further. I could not find a direct relation between the data size and the memory required to hold it. A few more runs and close memory monitoring showed that the kernel caches almost all of the data that was used very recently.
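All the counters involved here are visible under /proc, so the monitoring can be scripted. A minimal sketch (the field names are those of /proc/meminfo on any Linux kernel of this era):

```shell
#!/bin/sh
# Snapshot the memory counters relevant here (all values in kB):
# MemFree = unused RAM, Cached = page cache, SwapFree = remaining swap.
awk '/^(MemTotal|MemFree|Cached|SwapTotal|SwapFree):/ {print $1, $2, $3}' /proc/meminfo

# The kernel's tendency to swap, as a 0-100 knob (60 by default here):
cat /proc/sys/vm/swappiness
```

Running this in a loop while the loader app fills the shared memory is enough to watch Cached balloon and SwapFree shrink.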
Thus, if an application has loaded 2 GB of data into memory, the kernel ends up accounting for 4 GB: 2 GB for the actual shared-memory data and another 2 GB of now-unused page cache through which the data was read and copied into the shared memory. On a typical server environment (runlevel 3), you wouldn't expect this to happen, since apart from the main apps no other applications are running (like yum-updatesd, vlc, rhythmbox, etc.), and one would expect the kernel to drop the unused cache immediately. The proc man page again showed one other important parameter - /proc/sys/vm/drop_caches. Writing to this entry instructs the kernel to drop the unused caches.
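The effect is easy to reproduce on a small scale. The sketch below (file name and size are arbitrary) pushes 64 MB through the page cache and shows the Cached counter growing by roughly that amount:

```shell
#!/bin/sh
# Illustrative only: write 64 MB of data and watch the page cache absorb it.
tmp=$(mktemp)
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)    # kB
dd if=/dev/zero of="$tmp" bs=1M count=64 2>/dev/null   # the data lands in the cache first
after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Cached grew by $(( (after - before) / 1024 )) MB"
rm -f "$tmp"
```

The same accounting applies to reads: pulling a cold file off the disk leaves a cached copy behind, which is exactly the "other half" of the memory that went missing above.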
To free pagecache:
 # echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
 # echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
 # echo 3 > /proc/sys/vm/drop_caches
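One caveat worth noting: drop_caches only releases clean (already written-back) pages, so it pays to flush dirty pages to disk first:

```shell
# Flush dirty pages first, then drop the clean caches (run as root):
sync
echo 3 > /proc/sys/vm/drop_caches
```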

When an application loads all of its data into memory during initialization and rarely reads the disk afterwards, drop_caches is a real boon. In my case above, I was able to load 90 GB of data into the shared memory and share it with the other processes. The technique was to clear the cache periodically while the application initialized:
 # while :; do echo 3 > /proc/sys/vm/drop_caches; sleep 30; done
As a rule of thumb, swappiness should be set to 0 (echo 0 > /proc/sys/vm/swappiness, or via sysctl.conf) before the application starts, and drop_caches should be set to 3 periodically to avoid swapping and the resulting performance degradation. Once the app has initialized and all 90 GB has been loaded into memory, the while loop that drops the unused cache is no longer needed and can be terminated safely. But the moment you do another huge file read, don't forget to run the script in the background again, as root of course. Whether you need drop_caches at all depends entirely on your application. In my opinion, setting swappiness to 0 is ideal for all server environments where only specific applications run on the system.
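For reference, the swappiness setting can also be applied through sysctl and made persistent across reboots (the value 0 follows the recommendation above; adjust to taste):

```shell
# Immediate, lost on reboot (run as root):
sysctl -w vm.swappiness=0        # same effect as: echo 0 > /proc/sys/vm/swappiness

# Persistent: append to /etc/sysctl.conf and reload
echo 'vm.swappiness = 0' >> /etc/sysctl.conf
sysctl -p
```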
