Entropic Thoughts

Limiting Process Memory with systemd-run

I have 32 gb of working memory on my laptop, of which I make 28 gb available to the Linux vm I work in. It seems like ghc sometimes eats far too much of this (when building a very template-heavy module), causing the system to start thrashing, which is not a pleasant experience.

I haven’t figured out how to make ghc consume less memory without changing the code (I’m thinking of biting the bullet and upgrading the machine to the full 64 gb it can take; 96 gb is also tempting but I see no reason for it), but I have found a way to avoid the unpleasantness of thrashing. We can create a script called m24gr containing

#!/bin/sh
# MemoryHigh throttles the unit and reclaims its memory aggressively above 22 gb;
# MemoryMax is a hard cap, beyond which the kernel oom-kills the unit.
systemd-run --scope -p MemoryMax=24G -p MemoryHigh=22G -- "$@"
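
To invoke it by name, the script also needs to be executable and somewhere on the path; something like the following, assuming ~/.local/bin is on $PATH.

chmod +x m24gr
mv m24gr ~/.local/bin/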

and then we can run memory-hungry commands like

m24gr cabal build

This starts the command as a transient, cgrouped systemd scope unit, which means it cannot allocate more than 24 gb of memory. That seems to leave enough headroom to keep the system running smoothly even if the hungry command itself gets oom killed.
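
To convince ourselves the limit really is applied, we can read the cgroup’s memory.max from inside the constrained command. This sketch assumes cgroups v2 (the default on recent Fedora), where /proc/self/cgroup contains a single line whose third field is the cgroup path.

m24gr sh -c 'cat /sys/fs/cgroup/$(cut -d: -f3 /proc/self/cgroup)/memory.max'

This should print 25769803776, i.e. 24 gb in bytes.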

I also discovered during my experimentation that Fedora sets the size of both swap and /tmp based on how much ram the computer has when it boots. Thus, even if we can configure Hyper-V with dynamic memory to assign only 2 gb to the vm at boot, we realistically probably want it to boot with a higher amount and then scale down to a minimum of 2 gb after that, to get reasonably sized swap and /tmp.
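
To see what the boot-time ram actually translated into, these two commands are enough; the comments describe Fedora’s defaults as I understand them.

swapon --show   # zram swap, sized from the ram available at boot
df -h /tmp      # tmpfs /tmp, by default half of the ram available at boot

If scaling the vm’s boot memory is not an option, the sizes can instead be pinned: swap via the zram-size setting in /etc/systemd/zram-generator.conf, and /tmp via a size= mount option added with systemctl edit tmp.mount.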