How to fill 90% of the free memory?
I want to do some low-resource testing, and for that I need to have 90% of the free memory full. How can I do this on a *nix system?

Tags: memory, testing
3
Does it really have to work on any *nix system?
– a CVn, Nov 8 '13 at 12:31

30
Instead of just filling memory, could you instead create a VM (using Docker, or Vagrant, or something similar) that has a limited amount of memory?
– abendigo, Nov 8 '13 at 13:27

4
@abendigo For a QA many of the solutions presented here are useful: for a general-purpose OS without a specific platform the VM or kernel boot parameters could be useful, but for an embedded system where you know the memory specification of the targeted system I would go for filling the free memory.
– Eduard Florinescu, Nov 9 '13 at 17:40

2
In case anyone else is a little shocked by the scoring here: meta.unix.stackexchange.com/questions/1513/…
– goldilocks, Nov 13 '13 at 14:46

See also: unix.stackexchange.com/a/1368/52956
– Wilf, Jun 18 '15 at 18:42
asked Nov 8 '13 at 10:13 by Eduard Florinescu; edited Jan 30 '14 at 12:31
12 Answers
stress is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:

stress --vm-bytes $(awk '/MemFree/ {printf "%d\n", $2 * 0.9}' < /proc/meminfo)k --vm-keep -m 1

For Linux >= 3.14, you may use MemAvailable instead to estimate the memory available for new processes without swapping:

stress --vm-bytes $(awk '/MemAvailable/ {printf "%d\n", $2 * 0.9}' < /proc/meminfo)k --vm-keep -m 1

Adapt the /proc/meminfo call with free(1)/vm_stat(1)/etc. if you need it portable.

answered Nov 8 '13 at 17:40 by tkrennwa; edited Feb 8 '18 at 9:32
3
stress --vm-bytes $(awk '/MemFree/ {printf "%d\n", $2 * 0.097}' < /proc/meminfo)k --vm-keep -m 10
– Robert, Oct 23 '15 at 16:47

1
Most of MemFree is kept by the OS, so I used MemAvailable instead. This gave me 92% usage on CentOS 7: stress --vm-bytes $(awk '/MemAvailable/ {printf "%d\n", $2 * 0.98}' < /proc/meminfo)k --vm-keep -m 1
– kujiy, Feb 8 '18 at 0:36

Good to know: MemAvailable was added to "estimate of how much memory is available for starting new applications, without swapping", see git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/… and git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/…
– tkrennwa, Feb 8 '18 at 9:11

Just as an added note, providing both --vm 1 and --vm-keep is very important. Simply --vm-bytes does nothing, and you might be misled into thinking you can allocate as much memory as you need/want. I got bit by this until I tried to sanity-check myself by allocating 256G of memory. This is not a flaw in the answer, it provides the correct flags, just an additional caution.
– ffledgling, Mar 26 at 12:56

This is why there is -m 1. According to the stress manpage, -m N is short for --vm N: spawn N workers spinning on malloc()/free().
– tkrennwa, Mar 27 at 3:03
You can write a C program to malloc() the required memory and then use mlock() to prevent the memory from being swapped out.
Then just let the program wait for keyboard input; unlock the memory, free it, and exit.

answered Nov 8 '13 at 12:36 by Chris; edited Feb 25 '16 at 8:46 by heemayl
25
A long time back I had to test a similar use case. I observed that until you write something to that memory it will not actually be allocated (i.e. until a page fault happens). I am not sure whether mlock() takes care of that.
– Poorna, Nov 8 '13 at 13:31

2
I concur with @siri; however, it depends on which UNIX variant you are using.
– Anthony, Nov 8 '13 at 13:34

1
Some inspiration for the code. Furthermore, I think you don't need to unlock/free the memory. The OS is going to do that for you when your process has ended.
– Sebastian, Nov 8 '13 at 13:44

8
You probably have to actually write to the memory; the kernel might just overcommit if you only malloc it. If configured to, e.g. Linux will let malloc return successfully without actually having the memory free, and only actually allocate the memory when it is written to. See win.tue.nl/~aeb/linux/lk/lk-9.html
– Bjarke Freund-Hansen, Nov 8 '13 at 14:32

6
@Sebastian: calloc will run into the same problem IIRC. All the memory will just point to the same read-only zeroed page. It won't actually get allocated until you try to write to it (which won't work since it is read-only). The only way of being really sure that I know of is to do a memset of the whole buffer. See the following answer for more info: stackoverflow.com/a/2688522/713554
– Leo, Nov 8 '13 at 16:43
I would suggest running a VM with limited memory and testing the software in that; it would be a more efficient test than trying to fill memory on the host machine.
That method also has the advantage that if the low-memory situation causes OOM errors elsewhere and hangs the whole OS, you only hang the VM you are testing in, not the machine that might have other useful processes running on it.
Also, if your testing is not CPU- or IO-intensive, you could concurrently run instances of the tests on a family of VMs with a variety of low memory sizes.
From this HN comment: https://news.ycombinator.com/item?id=6695581
Just fill /dev/shm via dd or similar.
swapoff -a
dd if=/dev/zero of=/dev/shm/fill bs=1k count=1024k
7
Not all *nixes have /dev/shm. Any more portable idea?
– Tadeusz A. Kadłubowski, Nov 8 '13 at 12:24

If pv is installed, it helps to see the count: dd if=/dev/zero bs=1024 | pv -b -B 1024 | dd of=/dev/shm/fill bs=1024
– Otheus, Sep 26 '17 at 20:01

1
If you want speed, this method is the right choice, because it allocates the desired amount of RAM in a matter of seconds. Don't rely on /dev/urandom; it will use 100% of CPU and take several minutes if your RAM is big. That said, /dev/shm has a relative size in modern Ubuntu/Debian distros: it defaults to 50% of physical RAM. Hopefully you can remount /dev/shm or maybe create a new mount point. Just make sure it has the actual size you want to allocate.
– develCuy, Dec 8 '17 at 19:25
- run Linux;
- boot with the mem=nn[KMG] kernel boot parameter (look in linux/Documentation/kernel-parameters.txt for details).
If you have basic GNU tools (sh, grep, yes and head) you can do this:

yes | tr '\n' x | head -c $BYTES | grep n
# Protip: use `head -c $((1024*1024*2))` to calculate 2MB easily

This works because grep loads the entire line of data in RAM (I learned this in a rather unfortunate way when grepping a disk image). The line, generated by yes with its newlines replaced, would be infinitely long, but head limits it to $BYTES bytes, so grep will load $BYTES in memory. Grep itself uses about 100-200 KB for me; you might need to subtract that for a more precise amount.
If you want to also add a time constraint, this can be done quite easily in bash (it will not work in sh):

cat <(yes | tr '\n' x | head -c $BYTES) <(sleep $NumberOfSeconds) | grep n

The <(command) construct seems to be little known but is often extremely useful; more info on it here: http://tldp.org/LDP/abs/html/process-sub.html

Then for the use of cat: cat will wait for its inputs to complete before exiting, and by keeping one of the pipes open, it will keep grep alive.

If you have pv and want to slowly increase RAM use:

yes | tr '\n' x | head -c $BYTES | pv -L $BYTESPERSEC | grep n

For example:

yes | tr '\n' x | head -c $((1024*1024*1024)) | pv -L $((1024*1024)) | grep n

will use up to a gigabyte at a rate of 1 MB per second. As an added bonus, pv will show you the current rate of use and the total use so far. Of course this can also be done with the previous variants:

yes | tr '\n' x | head -c $BYTES | pv | grep n

Just inserting the | pv | part will show you the current status (throughput and total, by default, I think; otherwise see the man(ual) page).
Why another answer? The accepted answer recommends installing a package (I bet there's a release for every chipset without needing a package manager); the top voted answer recommends compiling a C program (I did not have a compiler or toolchain installed to compile for your target platform); the second top voted answer recommends running the application in a VM (yeah let me just dd this phone's internal sdcard over usb or something and create a virtualbox image); the third suggests modifying something in the boot sequence which does not fill the RAM as desired; the fourth only works in so far as the /dev/shm mountpoint (1) exists and (2) is large (remounting needs root); the fifth combines many of the above without sample code; the sixth is a great answer but I did not see this answer before coming up with my own approach, so I thought I'd add my own, also because it's shorter to remember or type over if you don't see that the memblob line is actually the crux of the matter; the seventh again does not answer the question (uses ulimit to limit a process instead); the eighth tries to get you to install python; the ninth thinks we're all very uncreative and finally the tenth wrote his own C++ program which causes the same issue as the top voted answer.
Lovely solution. The only glitch is that the exit code of the construct is 1, because grep does not find a match. None of the solutions from stackoverflow.com/questions/6550484/… seem to fix it.
– Holger Brandl, May 5 '16 at 18:50

@HolgerBrandl Good point, I wouldn't know how to fix that. This is the first time I heard of set -e, so I just learned something :)
– Luc, May 5 '16 at 19:51

$SECONDS does not seem a good choice since it's a built-in variable reflecting the time since the shell was started. See tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_02.html
– Holger Brandl, May 10 '16 at 9:42

@HolgerBrandl Good catch, I didn't know that. Kinda cool to find a terminal that's been open for >3 million seconds currently :D. I updated the post.
– Luc, May 11 '16 at 7:40

Cool technique! time yes | tr '\n' x | head -c $((1024*1024*1024*10)) | grep n (use 10 GiB memory) takes 1 minute 46 seconds. Running julman99's eatmemory program at github.com/julman99/eatmemory takes 6 seconds. ...Well, plus the download and compile time, but it compiled with no issue... and very quickly... on my RHEL 6.4 machine. Still, I like this solution. Why reinvent the wheel?
– Mike S, Apr 7 '17 at 20:53
I keep a function to do something similar in my dotfiles: https://github.com/sagotsky/.dotfiles/blob/master/.functions#L248

function malloc() {
  N=$(free -m | awk '/Mem:/ {print int($2/10)}')
  if [[ $N -gt $1 ]]; then
    N=$1
  fi
  sh -c "MEMBLOB=\$(dd if=/dev/urandom bs=1MB count=$N) ; sleep 1"
}
1
This is the nicest solution IMHO, as it essentially only needs dd to work; all the other stuff can be worked around in any shell. Note that it actually claims twice the memory that dd produces, at least temporarily. Tested on Debian 9, dash 0.5.8-2.4. If you use bash for running the MEMBLOB part, it becomes really slow and uses four times the amount that dd produces.
– P.Péter, Oct 16 '18 at 7:46
How about a simple Python solution?

#!/usr/bin/env python
import sys
import time

if len(sys.argv) != 2:
    print "usage: fillmem <number-of-megabytes>"
    sys.exit()

count = int(sys.argv[1])
megabyte = (0,) * (1024 * 1024 / 8)
data = megabyte * count

while True:
    time.sleep(1)
7
That will probably quickly be swapped out, having very little actual impact on memory pressure (unless you fill up all the swap as well, which will take a while, usually).
– Joachim Sauer, Nov 8 '13 at 13:22

1
Why would a Unix swap while there is available RAM? This is actually a plausible way to evict disk cache when need be.
– Alexander Shcheblikin, Nov 8 '13 at 23:04

@AlexanderShcheblikin This question isn't about evicting disk cache (which is useful for performance testing but not for low-resources testing).
– Gilles, Nov 9 '13 at 14:40

1
This solution worked to cobble up a gig or two in my tests, though I didn't try to stress my memory. But, @JoachimSauer, one could set sysctl vm.swappiness=0 and furthermore set vm.min_free_kbytes to a small number, maybe 1024. I haven't tried it, but the docs say that this is how you control the quickness of swapping out... you should be able to make it quite slow indeed, to the point of causing an OOM condition on your machine. See kernel.org/doc/Documentation/sysctl/vm.txt and kernel.org/doc/gorman/html/understand/understand005.html
– Mike S, Apr 4 '17 at 20:03
How about ramfs, if it exists? Mount it and copy over a large file.
If there's no /dev/shm and no ramfs, I guess a tiny C program that does a large malloc based on some input value could do it. You might have to run it a few times at once on a 32-bit system with a lot of memory.
If you want to test a particular process with limited memory you might be better off using ulimit
to restrict the amount of allocatable memory.
2
Actually this does not work on Linux (dunno about other *nixes). man setrlimit: "RLIMIT_RSS Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED."
– Patrick, Nov 8 '13 at 13:46
I think this is a case of asking the wrong question and sanity being drowned out by people competing for the most creative answer. If you only need to simulate OOM conditions, you don't need to fill memory. Just use a custom allocator and have it fail after a certain number of allocations. This approach seems to work well enough for SQLite.
I wrote this little C++ program for that: https://github.com/rmetzger/dynamic-ballooner
The advantage of this implementation is that it periodically checks whether it needs to free or re-allocate memory.
protected by Gilles Nov 9 '13 at 14:37
12 Answers
12
active
oldest
votes
12 Answers
12
active
oldest
votes
active
oldest
votes
active
oldest
votes
stress is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.9;' < /proc/meminfo)k --vm-keep -m 1
For Linux >= 3.14, you may use MemAvailable
instead to estimate available memory for new processes without swapping:
stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.9;' < /proc/meminfo)k --vm-keep -m 1
Adapt the /proc/meminfo
call with free(1)
/vm_stat(1)
/etc. if you need it portable.
3
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.097;' < /proc/meminfo)k --vm-keep -m 10
– Robert
Oct 23 '15 at 16:47
1
Most of MemFree is kept by OS, so I used MemAvailable instead. This gave me 92% usage on Cent OS 7.stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.98;' < /proc/meminfo)k --vm-keep -m 1
– kujiy
Feb 8 '18 at 0:36
good to know, MemAvailable was added to "estimate of how much memory is available for starting new applications, without swapping", see git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/… and git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/…
– tkrennwa
Feb 8 '18 at 9:11
Just as an added note, providing both--vm 1 and --vm-keep
are very important. Simply--vm-bytes
does nothing and you might be misled into think you can allocate as much memory as you need/want. I got bit by this until I tried to sanity check myself by allocation 256G of memory. This is not a flaw in the answer, it provides the correct flags, just an additional caution.
– ffledgling
Mar 26 at 12:56
This is why there is-m 1
. According to the stress manpage,-m N
is short for--vm N
: spawnN
workers spinning onmalloc()/free()
– tkrennwa
Mar 27 at 3:03
add a comment |
stress is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.9;' < /proc/meminfo)k --vm-keep -m 1
For Linux >= 3.14, you may use MemAvailable
instead to estimate available memory for new processes without swapping:
stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.9;' < /proc/meminfo)k --vm-keep -m 1
Adapt the /proc/meminfo
call with free(1)
/vm_stat(1)
/etc. if you need it portable.
3
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.097;' < /proc/meminfo)k --vm-keep -m 10
– Robert
Oct 23 '15 at 16:47
1
Most of MemFree is kept by OS, so I used MemAvailable instead. This gave me 92% usage on Cent OS 7.stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.98;' < /proc/meminfo)k --vm-keep -m 1
– kujiy
Feb 8 '18 at 0:36
good to know, MemAvailable was added to "estimate of how much memory is available for starting new applications, without swapping", see git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/… and git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/…
– tkrennwa
Feb 8 '18 at 9:11
Just as an added note, providing both--vm 1 and --vm-keep
are very important. Simply--vm-bytes
does nothing and you might be misled into think you can allocate as much memory as you need/want. I got bit by this until I tried to sanity check myself by allocation 256G of memory. This is not a flaw in the answer, it provides the correct flags, just an additional caution.
– ffledgling
Mar 26 at 12:56
This is why there is-m 1
. According to the stress manpage,-m N
is short for--vm N
: spawnN
workers spinning onmalloc()/free()
– tkrennwa
Mar 27 at 3:03
add a comment |
stress is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.9;' < /proc/meminfo)k --vm-keep -m 1
For Linux >= 3.14, you may use MemAvailable
instead to estimate available memory for new processes without swapping:
stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.9;' < /proc/meminfo)k --vm-keep -m 1
Adapt the /proc/meminfo
call with free(1)
/vm_stat(1)
/etc. if you need it portable.
stress is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.9;' < /proc/meminfo)k --vm-keep -m 1
For Linux >= 3.14, you may use MemAvailable
instead to estimate available memory for new processes without swapping:
stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.9;' < /proc/meminfo)k --vm-keep -m 1
Adapt the /proc/meminfo
call with free(1)
/vm_stat(1)
/etc. if you need it portable.
edited Feb 8 '18 at 9:32
answered Nov 8 '13 at 17:40
tkrennwatkrennwa
2,64511013
2,64511013
3
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.097;' < /proc/meminfo)k --vm-keep -m 10
– Robert
Oct 23 '15 at 16:47
1
Most of MemFree is kept by OS, so I used MemAvailable instead. This gave me 92% usage on Cent OS 7.stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.98;' < /proc/meminfo)k --vm-keep -m 1
– kujiy
Feb 8 '18 at 0:36
good to know, MemAvailable was added to "estimate of how much memory is available for starting new applications, without swapping", see git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/… and git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/…
– tkrennwa
Feb 8 '18 at 9:11
Just as an added note, providing both--vm 1 and --vm-keep
are very important. Simply--vm-bytes
does nothing and you might be misled into think you can allocate as much memory as you need/want. I got bit by this until I tried to sanity check myself by allocation 256G of memory. This is not a flaw in the answer, it provides the correct flags, just an additional caution.
– ffledgling
Mar 26 at 12:56
This is why there is-m 1
. According to the stress manpage,-m N
is short for--vm N
: spawnN
workers spinning onmalloc()/free()
– tkrennwa
Mar 27 at 3:03
add a comment |
3
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.097;' < /proc/meminfo)k --vm-keep -m 10
– Robert
Oct 23 '15 at 16:47
1
Most of MemFree is kept by OS, so I used MemAvailable instead. This gave me 92% usage on Cent OS 7.stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.98;' < /proc/meminfo)k --vm-keep -m 1
– kujiy
Feb 8 '18 at 0:36
good to know, MemAvailable was added to "estimate of how much memory is available for starting new applications, without swapping", see git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/… and git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/…
– tkrennwa
Feb 8 '18 at 9:11
Just as an added note, providing both--vm 1 and --vm-keep
are very important. Simply--vm-bytes
does nothing and you might be misled into think you can allocate as much memory as you need/want. I got bit by this until I tried to sanity check myself by allocation 256G of memory. This is not a flaw in the answer, it provides the correct flags, just an additional caution.
– ffledgling
Mar 26 at 12:56
This is why there is-m 1
. According to the stress manpage,-m N
is short for--vm N
: spawnN
workers spinning onmalloc()/free()
– tkrennwa
Mar 27 at 3:03
3
3
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.097;' < /proc/meminfo)k --vm-keep -m 10
– Robert
Oct 23 '15 at 16:47
stress --vm-bytes $(awk '/MemFree/printf "%dn", $2 * 0.097;' < /proc/meminfo)k --vm-keep -m 10
– Robert
Oct 23 '15 at 16:47
1
1
Most of MemFree is kept by OS, so I used MemAvailable instead. This gave me 92% usage on Cent OS 7.
stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.98;' < /proc/meminfo)k --vm-keep -m 1
– kujiy
Feb 8 '18 at 0:36
Most of MemFree is kept by OS, so I used MemAvailable instead. This gave me 92% usage on Cent OS 7.
stress --vm-bytes $(awk '/MemAvailable/printf "%dn", $2 * 0.98;' < /proc/meminfo)k --vm-keep -m 1
– kujiy
Feb 8 '18 at 0:36
good to know, MemAvailable was added to "estimate of how much memory is available for starting new applications, without swapping", see git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/… and git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/…
– tkrennwa
Feb 8 '18 at 9:11
good to know, MemAvailable was added to "estimate of how much memory is available for starting new applications, without swapping", see git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/… and git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/…
– tkrennwa
Feb 8 '18 at 9:11
Just as an added note, providing both
--vm 1 and --vm-keep
are very important. Simply --vm-bytes
does nothing and you might be misled into think you can allocate as much memory as you need/want. I got bit by this until I tried to sanity check myself by allocation 256G of memory. This is not a flaw in the answer, it provides the correct flags, just an additional caution.– ffledgling
Mar 26 at 12:56
Just as an added note, providing both
--vm 1 and --vm-keep
are very important. Simply --vm-bytes
does nothing and you might be misled into think you can allocate as much memory as you need/want. I got bit by this until I tried to sanity check myself by allocation 256G of memory. This is not a flaw in the answer, it provides the correct flags, just an additional caution.– ffledgling
Mar 26 at 12:56
This is why there is
-m 1
. According to the stress manpage, -m N
is short for --vm N
: spawn N
workers spinning on malloc()/free()
– tkrennwa
Mar 27 at 3:03
This is why there is
-m 1
. According to the stress manpage, -m N
is short for --vm N
: spawn N
workers spinning on malloc()/free()
– tkrennwa
Mar 27 at 3:03
add a comment |
You can write a C program to malloc()
the required memory and then use mlock()
to prevent the memory from being swapped out.
Then just let the program wait for keyboard input, and unlock the memory, free the memory and exit.
25
Long time back I had to test similar use case. I observed that until you write something to that memory it will not be actually allocated(i.e. until page fault happens) . I am not sure whether mlock() take cares of that.
– Poorna
Nov 8 '13 at 13:31
2
I concur with @siri; however, it depends on which variant UNIX you are using.
– Anthony
Nov 8 '13 at 13:34
1
Some inspiration for the code. Furthermore, I think you don't need to unlock/free the memory. The OS is going to do that for you when your process has ended.
– Sebastian
Nov 8 '13 at 13:44
8
You probably have to actually write to the memory, the kernel might just overcommit if you only malloc it. If configured to, e.g. Linux will let malloc return successfully without actually having the memory free, and only actually allocate the memory when it is being written to. See win.tue.nl/~aeb/linux/lk/lk-9.html
– Bjarke Freund-Hansen
Nov 8 '13 at 14:32
6
@Sebastian:calloc
will run into the same problem IIRC. All the memory will just point to the same read-only zeroed page. It won't actually get allocated until you try to write to it (which won't work since it is read-only). The only way of being really sure that I know is to do amemset
of the whole buffer. See the following answer for more info stackoverflow.com/a/2688522/713554
– Leo
Nov 8 '13 at 16:43
|
show 2 more comments
I would suggest that running a VM with limited memory and testing the software in that would be a more efficient test than trying to fill memory on the host machine.
That method also has the advantage that if the low-memory situation causes OOM errors elsewhere and hangs the whole OS, you only hang the VM you are testing in, not the machine where you might have other useful processes running.
Also, if your testing is not CPU- or IO-intensive, you could concurrently run instances of the tests on a family of VMs with a variety of low memory sizes.
From this HN comment: https://news.ycombinator.com/item?id=6695581
Just fill /dev/shm via dd or similar.
swapoff -a
dd if=/dev/zero of=/dev/shm/fill bs=1k count=1024k
Not all *nixes have /dev/shm. Any more portable idea?
– Tadeusz A. Kadłubowski
Nov 8 '13 at 12:24
If pv is installed, it helps to see the count: dd if=/dev/zero bs=1024 | pv -b -B 1024 | dd of=/dev/shm/fill bs=1024
– Otheus
Sep 26 '17 at 20:01
If you want speed, this method is the right choice, because it allocates the desired amount of RAM in a matter of seconds. Don't rely on /dev/urandom: it will use 100% of the CPU and take several minutes if your RAM is big. However, /dev/shm has a relative size in modern Ubuntu/Debian distros, defaulting to 50% of physical RAM. You can remount /dev/shm or create a new mount point; just make sure it has the actual size you want to allocate.
– develCuy
Dec 8 '17 at 19:25
- run Linux;
- boot with the mem=nn[KMG] kernel boot parameter (see linux/Documentation/kernel-parameters.txt for details).
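For instance, on a GRUB-based distribution you might cap the kernel at 512 MiB like this. The value and file path are illustrative; the exact file and the command to regenerate the config (update-grub, grub2-mkconfig, ...) vary by distribution.

```shell
# /etc/default/grub -- illustrative fragment; regenerate the GRUB
# config after editing (e.g. update-grub on Debian/Ubuntu).
GRUB_CMDLINE_LINUX_DEFAULT="quiet mem=512M"
```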
If you have basic GNU tools (sh, grep, yes and head) you can do this:
yes | tr '\n' x | head -c $BYTES | grep n
# Protip: use `head -c $((1024*1024*2))` to calculate 2MB easily
This works because grep loads the entire line of data into RAM (I learned this in a rather unfortunate way when grepping a disk image). The line generated by yes, with its newlines replaced, would be infinitely long, but head limits it to $BYTES bytes, so grep holds $BYTES in memory. grep itself uses about 100-200 KB for me; you might need to subtract that for a more precise amount.
If you also want to add a time constraint, this can be done quite easily in bash (it will not work in sh):
cat <(yes | tr '\n' x | head -c $BYTES) <(sleep $NumberOfSeconds) | grep n
The <(command) construct seems to be little known but is often extremely useful; more info on it here: http://tldp.org/LDP/abs/html/process-sub.html
As for the use of cat: cat waits for all its inputs to complete before exiting, and by keeping one of the pipes open it keeps grep alive.
If you have pv and want to slowly increase RAM use:
yes | tr '\n' x | head -c $BYTES | pv -L $BYTESPERSEC | grep n
For example:
yes | tr '\n' x | head -c $((1024*1024*1024)) | pv -L $((1024*1024)) | grep n
will use up to a gigabyte at a rate of 1 MB per second. As an added bonus, pv will show you the current rate of use and the total use so far. Of course this can also be done with the previous variants:
yes | tr '\n' x | head -c $BYTES | pv | grep n
Just inserting the | pv | part will show you the current status (throughput and total by default, I think; otherwise see the manual page).
Why another answer?
- The accepted answer recommends installing a package (I bet there's a release for every chipset without needing a package manager);
- the top voted answer recommends compiling a C program (I did not have a compiler or toolchain installed to compile for your target platform);
- the second top voted answer recommends running the application in a VM (yeah, let me just dd this phone's internal sdcard over usb or something and create a virtualbox image);
- the third suggests modifying something in the boot sequence, which does not fill the RAM as desired;
- the fourth only works insofar as the /dev/shm mountpoint (1) exists and (2) is large (remounting needs root);
- the fifth combines many of the above without sample code;
- the sixth is a great answer, but I did not see it before coming up with my own approach, so I thought I'd add my own, also because it's shorter to remember or type over if you don't see that the memblob line is actually the crux of the matter;
- the seventh again does not answer the question (it uses ulimit to limit a process instead);
- the eighth tries to get you to install python;
- the ninth thinks we're all very uncreative;
- and finally the tenth wrote his own C++ program, which causes the same issue as the top voted answer.
Lovely solution. The only glitch is that the exit code of the construct is 1, because grep does not find a match. None of the solutions from stackoverflow.com/questions/6550484/… seem to fix it.
– Holger Brandl
May 5 '16 at 18:50
@HolgerBrandl Good point, I wouldn't know how to fix that. This is the first time I heard of set -e, so I just learned something :)
– Luc
May 5 '16 at 19:51
$SECONDS does not seem a good choice since it's a built-in variable reflecting the time since the shell was started; see tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_02.html
– Holger Brandl
May 10 '16 at 9:42
@HolgerBrandl Good catch, I didn't know that. Kinda cool to find a terminal that's open for >3 million seconds currently :D. I updated the post.
– Luc
May 11 '16 at 7:40
Cool technique! time yes | tr \n x | head -c $((1024*1024*1024*10)) | grep n (use 10 GiB of memory) takes 1 minute 46 seconds. Running julman99's eatmemory program at github.com/julman99/eatmemory takes 6 seconds. ...Well, plus the download and compile time, but it compiled with no issue, and very quickly, on my RHEL 6.4 machine. Still, I like this solution. Why reinvent the wheel?
– Mike S
Apr 7 '17 at 20:53
I keep a function to do something similar in my dotfiles. https://github.com/sagotsky/.dotfiles/blob/master/.functions#L248
function malloc() {
    N=$(free -m | grep Mem: | awk '{print int($2/10)}')
    if [[ $N -gt $1 ]]; then
        N=$1
    fi
    sh -c "MEMBLOB=$(dd if=/dev/urandom bs=1MB count=$N) ; sleep 1"
}
This is the nicest solution IMHO, as it essentially only needs dd to work; all the other stuff can be worked around in any shell. Note that it actually claims twice the memory of the data dd produces, at least temporarily. Tested on Debian 9, dash 0.5.8-2.4. If you use bash for running the MEMBLOB part, it becomes really slow and uses four times the amount that dd produces.
– P.Péter
Oct 16 '18 at 7:46
answered Nov 8 '13 at 14:06
valadil
How about a simple Python solution?
#!/usr/bin/env python
import sys
import time

if len(sys.argv) != 2:
    print("usage: fillmem <number-of-megabytes>")
    sys.exit(1)

count = int(sys.argv[1])
megabyte = (0,) * (1024 * 1024 // 8)
data = megabyte * count

while True:
    time.sleep(1)
That will probably quickly be swapped out, having very little actual impact on memory pressure (unless you fill up all the swap as well, which will take a while, usually)
– Joachim Sauer
Nov 8 '13 at 13:22
Why would a Unix system swap while there is available RAM? This is actually a plausible way to evict the disk cache when need be.
– Alexander Shcheblikin
Nov 8 '13 at 23:04
@AlexanderShcheblikin This question isn't about evicting disk cache (which is useful for performance testing but not for low resources testing).
– Gilles
Nov 9 '13 at 14:40
This solution worked to cobble up a Gig or two in my tests, though I didn't try to stress my memory. But, @JoachimSauer, one could set sysctl vm.swappiness=0 and furthermore set vm.min_free_kbytes to a small number, maybe 1024. I haven't tried it, but the docs say that this is how you control the quickness of swapping out... you should be able to make it quite slow indeed, to the point of causing an OOM condition on your machine. See kernel.org/doc/Documentation/sysctl/vm.txt and kernel.org/doc/gorman/html/understand/understand005.html
– Mike S
Apr 4 '17 at 20:03
answered Nov 8 '13 at 12:33
swiftcoder
How about ramfs if it exists? Mount it and copy over a large file?
If there's no /dev/shm and no ramfs - I guess a tiny C program that does a large malloc based on some input value? Might have to run it a few times at once on a 32-bit system with a lot of memory.
answered Nov 8 '13 at 12:30
nemo
If you want to test a particular process with limited memory you might be better off using ulimit to restrict the amount of allocatable memory.
Actually this does not work on linux (dunno about other *nixes). man setrlimit: RLIMIT_RSS Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED.
– Patrick
Nov 8 '13 at 13:46
answered Nov 8 '13 at 13:19
sj26
I think this is a case of asking the wrong question and sanity being drowned out by people competing for the most creative answer. If you only need to simulate OOM conditions, you don't need to fill memory. Just use a custom allocator and have it fail after a certain number of allocations. This approach seems to work well enough for SQLite.
answered Nov 8 '13 at 22:01
Craig Barnes
I wrote this little C++ program for that: https://github.com/rmetzger/dynamic-ballooner
The advantage of this implementation is that it periodically checks whether it needs to free or re-allocate memory.
answered Nov 8 '13 at 13:27
Robert Metzger
protected by Gilles Nov 9 '13 at 14:37
Does it really have to work on any *nix system?
– a CVn
Nov 8 '13 at 12:31
Instead of just filling memory, could you instead create a VM (using docker, or vagrant, or something similar) that has a limited amount of memory?
– abendigo
Nov 8 '13 at 13:27
@abendigo For a QA many of the solutions presented here are useful: for a general-purpose OS without a specific platform the VM or kernel boot parameters could be useful, but for an embedded system where you know the memory specification of the targeted system I would go for filling the free memory.
– Eduard Florinescu
Nov 9 '13 at 17:40
In case anyone else is a little shocked by the scoring here: meta.unix.stackexchange.com/questions/1513/…?
– goldilocks
Nov 13 '13 at 14:46
See also: unix.stackexchange.com/a/1368/52956
– Wilf
Jun 18 '15 at 18:42