How can I calculate the size of shared memory available to the system?
According to the RHEL documentation, the total amount of shared memory available on the system equals shmall * PAGE_SIZE.
After I completed the installation of RHEL 6, the shmall kernel parameter defaults to 4294967296, which means that up to 4294967296 shared memory pages can be used system-wide, and the page size is 4096 bytes. So, based on the formula, the size of shared memory is
4294967296 * 4096 / 1024 / 1024 / 1024 / 1024 = 16 TB
which is much more than the RAM (8 GB) the operating system has. How can an OS find 16 TB of shared memory to allocate?
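(As a quick sanity check, the same figure can be reproduced on a running system; this is only a minimal sketch, assuming the default values above:)
# read shmall (in pages) and the page size, then convert to binary terabytes
echo $(( $(cat /proc/sys/kernel/shmall) * $(getconf PAGE_SIZE) / 1024**4 )) TB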
So, is the size of /dev/shm actually equal to the size of shared memory? If not, how can I get the actual size of shared memory?
memory shared-memory
edited Nov 2 '16 at 9:44
asked Nov 2 '16 at 8:52
user4535727
Please don't ask multiple questions in a single post. I have removed the second one, you can ask it as a separate question. Also, please edit your question and i) clarify why you say the system has less memory than what you show. What kind of memory are you referring to? How do you measure it? ii) How do you get 16TB from your formula? What you show is 16 gigabits, not 16 terabytes.
– terdon♦
Nov 2 '16 at 9:24
1 Answer
Your calculation is correct. shmall can be set higher than the available virtual memory. If you tried to use all of it, the allocation would not fail because shmall was exceeded, but for other reasons.
BTW there are also commands to find these IPC limits:
ipcs -l
lsipc # util-linux>=2.27
Note that even virtual memory is effectively unlimited on Linux by default: it can be overcommitted beyond RAM + swap. See
https://serverfault.com/questions/606185/how-does-vm-overcommit-memory-work
and "How does the OOM killer decide which process to kill first?"
On the other hand, you could limit the virtual memory per process using ulimit -v, which wouldn't affect the kernel's /proc/sys/kernel/shmall either.
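For example (a rough sketch; the exact numbers depend on your system), you can compare the SysV limit derived from shmall with the size of the tmpfs mounted on /dev/shm, which by default is only about half of physical RAM and is independent of shmall:
# SysV shared memory limits
cat /proc/sys/kernel/shmall    # limit in pages
getconf PAGE_SIZE              # page size in bytes (usually 4096)
ipcs -l                        # the same limits as reported by util-linux
# POSIX shared memory (shm_open) lives in the tmpfs on /dev/shm;
# its size defaults to 50% of RAM and can be changed by remounting
df -h /dev/shm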
edited Apr 13 '17 at 12:36
Community♦
answered Nov 2 '16 at 10:25
rudimeier
The result of ipcs -l depends on the shmall setting; it does not show the real limits.
– user4535727
Nov 2 '16 at 11:42
@user4535727 Probably a bug because of integer overflow. Maybe it's correct for smaller values: echo $(( 1024*1024*1024 )) > /proc/sys/kernel/shmall; ipcs -l
– rudimeier
Nov 2 '16 at 11:59