Interruption of system calls when a signal is caught
From reading the man pages on the read() and write() calls, it appears that these calls get interrupted by signals regardless of whether they have to block or not.
In particular, assume:

- a process establishes a handler for some signal;
- a device (say, a terminal) is opened with O_NONBLOCK not set (i.e. operating in blocking mode);
- the process then makes a read() system call to read from the device, and as a result executes a kernel control path in kernel-space;
- while the process is executing its read() in kernel-space, the signal for which the handler was installed earlier is delivered to that process and its signal handler is invoked.

A minimal program reproducing this setup is sketched just below.
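For concreteness, here is a minimal sketch of that setup (the concrete choices are mine for illustration and not part of the question: SIGINT as the signal, a handler installed via sigaction() without SA_RESTART, and the controlling terminal on stdin as the device):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void handler(int sig)
{
        (void)sig;      /* nothing to do; the interruption itself is the point */
}

int main(void)
{
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;        /* no SA_RESTART, so read() may fail with EINTR */
        sigaction(SIGINT, &sa, NULL);

        char buf[256];
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);  /* blocking read */
        if (n == -1 && errno == EINTR)
                printf("read() was interrupted before any data arrived (case i. below)\n");
        else if (n >= 0)
                printf("read() returned %zd bytes\n", n);
        return 0;
}

Pressing Ctrl+C while the read() is blocked should produce the case i. result described next.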
Reading the man pages and the appropriate sections in SUSv3 'System Interfaces volume (XSH)', one finds that:

i. If a read() is interrupted by a signal before it reads any data (i.e. it had to block because no data was available), it returns -1 with errno set to [EINTR].

ii. If a read() is interrupted by a signal after it has successfully read some data (i.e. it was possible to start servicing the request immediately), it returns the number of bytes read.
Question A): Am I correct to assume that in either case (block/no block) the delivery and handling of the signal is not entirely transparent to the read()?
Case i. seems understandable, since the blocking read() would normally place the process in the TASK_INTERRUPTIBLE state, so that when a signal is delivered, the kernel places the process into the TASK_RUNNING state.
However, when the read() doesn't need to block (case ii.) and is processing the request in kernel-space, I would have thought that the arrival of a signal and its handling would be transparent, much like the arrival and proper handling of a HW interrupt would be. In particular, I would have assumed that upon delivery of the signal, the process would be temporarily placed into user mode to execute its signal handler, from which it would eventually return to finish off processing the interrupted read() (in kernel-space), so that the read() runs its course to completion, after which the process returns to the point just after the call to read() (in user-space), with all of the available bytes read as a result.
But ii. seems to imply that the read() is interrupted, since data is available immediately, yet it returns only some of the data (instead of all of it).
This brings me to my second (and final) question:

Question B): If my assumption under A) is correct, why does the read() get interrupted even though it does not need to block (there is data available to satisfy the request immediately)? In other words, why is the read() not resumed after executing the signal handler, eventually resulting in all of the available data (which was available after all) being returned?
kernel signals architecture system-calls
asked Jul 11 '11 at 5:30 by darbehdar; edited Jul 11 '11 at 21:12 by Gilles
2 Answers
Summary: you're correct that receiving a signal is not transparent, neither in case i (interrupted without having read anything) nor in case ii (interrupted after a partial read). To do otherwise in case i would require making fundamental changes both to the architecture of the operating system and the architecture of applications.
The OS implementation view
Consider what happens if a system call is interrupted by a signal. The signal handler will execute user-mode code. But the syscall handler is kernel code and does not trust any user-mode code. So let's explore the choices for the syscall handler:
- Terminate the system call; report how much was done to the user code. It's up to the application code to restart the system call in some way, if desired. That's how unix works.
- Save the state of the system call, and allow the user code to resume the call. This is problematic for several reasons:
- While the user code is running, something could happen to invalidate the saved state. For example, if reading from a file, the file might be truncated. So the kernel code would need a lot of logic to handle these cases.
- The saved state can't be allowed to keep any lock, because there's no guarantee that the user code will ever resume the syscall, and then the lock would be held forever.
- The kernel must expose new interfaces to resume or cancel ongoing syscalls, in addition to the normal interface to start a syscall. This is a lot of complication for a rare case.
- The saved state would need to use resources (memory, at least); those resources would need to be allocated and held by the kernel but be counted against the process's allotment. This isn't insurmountable, but it is a complication.
- Note that the signal handler might make system calls that themselves get interrupted; so you can't just have a static resource allotment that covers all possible syscalls.
- And what if the resources cannot be allocated? Then the syscall would have to fail anyway. Which means the application would need to have code to handle this case, so this design would not simplify the application code.
- Remain in progress (but suspended), create a new thread for the signal handler. This, again, is problematic:
- Early unix implementations had a single thread per process.
- The signal handler would risk stepping on the syscall's toes. This is an issue anyway, but in the current unix design, it's contained.
- Resources would need to be allocated for the new thread; see above.
The main difference with an interrupt is that the interrupt code is trusted, and highly constrained. It's usually not allowed to allocate resources, or run forever, or take locks and not release them, or do any other kind of nasty things; since the interrupt handler is written by the OS implementer himself, he knows that it won't do anything bad. On the other hand, application code can do anything.
The application design view
When an application is interrupted in the middle of a system call, should the syscall continue to completion? Not always. For example, consider a program like a shell that's reading a line from the terminal, and the user presses Ctrl+C, triggering SIGINT. The read must not complete; that's what the signal is all about. Note that this example shows that the read syscall must be interruptible even if no byte has been read yet.
So there must be a way for the application to tell the kernel to cancel the system call. Under the unix design, that happens automatically: the signal makes the syscall return. Other designs would require a way for the application to resume or cancel the syscall at its leisure.
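Note that unix also gives applications a way to express the opposite preference: installing the handler with sigaction()'s SA_RESTART flag asks the kernel to transparently restart many interrupted syscalls after the handler returns (exactly which syscalls are restarted varies by implementation; see signal(7) on Linux). A small sketch, with a hypothetical helper name:

#include <signal.h>
#include <string.h>

/* Install fn as the handler for sig with SA_RESTART, so that most
   interruptible syscalls resume after fn returns instead of failing
   with EINTR. (Helper name and lack of error handling are illustrative.) */
static void install_restarting_handler(int sig, void (*fn)(int))
{
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = fn;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;
        sigaction(sig, &sa, NULL);
}

Even with SA_RESTART, a read() that has already transferred some bytes returns the partial count (on Linux, at least); automatic restarting only covers the case where nothing has been read yet.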
The read system call is the way it is because it's the primitive that makes sense, given the general design of the operating system. What it means is, roughly, “read as much as you can, up to a limit (the buffer size), but stop if something else happens”. To actually read a full buffer involves running read in a loop until as many bytes as possible have been read; this is a higher-level function, fread(3). Unlike read(2), which is a system call, fread is a library function, implemented in user space on top of read. It's suitable for an application that reads from a file or dies trying; it's not suitable for a command line interpreter, or for a networked program that must throttle connections cleanly, nor for a networked program that has concurrent connections and doesn't use threads.
The example of read in a loop is provided in Robert Love's Linux System Programming:
/* Requires <unistd.h>, <errno.h> and <stdio.h>; fd, buf and len are
   defined by the surrounding code. */
ssize_t ret;
while (len != 0 && (ret = read (fd, buf, len)) != 0) {
        if (ret == -1) {
                if (errno == EINTR)
                        continue;       /* interrupted: retry the read */
                perror ("read");
                break;                  /* real error: give up */
        }
        len -= ret;                     /* partial read: advance the cursor */
        buf += ret;
}
It takes care of case i and case ii, and a few more.

answered Jul 11 '11 at 21:12 by Gilles
Thanks very much Gilles for a very concise and clear answer which corroborates similar views put forward in an article on the UNIX design philosophy. Seems very convincing to me that the syscall interruption behaviour has to do with the UNIX design philosophy rather than technical constraints or impediments.
– darbehdar Jul 12 '11 at 5:32
@darbehdar It's all three: unix design philosophy (here mainly that processes are less trusted than the kernel and can run arbitrary code, also that processes and threads are not created implicitly), technical constraints (on resource allocations), and application design (there are cases when the signal must cancel the syscall).
– Gilles Jul 12 '11 at 7:11
To answer question A: yes, the delivery and handling of the signal is not entirely transparent to the read().
A read() that has run halfway may be occupying some resources when it is interrupted by the signal. And the signal handler may itself call another read() (or any other async-signal-safe syscall). So the read() interrupted by the signal must be stopped first in order to release the resources it uses; otherwise the read() called from the signal handler would access the same resources and cause reentrancy issues.
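As a sketch of the constraint this places on handlers: the POSIX async-signal-safe list contains raw syscalls such as read() and write(), but not stdio functions such as printf(), which may take locks that the interrupted code already holds. (The handler name below is illustrative.)

#include <unistd.h>

/* Only async-signal-safe calls in here: write(2) is on the POSIX
   async-signal-safe list; printf(3) is not. */
static void on_signal(int sig)
{
        (void)sig;
        static const char msg[] = "signal caught\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
}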
The same applies to system calls other than read(): they could be called from the signal handler, and they may occupy the same set of resources that read() does. To avoid the reentrancy issues above, the simplest, safest design is to stop the interrupted read() whenever a signal arrives during its run.

answered May 16 '13 at 18:56 by Justin