Why is `sed` no-op much faster than `awk` in this case?
I am trying to understand some performance issues related to `sed` and `awk`, and I ran the following experiment:

$ seq 100000 > test
$ yes 'NR==100001{print}' | head -n 5000 > test.awk
$ yes '100001{p;b}' | head -n 5000 > test.sed
$ time sed -nf test.sed test
real 0m3.436s
user 0m3.428s
sys 0m0.004s
$ time awk -F@ -f test.awk test
real 0m11.615s
user 0m11.582s
sys 0m0.007s
$ sed --version
sed (GNU sed) 4.5
$ awk --version
GNU Awk 4.2.1, API: 2.0 (GNU MPFR 3.1.6-p2, GNU MP 6.1.2)

Here, since the test file only contains 100000 lines, all the commands in test.sed and test.awk are no-ops. Both programs only need to match the line number against the address (in `sed`) or against `NR` (in `awk`) to decide that the command does not need to be executed, yet there is still a huge difference in running time. Why is this the case? Does anyone with different versions of `sed` and `awk` installed get a different result on this test?
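For reference, the experiment can be reproduced end to end with one script. This is a sketch using the braced forms of the scripts; the sizes are parameters (scaled down here so it runs quickly) and can be raised back to 100000/5000 to reproduce the timings:

```shell
# Rebuild the test inputs from the question (scaled-down sizes; raise
# INPUT_LINES/SCRIPT_LINES to 100000/5000 to match the timings above).
INPUT_LINES=1000
SCRIPT_LINES=50
seq "$INPUT_LINES" > test
yes 'NR==100001{print}' | head -n "$SCRIPT_LINES" > test.awk
yes '100001{p;b}'       | head -n "$SCRIPT_LINES" > test.sed
# The address 100001 never matches, so both runs should produce no output:
sed_out=$(sed -nf test.sed test)
awk_out=$(awk -F@ -f test.awk test)
printf 'sed output bytes: %s\nawk output bytes: %s\n' "${#sed_out}" "${#awk_out}"
```

Wrapping the two invocations in `time` (as in the question) then isolates the cost of the never-matching test itself.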
Edit:
The results for `mawk` (as suggested by @mosvy), `original-awk` (the name for the "one true awk" on Debian-based systems, suggested by @GregA.Woods) and `perl` are given below:
$ time mawk -F@ -f test.awk test
real 0m5.934s
user 0m5.919s
sys 0m0.004s
$ time original-awk -F@ -f test.awk test
real 0m8.132s
user 0m8.128s
sys 0m0.004s
$ yes 'print if $.==100001;' | head -n 5000 > test.pl
$ time perl -n test.pl test
real 0m33.245s
user 0m33.110s
sys 0m0.019s
$ mawk -W version
mawk 1.3.4 20171017
$ perl --version
This is perl 5, version 28, subversion 1 (v5.28.1) built for x86_64-linux-thread-multi
Replacing `-F@` with `-F ''` does not make an observable difference for `gawk` or `mawk`. `original-awk` does not support an empty `FS`.
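The effect of an empty `FS` is easy to inspect directly. A null `FS` is implementation-defined (POSIX leaves it unspecified), so the per-character count below assumes a gawk- or mawk-like `awk`:

```shell
# With an empty FS, gawk/mawk split the record into one field per character.
nf_empty=$(printf 'abc\n' | awk -F '' '{ print NF }')
# With -F@ and no '@' anywhere in the record, the whole line is one field.
nf_at=$(printf 'abc\n' | awk -F@ '{ print NF }')
echo "FS='' gives NF=$nf_empty, FS='@' gives NF=$nf_at"
```

So `-F ''` does not avoid splitting work; if anything it creates more fields, which is consistent with it making no positive difference above.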
Edit 2:
The test by @mosvy gives different results, 21s for `sed` and 11s for `mawk`; see the comments below for details.
Tags: awk, sed, perl, performance
I also suggest you try it with `mawk` ;-)
– mosvy, 2 days ago

Without any testing, I wonder if awk is doing more work per line because of the -F@ field splitting.
– Jeff Schaller, 2 days ago

One should always test Awk performance and compatibility against The One True Awk. github.com/onetrueawk/awk
– Greg A. Woods, yesterday

@JeffSchaller I tried to figure out a way so that awk does not do any field splitting at all, but failed, at least for GNU awk. Setting `FS` to the empty string seems to cause `awk` to split each individual character into its own field.
– Weijun Zhou, 23 hours ago

@GregA.Woods Updated.
– Weijun Zhou, 16 hours ago
asked 2 days ago by Weijun Zhou; edited 14 hours ago
2 Answers
`awk` has a wider feature set than `sed`, with a more flexible syntax, so it's not unreasonable that it takes longer both to parse its scripts and to execute them.
As your example command (the part inside the braces) never runs, the time-sensitive part should be your test expression.
awk
First, look at the test in the `awk` example:
NR==100001
and see the effects of that in `gprof` (GNU awk 4.0.1):
% cumulative self self total
time seconds seconds calls s/call s/call name
55.89 19.73 19.73 1 19.73 35.04 interpret
8.90 22.87 3.14 500000000 0.00 0.00 cmp_scalar
8.64 25.92 3.05 1000305023 0.00 0.00 free_wstr
8.61 28.96 3.04 500105014 0.00 0.00 mk_number
6.09 31.11 2.15 500000001 0.00 0.00 cmp_nodes
4.18 32.59 1.48 500200013 0.00 0.00 unref
3.68 33.89 1.30 500000000 0.00 0.00 eval_condition
2.21 34.67 0.78 500000000 0.00 0.00 update_NR
~50% of the time is spent in "interpret", the top-level loop to run the opcodes resulting from the parsed script.
Every time the test is run (i.e. 5000 script lines × 100000 input lines), `awk` has to:

- Fetch the built-in variable NR (`update_NR`).
- Convert the string "100001" to a number (`mk_number`).
- Compare them (`cmp_nodes`, `cmp_scalar`, `eval_condition`).
- Discard any temporary objects needed for the comparison (`free_wstr`, `unref`).
Other `awk` implementations won't have the exact same call flow, but they will still have to retrieve variables, automatically convert, then compare.
sed
By comparison, in `sed`, the "test" is much more limited. It can only be a single address, an address range, or nothing (when the command is the first thing on the line), and `sed` can tell from the first character whether it's an address or a command. In the example, it's
100001
...a single numerical address. The profile (GNU sed 4.2.2) shows:
% cumulative self self total
time seconds seconds calls s/call s/call name
52.01 2.98 2.98 100000 0.00 0.00 execute_program
44.16 5.51 2.53 1000000000 0.00 0.00 match_address_p
3.84 5.73 0.22 match_an_address_p
[...]
0.00 5.73 0.00 5000 0.00 0.00 in_integer
Again, ~50% of the time is in the top-level `execute_program`. In this case, it's called once per input line, then loops over the parsed commands. The loop starts with an address check.
The important comparison, `match_address_p`, is also called 2×5000×100000 times[^1], but it only compares integers that are already available (through structs and pointers).
The line numbers in the input script were parsed at compile time (`in_integer`). That only has to be done once for each address number in the script, i.e. 5000 times, and doesn't make a significant contribution to the overall running time.
[^1]: I'm not entirely sure why 2× yet, but the braces make a difference. When the lines read `100001p`, it's only checked once per script line × input line; `100001{p;b}` uses 2×.
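The address matching described above, and the role of the `b` in the question's script, can be seen on a tiny example. This is a sketch using three copies of the script line instead of 5000:

```shell
# Three copies of the question's braced script line: when line 5 matches the
# first copy, 'p' prints it and 'b' branches to the end of the script, so the
# remaining copies are skipped and the line is printed only once.
with_b=$(seq 10 | sed -n -e '5{p;b}' -e '5{p;b}' -e '5{p;b}')
# Without the branch, every copy of the command fires on the matching line.
without_b=$(seq 10 | sed -n -e '5p' -e '5p' -e '5p')
echo "with b:    $with_b"
echo "without b: $(echo "$without_b" | tr '\n' ' ')"
```

The non-matching lines still pay for an address check against every script line, which is exactly the cost that `match_address_p` accumulates in the profile.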
JigglyNaga, what was the command to get the output like that, please?
– Tagwint, 11 hours ago

@Tagwint I recompiled `awk` and `sed` with profiling enabled, then used `gprof` (part of binutils). Though the large numbers meant I had to realign the columns manually.
– JigglyNaga, 11 hours ago
Actually the above script is not a no-op for awk:
Even if you do not use the contents of the fields, according to the GAWK manual, for each record that is read the following steps are inevitably performed:

- scanning for all occurrences of the FS
- field splitting
- updating the NF variable

If you are not using this information, it just gets discarded afterwards.
If a field separator does not occur within the record, awk still has to assign the text to $0 (and, in your case, to $1 too), and set NF to the actual number of fields obtained (1 in the sample above).
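The single-field case described above can be checked directly: since `@` never occurs in the question's numeric input, every record ends up as one field with `$1` equal to `$0`:

```shell
# With -F@ and no '@' in the record, splitting yields exactly one field...
line_nf=$(printf '12345\n' | awk -F@ '{ print NF }')
# ...and that one field is the whole record.
line_f1=$(printf '12345\n' | awk -F@ '{ print $1 }')
echo "NF=$line_nf, first field=$line_f1"
```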
All that doesn't really make a difference -- try `time gawk '$1=$1+$1' test >/dev/null`; it's really the big unrealistic script that's blowing it up. Also notice that (at least the original awk) does not do splitting until the `$1`, ... fields are first used.
– mosvy, 14 hours ago
The first answer was posted 12 hours ago by JigglyNaga and edited 11 hours ago.
JiggilyNaga, what was the command to get the output like that, please?
– Tagwint
11 hours ago
@Tagwint I recompiledawk
andsed
with profiling enabled, then usedgprof
(part of binutils). Though the large numbers meant I had to realign the columns manually.
– JigglyNaga
11 hours ago
add a comment |
JiggilyNaga, what was the command to get the output like that, please?
– Tagwint
11 hours ago
@Tagwint I recompiledawk
andsed
with profiling enabled, then usedgprof
(part of binutils). Though the large numbers meant I had to realign the columns manually.
– JigglyNaga
11 hours ago
JiggilyNaga, what was the command to get the output like that, please?
– Tagwint
11 hours ago
JiggilyNaga, what was the command to get the output like that, please?
– Tagwint
11 hours ago
@Tagwint I recompiled
awk
and sed
with profiling enabled, then used gprof
(part of binutils). Though the large numbers meant I had to realign the columns manually.– JigglyNaga
11 hours ago
@Tagwint I recompiled
awk
and sed
with profiling enabled, then used gprof
(part of binutils). Though the large numbers meant I had to realign the columns manually.– JigglyNaga
11 hours ago
add a comment |
Actually the above script is not a noop for awk:
Even if you do not use the contents of the fields, according to GAWK manual for each record that is read in the following steps are inevitably performed:
- scanning for all occurrences of the FS
- field splitting
- updating th NF variable
If you are not using this information it just gets discarded afterwards.
If a field separator does not occur within the record, awk still has to assign text to $0 (and in your case to $1, too), and set NF to the actual number of obtained fields (1 in the sample above)
all that doesn't really make a difference -- try time gawk '$1=$1+$1' test >/dev/null; it's really the big unrealistic script that's blowing it up. Also notice that (at least the original awk) does not do splitting until the $1, ... fields are first used.
– mosvy
14 hours ago
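A minimal sketch of the comparison mosvy suggests, assuming a test file of numeric lines; assigning to $1 forces both field splitting and the rebuilding of $0:

```shell
seq 1000000 > /tmp/test
time awk '1' /tmp/test > /dev/null          # no field access at all
time awk '$1=$1+$1' /tmp/test > /dev/null   # forces splitting; prints doubled values
```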
edited 14 hours ago
answered 14 hours ago
jf1
I also suggest you try it with mawk ;-)
– mosvy
2 days ago
Without any testing, I wonder if awk is doing more work per line because of the -F@ field splitting.
– Jeff Schaller
2 days ago
One should always test Awk performance and compatibility against The One True Awk. github.com/onetrueawk/awk
– Greg A. Woods
yesterday
@JeffSchaller I tried to figure out a way so that awk does not do any field splitting at all, but failed, at least for GNU awk. Setting FS to an empty string seems to cause awk to split each individual character into its own field.
– Weijun Zhou
23 hours ago
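The behaviour Weijun Zhou describes can be checked directly (per-character splitting on an empty FS is a gawk/mawk extension, not POSIX):

```shell
# With FS set to the empty string, each character becomes its own field.
echo abc | awk 'BEGIN{FS=""} {print NF}'   # → 3 in gawk/mawk
```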
@GregA.Woods Updated.
– Weijun Zhou
16 hours ago