Why is `sed` no-op much faster than `awk` in this case?
I am trying to understand some performance issues related to sed and awk, so I ran the following experiment:



$ seq 100000 > test
$ yes 'NR==100001{print}' | head -n 5000 > test.awk
$ yes '100001{p;b}' | head -n 5000 > test.sed
$ time sed -nf test.sed test
real 0m3.436s
user 0m3.428s
sys 0m0.004s
$ time awk -F@ -f test.awk test
real 0m11.615s
user 0m11.582s
sys 0m0.007s
$ sed --version
sed (GNU sed) 4.5
$ awk --version
GNU Awk 4.2.1, API: 2.0 (GNU MPFR 3.1.6-p2, GNU MP 6.1.2)


Here, since the test file only contains 100000 lines, all the commands in test.sed and test.awk are no-ops: both programs only need to match the line number against the address (in sed) or NR (in awk) to decide that the command does not need to be executed. Yet there is still a huge difference in the time cost. Why is that the case? Does anyone with different versions of sed and awk installed get a different result on this test?
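Not part of the original timings, but a reduced-scale sanity check (hypothetical file names) confirms that the scripts really are no-ops; neither command should print anything, because the address 1001 is past the end of a 1000-line file:

```shell
# 50 copies of a rule whose address (1001) never matches a 1000-line file:
# neither sed nor awk should produce any output.
seq 1000 > test.small
yes 'NR==1001{print}' | head -n 50 > small.awk
yes '1001{p;b}'       | head -n 50 > small.sed
sed -nf small.sed test.small    # no output
awk  -f small.awk  test.small   # no output
```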



Edit:
The results for mawk (as suggested by @mosvy), original-awk (the name for the "one true awk" on Debian-based systems, suggested by @GregA.Woods) and perl are given below:



$ time mawk -F@ -f test.awk test
real 0m5.934s
user 0m5.919s
sys 0m0.004s
$ time original-awk -F@ -f test.awk test
real 0m8.132s
user 0m8.128s
sys 0m0.004s
$ yes 'print if $.==100001;' | head -n 5000 > test.pl
$ time perl -n test.pl test
real 0m33.245s
user 0m33.110s
sys 0m0.019s
$ mawk -W version
mawk 1.3.4 20171017
$ perl --version
This is perl 5, version 28, subversion 1 (v5.28.1) built for x86_64-linux-thread-multi


Replacing -F@ with -F '' makes no observable difference for gawk or mawk. original-awk does not support an empty FS.



Edit 2
The test by @mosvy gives different results: 21 s for sed and 11 s for mawk; see the comment below for details.

  • I also suggest you try it with mawk ;-) – mosvy, 2 days ago

  • Without any testing, I wonder if awk is doing more work per line because of the -F@ field splitting. – Jeff Schaller, 2 days ago

  • One should always test Awk performance and compatibility against The One True Awk. github.com/onetrueawk/awk – Greg A. Woods, yesterday

  • @JeffSchaller I tried to figure out a way so that awk does not do any field splitting at all, but failed, at least for GNU awk. Setting FS to an empty string seems to cause awk to split each individual character into its own field. – Weijun Zhou, 23 hours ago

  • @GregA.Woods Updated. – Weijun Zhou, 16 hours ago
Tags: awk sed perl performance

asked 2 days ago, edited 14 hours ago by Weijun Zhou
2 Answers
awk has a wider feature set than sed, with a more flexible syntax, so it's not unreasonable that it takes longer both to parse its scripts and to execute them.



As your example command (the part inside the braces) never runs, the time-sensitive part should be your test expression.



awk



First, look at the test in the awk example:



NR==100001


and see the effects of that in gprof (GNU awk 4.0.1):




% cumulative self self total
time seconds seconds calls s/call s/call name
55.89 19.73 19.73 1 19.73 35.04 interpret
8.90 22.87 3.14 500000000 0.00 0.00 cmp_scalar
8.64 25.92 3.05 1000305023 0.00 0.00 free_wstr
8.61 28.96 3.04 500105014 0.00 0.00 mk_number
6.09 31.11 2.15 500000001 0.00 0.00 cmp_nodes
4.18 32.59 1.48 500200013 0.00 0.00 unref
3.68 33.89 1.30 500000000 0.00 0.00 eval_condition
2.21 34.67 0.78 500000000 0.00 0.00 update_NR


~50% of the time is spent in "interpret", the top-level loop to run the opcodes resulting from the parsed script.



Every time the test is run (i.e. 5000 script lines * 100000 input lines), awk has to:



  • Fetch the built-in variable "NR" (update_NR).

  • Convert the string "100001" to a number (mk_number).

  • Compare them (cmp_nodes, cmp_scalar, eval_condition).

  • Discard any temporary objects needed for the comparison (free_wstr, unref).

Other awk implementations won't have the exact same call flow, but they will still have to retrieve variables, automatically convert, then compare.
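A rough way to observe this per-test overhead outside of gprof is to vary only the number of copies of the never-matching rule (a sketch with hypothetical file names; absolute times depend on the machine, but the count of NR tests scales exactly with script length):

```shell
# Two awk scripts that differ only in how many copies of the
# never-matching rule they contain, timed on the same input.
seq 10000 > in.txt
yes 'NR==10001{print}' | head -n 100  > t100.awk
yes 'NR==10001{print}' | head -n 1000 > t1000.awk

time awk -f t100.awk  in.txt   # no output
time awk -f t1000.awk in.txt   # no output, 10x more NR tests
```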



sed



By comparison, in sed, the "test" is much more limited. It can only be a single address, an address range, or nothing (when the command is the first thing on the line), and sed can tell from the first character whether it's an address or command. In the example, it's



100001


...a single numerical address. The profile (GNU sed 4.2.2) shows




% cumulative self self total
time seconds seconds calls s/call s/call name
52.01 2.98 2.98 100000 0.00 0.00 execute_program
44.16 5.51 2.53 1000000000 0.00 0.00 match_address_p
3.84 5.73 0.22 match_an_address_p
[...]
0.00 5.73 0.00 5000 0.00 0.00 in_integer


Again, ~50% of the time is in the top-level execute_program. In this case, it's called once per input line and then loops over the parsed commands. Each iteration of that loop starts with an address check.



The important comparison, match_address_p, is also called 2*5000*100000 times[^1], but it only compares integers that are already available (through structs and pointers).



The line numbers in the script were parsed at compile time (in_integer). That only has to be done once for each address number in the script, i.e. 5000 times, and doesn't make a significant contribution to the overall running time.



[^1]: I'm not entirely sure why it's 2x yet, but the braces make a difference. When the lines read 100001p, the address is checked only once per script line per input line; 100001{p} checks it twice.
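The two spellings are behaviorally identical; only sed's internal address bookkeeping differs. A quick sanity check (GNU sed assumed):

```shell
# Same visible behavior, different internal address-check counts:
seq 5 | sed -n '3p'     # -> 3
seq 5 | sed -n '3{p}'   # -> 3
```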

answered 12 hours ago, edited 11 hours ago by JigglyNaga
  • JigglyNaga, what was the command to get the output like that, please? – Tagwint, 11 hours ago

  • @Tagwint I recompiled awk and sed with profiling enabled, then used gprof (part of binutils). Though the large numbers meant I had to realign the columns manually. – JigglyNaga, 11 hours ago
Actually, the above script is not a no-op for awk:



Even if you do not use the contents of the fields, according to the GAWK manual, for each record that is read the following steps are inevitably performed:



  • scanning for all occurrences of the FS

  • field splitting

  • updating the NF variable

If you are not using this information, it just gets discarded afterwards.



If a field separator does not occur within the record, awk still has to assign the text to $0 (and, in your case, to $1 too), and set NF to the actual number of fields obtained (1 in the sample above).
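A minimal illustration (not from the answer itself) of the splitting that -F@ triggers as each record is read:

```shell
# The record is split on '@' as it is read: NF becomes 3 and $2 is 'b'.
echo 'a@b@c' | awk -F@ '{ print NF, $2 }'
# -> 3 b
```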
  • all that doesn't really make a difference -- try time gawk '$1=$1+$1' test >/dev/null; it's really the big unrealistic script that's blowing it up. Also notice that (at least the original awk) does not do splitting until the $1, ... fields are first used. – mosvy, 14 hours ago
2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes









2














awk has a wider feature set than sed, with a more flexible syntax. So it's not unreasonable that it'll take longer both to parse its scripts, and to execute them.



As your example command (the part inside the braces) never runs, the time-sensitive part should be your test expression.



awk



First, look at the test in the awk example:



NR==100001


and see the effects of that in gprof (GNU awk 4.0.1):




% cumulative self self total
time seconds seconds calls s/call s/call name
55.89 19.73 19.73 1 19.73 35.04 interpret
8.90 22.87 3.14 500000000 0.00 0.00 cmp_scalar
8.64 25.92 3.05 1000305023 0.00 0.00 free_wstr
8.61 28.96 3.04 500105014 0.00 0.00 mk_number
6.09 31.11 2.15 500000001 0.00 0.00 cmp_nodes
4.18 32.59 1.48 500200013 0.00 0.00 unref
3.68 33.89 1.30 500000000 0.00 0.00 eval_condition
2.21 34.67 0.78 500000000 0.00 0.00 update_NR


~50% of the time is spent in "interpret", the top-level loop to run the opcodes resulting from the parsed script.



Every time the test is run (ie. 5000 script lines * 100000 input lines), awk has to:



  • Fetch the built-in variable "NR" (update_NR).

  • Convert the string "100001" (mk_number).

  • Compare them (cmp_nodes, cmp_scalar, eval_condition).

  • Discard any temporary objects needed for the comparison (free_wstr, unref)

Other awk implementations won't have the exact same call flow, but they will still have to retrieve variables, automatically convert, then compare.



sed



By comparison, in sed, the "test" is much more limited. It can only be a single address, an address range, or nothing (when the command is the first thing on the line), and sed can tell from the first character whether it's an address or command. In the example, it's



100001


...a single numerical address. The profile (GNU sed 4.2.2) shows




% cumulative self self total
time seconds seconds calls s/call s/call name
52.01 2.98 2.98 100000 0.00 0.00 execute_program
44.16 5.51 2.53 1000000000 0.00 0.00 match_address_p
3.84 5.73 0.22 match_an_address_p
[...]
0.00 5.73 0.00 5000 0.00 0.00 in_integer


Again, ~50% of the time is in the top-level execute_program. In this case, it's called once per input line, then loops over the parsed commands. The loop starts with an address check.



The important comparison, match_address_p, is also called 2*5000*100000 times[^1], but it only compares integers that are already available (through structs and pointers).



The line numbers in the input script were parsed at compile-time (in_integer). That only has to be done once for each address number in the input, ie. 5000 times, and doesn't make a significant contribution to the overall running time.



[^1]: I'm not entirely sure why 2x yet, but the braces make a difference. When the lines read 100001p, it's only once per script*input line. 100001p uses 2x.






share|improve this answer

























  • JiggilyNaga, what was the command to get the output like that, please?

    – Tagwint
    11 hours ago












  • @Tagwint I recompiled awk and sed with profiling enabled, then used gprof (part of binutils). Though the large numbers meant I had to realign the columns manually.

    – JigglyNaga
    11 hours ago















2














awk has a wider feature set than sed, with a more flexible syntax. So it's not unreasonable that it'll take longer both to parse its scripts, and to execute them.



As your example command (the part inside the braces) never runs, the time-sensitive part should be your test expression.



awk



First, look at the test in the awk example:



NR==100001


and see the effects of that in gprof (GNU awk 4.0.1):




% cumulative self self total
time seconds seconds calls s/call s/call name
55.89 19.73 19.73 1 19.73 35.04 interpret
8.90 22.87 3.14 500000000 0.00 0.00 cmp_scalar
8.64 25.92 3.05 1000305023 0.00 0.00 free_wstr
8.61 28.96 3.04 500105014 0.00 0.00 mk_number
6.09 31.11 2.15 500000001 0.00 0.00 cmp_nodes
4.18 32.59 1.48 500200013 0.00 0.00 unref
3.68 33.89 1.30 500000000 0.00 0.00 eval_condition
2.21 34.67 0.78 500000000 0.00 0.00 update_NR


~50% of the time is spent in "interpret", the top-level loop to run the opcodes resulting from the parsed script.



Every time the test is run (ie. 5000 script lines * 100000 input lines), awk has to:



  • Fetch the built-in variable "NR" (update_NR).

  • Convert the string "100001" (mk_number).

  • Compare them (cmp_nodes, cmp_scalar, eval_condition).

  • Discard any temporary objects needed for the comparison (free_wstr, unref)

Other awk implementations won't have the exact same call flow, but they will still have to retrieve variables, automatically convert, then compare.



sed



By comparison, in sed, the "test" is much more limited. It can only be a single address, an address range, or nothing (when the command is the first thing on the line), and sed can tell from the first character whether it's an address or command. In the example, it's



100001


...a single numerical address. The profile (GNU sed 4.2.2) shows




% cumulative self self total
time seconds seconds calls s/call s/call name
52.01 2.98 2.98 100000 0.00 0.00 execute_program
44.16 5.51 2.53 1000000000 0.00 0.00 match_address_p
3.84 5.73 0.22 match_an_address_p
[...]
0.00 5.73 0.00 5000 0.00 0.00 in_integer


Again, ~50% of the time is in the top-level execute_program. In this case, it's called once per input line, then loops over the parsed commands. The loop starts with an address check.



The important comparison, match_address_p, is also called 2*5000*100000 times[^1], but it only compares integers that are already available (through structs and pointers).



The line numbers in the input script were parsed at compile-time (in_integer). That only has to be done once for each address number in the input, ie. 5000 times, and doesn't make a significant contribution to the overall running time.



[^1]: I'm not entirely sure why 2x yet, but the braces make a difference. When the lines read 100001p, it's only once per script*input line. 100001p uses 2x.






share|improve this answer

























  • JiggilyNaga, what was the command to get the output like that, please?

    – Tagwint
    11 hours ago












  • @Tagwint I recompiled awk and sed with profiling enabled, then used gprof (part of binutils). Though the large numbers meant I had to realign the columns manually.

    – JigglyNaga
    11 hours ago













2












2








2







awk has a wider feature set than sed, with a more flexible syntax. So it's not unreasonable that it'll take longer both to parse its scripts, and to execute them.



As your example command (the part inside the braces) never runs, the time-sensitive part should be your test expression.



awk



First, look at the test in the awk example:



NR==100001


and see the effects of that in gprof (GNU awk 4.0.1):




% cumulative self self total
time seconds seconds calls s/call s/call name
55.89 19.73 19.73 1 19.73 35.04 interpret
8.90 22.87 3.14 500000000 0.00 0.00 cmp_scalar
8.64 25.92 3.05 1000305023 0.00 0.00 free_wstr
8.61 28.96 3.04 500105014 0.00 0.00 mk_number
6.09 31.11 2.15 500000001 0.00 0.00 cmp_nodes
4.18 32.59 1.48 500200013 0.00 0.00 unref
3.68 33.89 1.30 500000000 0.00 0.00 eval_condition
2.21 34.67 0.78 500000000 0.00 0.00 update_NR


~50% of the time is spent in "interpret", the top-level loop to run the opcodes resulting from the parsed script.



Every time the test is run (ie. 5000 script lines * 100000 input lines), awk has to:



  • Fetch the built-in variable "NR" (update_NR).

  • Convert the string "100001" (mk_number).

  • Compare them (cmp_nodes, cmp_scalar, eval_condition).

  • Discard any temporary objects needed for the comparison (free_wstr, unref)

Other awk implementations won't have the exact same call flow, but they will still have to retrieve variables, automatically convert, then compare.



sed



By comparison, in sed, the "test" is much more limited. It can only be a single address, an address range, or nothing (when the command is the first thing on the line), and sed can tell from the first character whether it's an address or command. In the example, it's



100001


...a single numerical address. The profile (GNU sed 4.2.2) shows




% cumulative self self total
time seconds seconds calls s/call s/call name
52.01 2.98 2.98 100000 0.00 0.00 execute_program
44.16 5.51 2.53 1000000000 0.00 0.00 match_address_p
3.84 5.73 0.22 match_an_address_p
[...]
0.00 5.73 0.00 5000 0.00 0.00 in_integer


Again, ~50% of the time is in the top-level execute_program. In this case, it's called once per input line, then loops over the parsed commands. The loop starts with an address check.



The important comparison, match_address_p, is also called 2*5000*100000 times[^1], but it only compares integers that are already available (through structs and pointers).



The line numbers in the input script were parsed at compile-time (in_integer). That only has to be done once for each address number in the input, ie. 5000 times, and doesn't make a significant contribution to the overall running time.



[^1]: I'm not entirely sure why 2x yet, but the braces make a difference. When the lines read 100001p, it's only once per script*input line. 100001p uses 2x.






share|improve this answer















awk has a wider feature set than sed, with a more flexible syntax. So it's not unreasonable that it'll take longer both to parse its scripts, and to execute them.



As your example command (the part inside the braces) never runs, the time-sensitive part should be your test expression.



awk



First, look at the test in the awk example:



NR==100001


and see the effects of that in gprof (GNU awk 4.0.1):




% cumulative self self total
time seconds seconds calls s/call s/call name
55.89 19.73 19.73 1 19.73 35.04 interpret
8.90 22.87 3.14 500000000 0.00 0.00 cmp_scalar
8.64 25.92 3.05 1000305023 0.00 0.00 free_wstr
8.61 28.96 3.04 500105014 0.00 0.00 mk_number
6.09 31.11 2.15 500000001 0.00 0.00 cmp_nodes
4.18 32.59 1.48 500200013 0.00 0.00 unref
3.68 33.89 1.30 500000000 0.00 0.00 eval_condition
2.21 34.67 0.78 500000000 0.00 0.00 update_NR


~50% of the time is spent in "interpret", the top-level loop to run the opcodes resulting from the parsed script.



Every time the test is run (ie. 5000 script lines * 100000 input lines), awk has to:



  • Fetch the built-in variable "NR" (update_NR).

  • Convert the string "100001" (mk_number).

  • Compare them (cmp_nodes, cmp_scalar, eval_condition).

  • Discard any temporary objects needed for the comparison (free_wstr, unref)

Other awk implementations won't have the exact same call flow, but they will still have to retrieve variables, automatically convert, then compare.



sed



By comparison, in sed, the "test" is much more limited. It can only be a single address, an address range, or nothing (when the command is the first thing on the line), and sed can tell from the first character whether it's an address or command. In the example, it's



100001


...a single numerical address. The profile (GNU sed 4.2.2) shows




% cumulative self self total
time seconds seconds calls s/call s/call name
52.01 2.98 2.98 100000 0.00 0.00 execute_program
44.16 5.51 2.53 1000000000 0.00 0.00 match_address_p
3.84 5.73 0.22 match_an_address_p
[...]
0.00 5.73 0.00 5000 0.00 0.00 in_integer


Again, ~50% of the time is in the top-level execute_program. In this case, it's called once per input line, then loops over the parsed commands. The loop starts with an address check.



The important comparison, match_address_p, is also called 2*5000*100000 times[^1], but it only compares integers that are already available (through structs and pointers).



The line numbers in the input script were parsed at compile time (in_integer). That only has to be done once for each address number in the input, i.e. 5000 times, and doesn't make a significant contribution to the overall running time.
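The same test on the sed side, in miniature — the address is compiled to an integer once, and matching it per line is a plain counter comparison:

```shell
# sed parses the address "2" once, at compile time; on each input line the
# match is an integer compare against sed's internal line counter.
printf 'a\nb\nc\n' | sed -n '2{p}'    # prints: b
```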



[^1]: I'm not entirely sure why it's 2× yet, but the braces make a difference: when the lines read 100001p, the check runs only once per script line × input line; with braces, 100001{p}, it runs twice.







edited 11 hours ago

answered 12 hours ago

JigglyNaga
  • JigglyNaga, what was the command to get the output like that, please?

    – Tagwint
    11 hours ago












  • @Tagwint I recompiled awk and sed with profiling enabled, then used gprof (part of binutils). Though the large numbers meant I had to realign the columns manually.

    – JigglyNaga
    11 hours ago
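(For reference, that gprof workflow looks roughly like this — a sketch only; the directory layout and the script/input file names below are assumptions, not the exact commands used:)

```shell
# Build with profiling instrumentation (-pg), run the workload, then read
# the flat profile from the gmon.out the instrumented binary writes.
./configure CFLAGS='-O2 -pg' LDFLAGS='-pg'
make
./sed/sed -n -f big-script.sed input.txt > /dev/null   # writes ./gmon.out
gprof ./sed/sed gmon.out | head -n 20                  # flat profile, per function
```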
Actually the above script is not a no-op for awk:

Even if you do not use the contents of the fields, according to the GAWK manual, for each record that is read the following steps are performed regardless:

  • scanning for all occurrences of the FS

  • field splitting

  • updating the NF variable

If you are not using this information, it just gets discarded afterwards.

If a field separator does not occur within the record, awk still has to assign the text to $0 (and in your case to $1, too), and set NF to the actual number of fields obtained (1 in the sample above).
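Those record-processing steps are easy to observe from awk itself — a small illustration:

```shell
# NF is set for every record, whether or not the program reads the fields.
echo 'one two three' | awk '{print NF}'   # prints: 3
echo 'no-separators' | awk '{print NF}'   # prints: 1 (no FS in the record)
```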
  • all that doesn't really make a difference -- try time gawk '$1=$1+$1' test >/dev/null; it's really the big unrealistic script that's blowing it up. Also notice that (at least the original awk) does not do splitting until the $1, ... fields are first used.

    – mosvy
    14 hours ago
edited 14 hours ago

answered 14 hours ago

jf1