C++ much faster than Bash script writing to text file


I wanted to test the performance of writing to a file in a bash script vs a C++ program.



Here is the bash script:



#!/bin/bash

while true; do
    echo "something" >> bash.txt
done


This added about 2-3 KB to the text file per second.



Here is the C++ code:



#include <iostream>
#include <fstream>

using namespace std;

int main() {
    ofstream myfile;
    myfile.open("cpp.txt");

    while (true) {
        myfile << "Writing this to a file Writing this to a file\n";
    }

    myfile.close();
}


This created a ~6 GB text file in less than 10 seconds.



What makes this C++ code so much faster, and/or this bash script so much slower?
c++ linux bash

asked Jul 4 '17 at 18:31 by obl, edited Nov 12 at 15:45

  • Just guessing here but I'd say the main difference is that bash opens and closes the file each iteration while C++ doesn't. Try moving open() and close() inside the loop in C++ to have a fair performance comparison (you'll need to pass ios::app to open).
    – IlBeldus
    Jul 4 '17 at 18:34

  • Or, put the redirection on the loop in the shell script: while true; do ...; done >> bash.txt.
    – chepner
    Jul 4 '17 at 18:35

  • Confirmed using strace that my bash opens and closes the bash.txt file every time.
    – aschepler
    Jul 4 '17 at 18:40

  • @obl It is related to your question in that it is a comment on the overabundance of unnecessary code in it. Unless you get paid by lines of code, you could take it as useful information, knowledge that may help you write more concise code in the future.
    – juanchopanza
    Jul 4 '17 at 18:42

  • See how a stupid little program like this compares: #include <fstream> int main() { while (true) { std::ofstream myfile("cpp.txt", std::ios::app); myfile << "Writing this to a file Writing this to a file\n"; } }
    – user4581301
    Jul 4 '17 at 18:51

3 Answers

There are several reasons for it.



First off, interpreted execution environments (like bash and perl, along with non-JITed lua, python, etc.) are generally much slower than even poorly written compiled programs (C, C++, etc.).



Secondly, note how fragmented your bash code is - it just writes a line to the file, then writes one more, and so on. Your C++ program, on the other hand, performs buffered writes - even without any direct effort on your part. You can see how much slower it runs if you substitute

myfile << "Writing this to a file Writing this to a file\n";

with

myfile << "Writing this to a file Writing this to a file" << endl;

For more information about how streams are implemented in C++, and why \n is different from endl (endl also flushes the stream buffer), see any reference documentation on C++.
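
To make the buffering point concrete, here is a minimal timing sketch (my illustration, not part of the original answer; the file names and the one-million line count are made up) that writes the same line both ways - once relying on the stream buffer, and once forcing a flush with endl on every line:

#include <chrono>
#include <fstream>
#include <iostream>

// Times a single writer function and returns the elapsed seconds.
template <typename Fn>
double time_it(Fn writer) {
    auto start = std::chrono::steady_clock::now();
    writer();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main() {
    const int lines = 1000000;  // made-up line count for the comparison

    // Buffered: each << appends to the stream's internal buffer, and the
    // buffer is handed to the OS in large chunks.
    double buffered = time_it([&] {
        std::ofstream out("buffered.txt");
        for (int i = 0; i < lines; ++i)
            out << "Writing this to a file Writing this to a file\n";
    });

    // Flushed: std::endl writes '\n' and then flushes, so every single line
    // becomes its own write to the OS.
    double flushed = time_it([&] {
        std::ofstream out("flushed.txt");
        for (int i = 0; i < lines; ++i)
            out << "Writing this to a file Writing this to a file" << std::endl;
    });

    std::cout << "buffered: " << buffered << " s, flushed: " << flushed << " s\n";
}

On a typical system the flushed loop should come out many times slower, because every line becomes its own write to the operating system instead of being grouped into a few large buffered writes.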



Thirdly, as the comments confirm, your bash script opens and closes the target file for every line it writes. That alone implies a significant performance overhead - imagine myfile.open and myfile.close moved inside your loop body!
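
For illustration, here is a small sketch along the lines of user4581301's comment (an assumed variant, not the questioner's program) that reopens the file in append mode and closes it again for every single line - roughly what each echo "something" >> bash.txt costs the bash loop:

#include <fstream>

int main() {
    // Reopen, append one line, close - approximately what the bash loop
    // pays for every `echo "something" >> bash.txt`.
    for (int i = 0; i < 100000; ++i) {  // bounded count instead of an infinite loop
        std::ofstream myfile("cpp.txt", std::ios::app);
        myfile << "Writing this to a file Writing this to a file\n";
    }  // the destructor flushes and closes the file here, on every iteration
}

As obl notes in the comments below, a per-line reopen variant like this performs much like the bash script, which supports the open/close explanation.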






answered Jul 4 '17 at 18:37 by iehrlich (edited Jul 4 '17 at 23:47)

  • Flushing the performance down the drain is a great start. The next step is to open the file for append and close it on every loop. Should get even closer.
    – user4581301
    Jul 4 '17 at 18:42

  • @user4581301 yeah, I thought about it (see edit), but was not quite sure - not an expert in bash :)
    – iehrlich
    Jul 4 '17 at 18:43

  • IIRC, bash lines must be translated/"built" to native every time. This is not true of perl, which is compiled only once, or python, which is compiled to byte code. Bash won't build a line until it's about to run it, while perl builds everything at the beginning, etc.
    – code_dredd
    Jul 4 '17 at 19:05

  • Running it with 'endl' instead of '\n' did make it significantly slower but still faster than the bash script. Running the code posted by @user4581301, the performance was very similar to the performance of the bash script.
    – obl
    Jul 4 '17 at 19:45

  • "... interpreted execution environments (like ... python ..." - Is it though? CPython, the default Python implementation, compiles the Python source to bytecode, which is run in a VM (which some call the interpreter, and that makes things even more confusing). I'm not intimately familiar with Perl, but I wouldn't be surprised if it employed a similar construction. I think purely interpreted language implementations are quite rare nowadays. Though I'm pretty sure Unix shells still are.
    – marcelm
    Jul 4 '17 at 22:49

As others have already pointed out, this is because you are currently opening and closing the file with each line you write in your script (and shell scripts are interpreted while C++ is compiled). You could batch the writes instead and open the file only once, for example

MSG="something"
logfile="test.txt"
(
    for i in {1..10000}; do
        echo $MSG
    done
) >> $logfile

This will write the message 10k times but only open the log once.






answered Jul 5 '17 at 3:33 by Elliott Frisch (edited Jul 5 '17 at 3:42)

  • echo is a bash builtin
    – Basile Starynkevitch
    Jul 5 '17 at 3:37

  • @BasileStarynkevitch Fair enough. It's late here. And that was really tangential so I removed it.
    – Elliott Frisch
    Jul 5 '17 at 3:43

Compiled vs. Interpreted Languages



Bash is interpreted while C++ is compiled. Just that makes it a lot faster.






answered Jul 4 '17 at 18:36 by Reece Ward

  • Sometimes. And sometimes the interpreted language has nifty little instructions so tightly optimized that they blow expectations right out of the water.
    – user4581301
    Jul 4 '17 at 18:38

  • @user4581301 Well, technically they are not interpreted at this point, but JIT/AOT-compiled ;)
    – iehrlich
    Jul 4 '17 at 18:44

  • No... Bash is interpreted, and yes it can be fast, but you still have to interpret it, so it is always going to be somewhat slower. You could compile it, but that is not what we are talking about.
    – Reece Ward
    Jul 4 '17 at 18:47

  • @iehrlich even without jitting you sometimes run across "Holy Smurf!". Old matlab is a good example. The script is slow, but the code backing the script has some serious pep in its step.
    – user4581301
    Jul 4 '17 at 18:48

  • Interpreted/JIT/compiled isn't especially relevant in this case, since the I/O is the bottleneck. CPU usage is going to be sitting below 1% for the entire duration of the program, so it won't really matter that the C++ version is faster during that 1%. iehrlich's answer is right; the problem is that the bash script opens the file anew every time it prints a line, while the C++ version keeps it open until it's done.
    – Ray
    Jul 4 '17 at 23:05