Write values into CSV during script execution
I have a simple script that reads values from one csv, runs some internal function on them that takes 2-3 seconds each time, and then writes the results into another csv file.
Here is what it looks like, minus the internal function I referenced.
import csv
import time

pause = 3

with open('input.csv', mode='r') as input_file, \
        open('output.csv', mode='w') as output_file:
    input_reader = csv.DictReader(input_file)
    output_writer = csv.writer(output_file, delimiter=',', quotechar='"',
                               quoting=csv.QUOTE_MINIMAL)
    count = 1
    for row in input_reader:
        row['new_value'] = "result from function that takes time"
        output_writer.writerow(row.values())
        print('Processed row: ' + str(count))
        count = count + 1
        time.sleep(pause)
The problem I face is that the output.csv file remains blank until everything is finished executing. I'd like to access and make use of the file elsewhere whilst this long script runs. Is there a way I can prevent the delay in writing the values into output.csv?
Edit: here is a dummy CSV file for the script above:
value
43t34t34t
4r245r243
2q352q352
gergmergre
435q345q35
Tags: python
Have you thought of creating a string object to which you append the new rows, and which you then later write to the file? – Modelmat, Nov 12 at 6:57
Putting an output_file.flush() after the output_writer.writerow() call might do the trick. – martineau, Nov 12 at 7:59
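As martineau's comment suggests, flushing the file object after every row pushes the buffered data out so other readers can see it while the script is still running. A minimal sketch of that idea, using hypothetical in-memory rows in place of the question's input.csv:

```python
import csv

# Hypothetical rows standing in for the processed rows of input.csv
rows = [{"value": "a", "new_value": "1"},
        {"value": "b", "new_value": "2"}]

with open("output.csv", mode="w", newline="") as output_file:
    output_writer = csv.writer(output_file, quoting=csv.QUOTE_MINIMAL)
    for row in rows:
        output_writer.writerow(row.values())
        output_file.flush()  # push the buffered row out to the OS immediately
```

Each row is on disk right after its writerow() call, so another process tailing output.csv sees it without waiting for the script to finish.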
edited Nov 12 at 7:55 by martineau
asked Nov 12 at 6:54 by Jack Robson
1 Answer
I think you want to look at the buffering option - this is what controls how often Python flushes to a file. Specifically, setting open('name', 'wb', buffering=0) will reduce buffering to a minimum, but maybe you want to set it to something else that makes sense. From the open() documentation:
buffering is an optional integer used to set the buffering policy. Pass 0 to switch buffering off (only allowed in binary mode), 1 to select line buffering (only usable in text mode), and an integer > 1 to indicate the size in bytes of a fixed-size chunk buffer. When no buffering argument is given, the default buffering policy works as follows:
- Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying device's "block size" and falling back on io.DEFAULT_BUFFER_SIZE. On many systems, the buffer will typically be 4096 or 8192 bytes long.
- "Interactive" text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above for binary files.
See also: How often does Python flush to a file?
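Note that buffering=0 is only allowed in binary mode, while csv.writer needs a text-mode file, so for this script line buffering (buffering=1) is the closer fit: every newline-terminated row reaches the file as soon as it is written, with no explicit flush() call. A sketch with placeholder data:

```python
import csv

# Hypothetical row standing in for one processed input row
rows = [{"value": "x", "new_value": "done"}]

# buffering=1 selects line buffering in text mode: each row ends in a
# newline, so it is flushed to the file as soon as writerow() returns.
with open("output.csv", mode="w", buffering=1, newline="") as output_file:
    output_writer = csv.writer(output_file, quoting=csv.QUOTE_MINIMAL)
    for row in rows:
        output_writer.writerow(row.values())
```

This keeps the loop body unchanged; only the open() call differs from the question's script.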
edited Nov 12 at 7:38
answered Nov 12 at 7:01 by kabanus