CSV Encoding Broken when Downloading from S3
I'm trying to download a CSV file from S3 using the Go AWS SDK, but the contents come back with what looks like broken encoding and the whole file is parsed as a single record (one slice).
input := &s3.GetObjectInput{
    Bucket:                  aws.String(bucket),
    Key:                     aws.String(key),
    ResponseContentType:     aws.String("text/csv"),
    ResponseContentEncoding: aws.String("utf-8"),
}

object, err := s3.New(s).GetObject(input)
if err != nil {
    var obj s3.GetObjectOutput
    return &obj, err
}
defer object.Body.Close()

lines, err := csv.NewReader(object.Body).ReadAll()
if err != nil {
    log.Fatal(err)
}
log.Printf("%q", lines[0])
// returns ["\ufeffH1" "H2\r" "field1" "field2\r" "field1" "field2\r00602"]
I'm guessing this is an incorrect character encoding, but the problem is that I'm not sure which encoding it actually is. When I put the file, I specify CSV as the content type.
I would have expected each record to come back as its own string slice, e.g. the header row printed as:
["H1" "H2"]
Any advice?
Approach 2
buffer := new(bytes.Buffer)
buffer.ReadFrom(object.Body)
str := buffer.String()

lines, err := csv.NewReader(strings.NewReader(str)).ReadAll()
if err != nil {
    log.Fatal(err)
}
log.Printf("length: %v", len(lines))
// still one line
Approach 3
My new approach is to manually strip out the problematic byte sequences. This feels pretty terrible, and the Go docs on this could use some work. It gets me closer, but now I have to split on newlines myself and then split again on commas.
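For illustration, a minimal sketch of what that manual cleanup could look like, assuming the only problem bytes are the UTF-8 byte order mark and the bare carriage returns visible in the dump under the edit below:

buf := new(bytes.Buffer)
if _, err := buf.ReadFrom(object.Body); err != nil {
    log.Fatal(err)
}

// Drop the leading BOM and turn bare CRs into LFs so that encoding/csv
// sees one record per line instead of one giant record.
s := strings.TrimPrefix(buf.String(), "\ufeff")
s = strings.Replace(s, "\r", "\n", -1)

lines, err := csv.NewReader(strings.NewReader(s)).ReadAll()
if err != nil {
    log.Fatal(err)
}
log.Printf("records: %v", len(lines))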
Edit
When I print out the bytes it looks like:
"ufeffH1,H2r,field1,field2r
I have tried using the following encodings: utf-8, iso-8859-1, iso-8859-1:utf-8.
csv go amazon-s3
try csv.NewReader(strings.NewReader(object.Body))
– Mark
Nov 11 at 22:49
hmm, I get a compile error: cannot use object.Body (type io.ReadCloser) as type string in argument to strings.NewReader
– user3162553
Nov 11 at 22:52
1) S3 returns the object contents as-is. The ResponseContentEncoding and ResponseContentType options set the response headers; they do not specify any transformation of the data. 2) It looks like the response starts with a byte order mark. You will need to skip over that. 3) It looks like lines are separated by \r. To aid debugging, please dump the raw body data as suggested to you here.
– ThunderCat
Nov 11 at 23:52
@ThunderCat updated. It's a string with the weird encodings.
– user3162553
Nov 12 at 0:10
Thank you for showing the actual data as was asked in comments on your previous question. The file uploaded to S3 is not encoded per the CSV Encoding RFC. Specifically, the file uses CR to separate lines instead of CRLF. Because the encoding/csv reader handles LF separators, the easiest workaround is to translate CR to LF in the input. Also, the file contains a byte order mark. Skip over those bytes before decoding.
– ThunderCat
Nov 12 at 0:42
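Putting that suggestion into code, a rough sketch (not the commenter's code, just one way to skip a UTF-8 BOM and map bare CR to LF before handing the stream to encoding/csv; assumes bufio, encoding/csv and io are imported):

// crToLF rewrites bare CR bytes to LF so encoding/csv, which understands
// LF and CRLF line endings, can split the records. Assumes the file uses
// bare CR as the record separator, as in the dump shown in the question.
type crToLF struct{ r io.Reader }

func (c crToLF) Read(p []byte) (int, error) {
    n, err := c.r.Read(p)
    for i := 0; i < n; i++ {
        if p[i] == '\r' {
            p[i] = '\n'
        }
    }
    return n, err
}

func readCSV(body io.Reader) ([][]string, error) {
    br := bufio.NewReader(body)
    // Skip a UTF-8 byte order mark if one is present.
    if r, _, err := br.ReadRune(); err == nil && r != '\ufeff' {
        br.UnreadRune() // not a BOM, put the rune back
    }
    return csv.NewReader(crToLF{br}).ReadAll()
}

With the question's setup this would be called as records, err := readCSV(object.Body) after the GetObject call.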