Match patterns from one file with awk not working
I want to match strings from a pattern file against a source file.
pattern_list.txt has 139k lines; Source.txt has more than 5 million lines.
If I use grep like this, it takes 2 seconds to get the output:
grep -F -f pattern_list.txt Source.txt > Output.txt
But if I try this awk script, it gets stuck, and after 10 minutes I have to stop it because nothing happens:
awk 'NR==FNR {a[$1]; next} {
for (i in a) if ($0 ~ i) print $0
}' FS=, OFS=, pattern_list.txt Source.txt > Output.txt
pattern_list.txt looks like this:
21051
99888
95746
and Source.txt like this:
72300,2,694
21051,1,694
63143,3,694
25223,2,694
99888,8,694
53919,2,694
51059,2,694
What is wrong with my awk script?
I'm running on Cygwin in Windows.
Tags: awk, pattern-matching
Another approach: join -t "," <(sort pattern_list) <(sort source.txt)
– Cyrus
Nov 10 at 21:10
Possible duplicate of Fastest way to find lines of a file from another larger file in Bash
– codeforester
Nov 11 at 7:06
@codeforester Hi, I was asking more about why my awk script was so slow than about the fastest way to do it in perl, grep, bash, or other tools.
– Ger Cas
Nov 11 at 12:44
Since your awk code is trying to do exactly what the accepted answer in the linked post does, I considered it a duplicate, or at least related.
– codeforester
Nov 11 at 17:23
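Cyrus's join suggestion above can be spelled out end-to-end on the question's sample data. Two caveats worth knowing: join requires both inputs to be sorted on the join field, and it emits matches in sorted order rather than the original file order. This sketch recreates the sample files inline just for demonstration:

```shell
# Recreate the sample files from the question.
printf '21051\n99888\n95746\n' > pattern_list.txt
printf '72300,2,694\n21051,1,694\n63143,3,694\n25223,2,694\n99888,8,694\n53919,2,694\n51059,2,694\n' > source.txt

# join matches on the first comma-separated field of each sorted input
# (process substitution here assumes bash).
join -t "," <(sort pattern_list.txt) <(sort source.txt)
# prints:
# 21051,1,694
# 99888,8,694
```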
asked Nov 10 at 20:44 by Ger Cas, edited Nov 11 at 7:06 by codeforester
2 Answers
Accepted answer (2 votes), answered Nov 10 at 21:22 by karakfa, edited Nov 10 at 23:55
If you are doing a literal match, this should be faster than your approach:
$ awk -F, 'NR==FNR{a[$0]; next} $1 in a{print $1,$3,$8,$20}' pattern_list source > output
However, I think sort/join will still be faster than grep and awk.
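To see why this is so much faster: the original script runs a regex test against all 139k patterns for every source line, while this version does a single hash lookup per line. Here it is run on the question's sample data (the $3, $8 and $20 fields in the answer refer to the asker's real, wider file; the three-field sample just prints whole matching lines):

```shell
# Recreate the question's sample files, purely for demonstration.
printf '21051\n99888\n95746\n' > pattern_list.txt
printf '72300,2,694\n21051,1,694\n63143,3,694\n25223,2,694\n99888,8,694\n53919,2,694\n51059,2,694\n' > source.txt

# First pass stores each pattern as an array key; second pass does one
# O(1) membership test on field 1 per line instead of 139k regex matches.
awk -F, 'NR==FNR{a[$0]; next} $1 in a' pattern_list.txt source.txt
# prints:
# 21051,1,694
# 99888,8,694
```

Unlike the join approach, this preserves the original order of Source.txt and needs no sorting.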
Excellent. Now the execution time with your awk script is less than 4 seconds. But since my original source file has several fields, how do I tell your script to print only $3, $8 and $20 for the matched strings?
– Ger Cas
Nov 10 at 23:10
Just print the required fields; see the update.
– karakfa
Nov 10 at 23:56
Can't improve on karakfa's answer, but for grep vs awk performance tests see polydesmida.info/BASHing/2018-10-24.html
– user2138595
Nov 11 at 6:57
@karakfa Thanks a lot. It works exactly as I wanted.
– Ger Cas
Nov 11 at 12:36
@user2138595 Thanks for sharing the info. Yes, that matches what I understood in theory and practice: awk is the champion in speed.
– Ger Cas
Nov 11 at 12:37
Answer (2 votes), answered Nov 10 at 21:00 by Rafael
If increasing performance is your goal, you'll need to multithread this (awk is unlikely to be faster, and may well be slower).
If I were you, I'd partition the source file, then search each part:
$ split -l 100000 src.txt src_part
$ ls src_part* | xargs -n1 -P4 fgrep -f pat.txt > matches.txt
$ rm src_part*
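One hedged refinement of the approach above: with -P4, several fgrep processes share a single stdout, so in principle long lines can interleave mid-write. Giving each part its own output file (the ".out" suffix here is illustrative, not from the answer) sidesteps that, at the cost of one temp file per chunk:

```shell
# Partition the source, search the parts in parallel, then concatenate.
# "|| true" keeps xargs from reporting failure when a chunk has no match
# (grep exits 1 in that case).
split -l 100000 src.txt src_part
ls src_part* | xargs -n1 -P4 -I{} sh -c 'fgrep -f pat.txt "{}" > "{}.out" || true'
cat src_part*.out > matches.txt
rm src_part*   # removes both the chunks and their .out files
```

Note that matches.txt will be ordered by chunk, which for line-based split matches the original file order.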
Thanks for the answer, but as far as I know awk is faster than grep, so I don't understand what is happening here.
– Ger Cas
Nov 10 at 21:06
@GerCas I doubt that is true, as awk has to parse the script and then run it; grep, on the other hand, is heavily optimized for its purpose.
– Rafael
Nov 10 at 21:09