How can I find expected target phrases or keywords from a given sentence in Python?
I am wondering whether there is an efficient way to extract an expected target phrase or key phrase from a given sentence. So far I have tokenized each sentence and obtained the POS tag for each word, but I am not sure how to extract the target key phrase or keyword from there. The way of doing this is not intuitive to me.
Here is my input sentence list:
sentence_List= {"Obviously one of the most important features of any computer is the human interface.", "Good for everyday computing and web browsing.",
"My problem was with DELL Customer Service", "I play a lot of casual games online[comma] and the touchpad is very responsive"}
Here are the tokenized sentences:
from nltk.tokenize import word_tokenize

tokenized_sents = [word_tokenize(i) for i in sentence_List]  # one token list per sentence
tokenized = [i for i in tokenized_sents]                     # shallow copy of the token lists
Here I used spaCy to get the POS tag of each word:
import spacy

nlp = spacy.load('en_core_web_sm')

res = []
for sent in sentence_List:      # run the pipeline on each raw sentence
    doc = nlp(sent)
    for token in doc:
        res.append(token.pos_)  # collect the POS tag of every token
I could use NER (named entity recognition) from spaCy, but its output is not the same as my pre-defined expected target phrases. Does anyone know how to accomplish this task using either spaCy or the stanfordcorenlp module in Python? What would be an efficient solution to make this happen? Any idea? Thanks in advance :)
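(For illustration only, not part of the original question: a quick way to see why the NER output differs from the expected phrases is to print spaCy's entities next to its noun chunks, which tend to be much closer to the desired targets. This sketch reuses the nlp pipeline loaded above.)

for sent in sentence_List:
    doc = nlp(sent)
    print([ent.text for ent in doc.ents])              # named entities found by the model
    print([chunk.text for chunk in doc.noun_chunks])    # noun phrases, typically including spans like "the human interface"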
Desired output:
I want to get the list of target phrases from the respective sentences, as follows:
target_phraseList = ["human interface", "everyday computing", "DELL Customer Service", "touchpad"]
If I then pair my input sentence_List with the expected target phrases, my final desired output would look like this:
import pandas as pd

df = pd.Series(sentence_List, index=target_phraseList)  # sentences indexed by target phrase
df = pd.DataFrame(df)
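(Side note, not part of the original question: if the pairing is what matters, an arguably clearer way to build the same table is with explicit column names; the names "sentence" and "target_phrase" below are my own choice.)

import pandas as pd

# Assumes sentence_List and target_phraseList are plain lists of the same length.
df = pd.DataFrame({"sentence": sentence_List, "target_phrase": target_phraseList})
print(df)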
How can I get my expected target phrases from a given input sentence list by using spaCy? Any idea?
python nlp sentiment-analysis feature-extraction spacy
asked Nov 15 '18 at 23:10 by Andy.Jian
1 Answer
You can possibly do this with spaCy's PhraseMatcher.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load('en_core_web_sm')

matcher = PhraseMatcher(nlp.vocab)
matcher.add('DELL', None, nlp(u"DELL Customer Service"))   # register the phrase pattern
doc = nlp(u"My problem was with DELL Customer Service")
matches = matcher(doc)                                     # list of (match_id, start, end) tuples
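(Follow-up sketch, my addition rather than part of the answer: each match is a (match_id, start, end) tuple, so the matched text can be read back from the doc like this.)

for match_id, start, end in matches:
    span = doc[start:end]                          # the matched tokens as a Span
    print(nlp.vocab.strings[match_id], span.text)  # pattern label and matched text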
Would it be possible to make your code smarter? I picked the target phrase by reading the sentence myself, but how would Python get that phrase from the input sentence list? Any more thoughts? Is it possible to make the line matcher.add('DELL', None, nlp(u"DELL Customer Service")) smarter so that it applies to all sentences? Thank you in advance
– Andy.Jian Nov 16 '18 at 15:22
What do you mean by making the code smarter? An example?
– Pradip Pramanick Nov 19 '18 at 4:46
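(To illustrate what the comment is asking about, here is a hedged sketch, not from the original thread, that registers every expected phrase as a pattern and runs the matcher over the whole sentence list with nlp.pipe, using the same spaCy 2.x add() signature as the answer. Note it still assumes the target phrases are known in advance, which is the asker's real concern.)

from spacy.matcher import PhraseMatcher

# Sketch only: assumes nlp, sentence_List and target_phraseList from the question.
matcher = PhraseMatcher(nlp.vocab)
patterns = [nlp(phrase) for phrase in target_phraseList]
matcher.add('TARGET', None, *patterns)          # spaCy 2.x signature, as in the answer

extracted = []
for doc in nlp.pipe(sentence_List):             # process every sentence
    found = [doc[start:end].text for _, start, end in matcher(doc)]
    extracted.append(found)
print(extracted)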