Saving the duplicate rows in a separate dataframe












I am able to delete the duplicate rows using pandas:

df.drop_duplicates(subset=['issuer_id', 'hios_plan_identifier', 'group_or_individual_plan_type'])

As far as I know, it drops all the duplicates and keeps the first occurrence, which is the default behaviour.
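For completeness, here is a minimal runnable sketch of the call I am making. The dataframe literal is only sample data mirroring the example further down, and keep='first' just spells out the default:

import pandas as pd

# Sample data matching the example dataframe below (illustrative only).
df = pd.DataFrame({
    'issuer_id': [484, 484, 484, 484],
    'hios_plan_identifier': ['99806CAAUSJ-TMP'] * 4,
    'plan_year': [2018, 2018, 2018, 2018],
    'group_or_individual_plan_type': ['Group'] * 4,
})

# keep='first' (the default) keeps the first row of each duplicate group;
# keep='last' keeps the last one, and keep=False drops every duplicated row.
deduped = df.drop_duplicates(
    subset=['issuer_id', 'hios_plan_identifier', 'group_or_individual_plan_type'],
    keep='first',
)
print(deduped)  # only one of the four identical rows remains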



My requirement is that I want to save the dropped rows to another dataframe, checking for duplicates on a subset of columns.



I have my dataframe df,



   issuer_id hios_plan_identifier  plan_year group_or_individual_plan_type
0        484      99806CAAUSJ-TMP       2018                          Group
1        484      99806CAAUSJ-TMP       2018                          Group
2        484      99806CAAUSJ-TMP       2018                          Group
3        484      99806CAAUSJ-TMP       2018                          Group


I want to drop the duplicates from df (which will then have only 1 row) and save the dropped rows in another dataframe df1 (which will have 3 rows).










python pandas dataframe






edited Nov 16 '18 at 7:31 by themaster
asked Nov 16 '18 at 7:25 by themaster
  • All values in column hios_plan_identifier are unique, so no duplicates in the example. Please check it.
    – Sandeep Kadapa, Nov 16 '18 at 7:29

  • My bad.. updated
    – themaster, Nov 16 '18 at 7:32
1 Answer
Use duplicated to select the duplicate rows into df1, and then drop_duplicates on df:

subset_col = ['issuer_id', 'hios_plan_identifier', 'group_or_individual_plan_type']
df1 = df.loc[df.duplicated(subset=subset_col), :]
df = df.drop_duplicates(subset=subset_col)

print(df)
   issuer_id hios_plan_identifier  plan_year group_or_individual_plan_type
0        484      99806CAAUSJ-TMP       2018                          Group

print(df1)
   issuer_id hios_plan_identifier  plan_year group_or_individual_plan_type
1        484      99806CAAUSJ-TMP       2018                          Group
2        484      99806CAAUSJ-TMP       2018                          Group
3        484      99806CAAUSJ-TMP       2018                          Group
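A small variation on the same idea, in case you prefer a single pass: compute the boolean duplicated mask once and index with it and with its negation, so the two frames are guaranteed to be complements of each other (this assumes df and subset_col as defined above):

mask = df.duplicated(subset=subset_col, keep='first')  # True for every repeat after the first occurrence
df1 = df[mask]    # the 3 duplicate rows
df = df[~mask]    # the 1 remaining row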





answered Nov 16 '18 at 7:34 by Sandeep Kadapa
  • @themaster Glad to help.
    – Sandeep Kadapa, Nov 16 '18 at 10:22