Keeping rows that span several time ranges

I have a data frame (da) where each row has a timestamp in ascending order (the intervals between timestamps are random).



I want to keep the rows of da whose times fall within any of the intervals defined by two other vectors, first.times and second.times. Going down the two vectors in parallel, each pair defines an interval (min = first.times[i], max = second.times[i]); rows of da whose times fall inside one of those intervals are kept, and the rest are dropped.



The only way I've figured out how to do it is with a for loop, but it can take a while. Here's the code with some example data:



#Set start and end dates
date1 <- as.POSIXct(strptime('1970-01-01 00:00', format = '%Y-%m-%d %H:%M'))
date2 <- as.POSIXct(strptime('1970-01-05 23:00', format = '%Y-%m-%d %H:%M'))

#Interpolate 250000 dates in between (dates are set to random intervals)
dates <- c(date1 + cumsum(c(0, round(runif(250000, 20, 200)))), date2)

#Set up dataframe
da <- data.frame(dates = dates,
                 a = round(runif(1, 1, 10)),
                 b = rep(c('Hi', 'There', 'Everyone'), length.out = length(dates)))
head(da); dim(da)

#Set up vectors of time
first.times <- seq(date1,        #First time in sequence is date1
                   date2,        #Last time in sequence is date2
                   by = 13*60)   #Interval of 13 minutes between each time (13 min * 60 sec)

second.times <- first.times + 5*60 #Second time is 5 min * 60 seconds later
head(first.times); length(first.times)
head(second.times); length(second.times)

#Loop to obtain rows
subsetted.dates <- da[0,]
system.time(for(i in 1:length(first.times)){
  subsetted.dates <- rbind(subsetted.dates, da[da$dates >= first.times[i] & da$dates < second.times[i],])
})
   user  system elapsed
  2.590   0.825   3.520


I was wondering if there is a more efficient, faster way of doing what the for loop does. It runs quickly on this example dataset, but on my actual dataset a single run can take 45 seconds, and with 1000 runs to make, that adds up!



Any help will go a long way!



Thanks!

Tags: r, for-loop
asked Nov 14 '18 at 20:58 by Lalochezia; edited Nov 15 '18 at 17:05
2 Answers
          Never use rbind or cbind within a loop! This leads to excessive copying in memory; see Patrick Burns' The R Inferno, Circle 2: Growing Objects. Instead, build a list of data frames to rbind once outside the loop.



          Since you iterate element-wise over two equal-length vectors, consider mapply or its list wrapper, Map:



          df_list <- Map(function(f, s) da[da$dates >= f & da$dates < s,],
                         first.times, second.times)

          # EQUIVALENT CALL
          df_list <- mapply(function(f, s) da[da$dates >= f & da$dates < s,],
                            first.times, second.times, SIMPLIFY = FALSE)


          You can also record each window's bounds in the result, using transform to add columns:



          df_list <- Map(function(f, s) transform(da[da$dates >= f & da$dates < s,],
                                                  first_time = f, second_time = s),
                         first.times, second.times)


          From there, any of several solutions will row-bind the list of data frames into one:



          # BASE
          final_df <- do.call(rbind, df_list)

          # PLYR
          library(plyr)
          final_df <- rbind.fill(df_list)

          # DPLYR
          library(dplyr)
          final_df <- bind_rows(df_list)

          # DATA TABLE
          library(data.table)
          final_df <- rbindlist(df_list)


          Check benchmark examples here: Convert a list of data frames into one data frame
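          As a minimal, self-contained sketch of the Map pattern above (toy integer "timestamps" and made-up window bounds, not the question's POSIXct data):

```r
# Toy data: integer 'timestamps' 1..20 and three half-open [first, second) windows
da <- data.frame(dates = 1:20, b = letters[1:20])

first.times  <- c(1, 8, 15)
second.times <- c(3, 10, 17)   # each window is [first, second)

# One data frame per window, bound together once at the end
df_list  <- Map(function(f, s) da[da$dates >= f & da$dates < s, ],
                first.times, second.times)
final_df <- do.call(rbind, df_list)

final_df$dates   # 1 2 8 9 15 16
```

          The same df_list feeds any of the row-binding options shown above.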

          • Thanks Parfait, that's exactly what I needed. I was first introduced to for loops and I'm stuck in a loop (hah!) I can't seem to break out of. Did it take you a while to figure out how to use the apply() family of functions properly/dynamically? Thanks for the book as well.

            – Lalochezia
            Nov 15 '18 at 17:00











          • First, there is nothing wrong with for loops. You could still use the method to build a list of data frames (but rbind once outside loop). Second, the apply family are loops but more compact versions that return objects.

            – Parfait
            Nov 15 '18 at 19:04











          • And yes, indeed, the apply family did take some time to grasp, but it also taught me the elegance of the R language and R's object model of the vector: there are no scalars in R (only vectors of length one); a matrix is a vector with a dim attribute; a data frame is a list of equal-length vectors; etc.

            – Parfait
            Nov 15 '18 at 19:08
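          The comment's point — that a for loop is fine as long as the rbind happens once, outside the loop — can be sketched like this (again with toy data, not the question's dataset):

```r
da <- data.frame(dates = 1:20)
first.times  <- c(1, 8, 15)
second.times <- c(3, 10, 17)

# Preallocate a list, fill one element per window, bind once at the end
pieces <- vector("list", length(first.times))
for (i in seq_along(first.times)) {
  pieces[[i]] <- da[da$dates >= first.times[i] &
                    da$dates <  second.times[i], , drop = FALSE]
}
result <- do.call(rbind, pieces)
nrow(result)   # 6
```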

          Comparing to the original setup ...



          > subsetted.dates <- da[0,]
          > system.time(for(i in 1:length(first.times)){
          + subsetted.dates <- rbind(subsetted.dates, da[da$dates >= first.times[i] & da$dates < second.times[i],])
          + })
          user system elapsed
          3.97 0.35 4.33


          ... it is possible to get a slight performance improvement using lapply:



          > system.time({
          + subsetted.dates <- lapply(1:length(first.times),function(i) da[da$dates >= first.times[i] & da$dates < second.times[i],])
          + subsetted.dates <- do.call(rbind,subsetted.dates)
          + })
          user system elapsed
          3.37 0.26 3.75


          Changing the algorithm a bit: if you first build the row indices from a smaller working set (just the dates vector) and then subset da once, performance improves further:



          > system.time({
          + da_dates <- da$dates
          + da_inds <- lapply(1:length(first.times),function(i) which(da_dates >= first.times[i] & da_dates < second.times[i]))
          + subsetted.dates <- da[unlist(da_inds),]
          + })
          user system elapsed
          2.60 0.31 2.94


          Assuming that the time intervals are sorted in time order (in this case they already were) and that they do not overlap, the problem becomes even faster:



          system.time({
            da_date_back_order <- order(da$dates)  # maps sorted positions back to rows of da
            da_sorted_dates <- sort(da$dates)
            da_selected_dates <- rep(FALSE, length(da_sorted_dates))
            j <- 1
            for (i in 1:length(da_sorted_dates)) {
              if (da_sorted_dates[i] >= first.times[j] & da_sorted_dates[i] < second.times[j]) {
                da_selected_dates[i] <- TRUE
              } else if (da_sorted_dates[i] >= second.times[j]) {
                j <- j + 1
                if (j > length(second.times)) {
                  break
                }
              }
            }
            subsetted.dates <- da[da_date_back_order[da_selected_dates],]
          })

          user system elapsed
          0.98 0.00 1.01


          And if you are allowed to sort the original da data frame itself, the solution is faster still:



          system.time({
            da <- da[order(da$dates),]
            da_sorted_dates <- da$dates
            da_selected_dates <- rep(FALSE, length(da_sorted_dates))
            j <- 1
            for (i in 1:length(da_sorted_dates)) {
              if (da_sorted_dates[i] >= first.times[j] & da_sorted_dates[i] < second.times[j]) {
                da_selected_dates[i] <- TRUE
              } else if (da_sorted_dates[i] >= second.times[j]) {
                j <- j + 1
                if (j > length(second.times)) {
                  break
                }
              }
            }
            subsetted.dates <- da[da_selected_dates,]
          })

          user system elapsed
          0.63 0.00 0.63
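          Under the same assumptions (first.times sorted ascending, windows non-overlapping), base R's findInterval() can replace the explicit loop entirely: findInterval(x, v) returns, for each x, the index of the last element of the sorted vector v that is <= x, so each date's candidate window is found in one vectorized pass, and the date is kept exactly when it also lies below that window's end. A minimal sketch on toy data (the window sizes mirror the question's 13-minute spacing and 5-minute width; the sample offsets are made up):

```r
date1 <- as.POSIXct('1970-01-01 00:00', format = '%Y-%m-%d %H:%M', tz = 'UTC')
first.times  <- seq(date1, by = 13 * 60, length.out = 5)  # window starts, 13 min apart
second.times <- first.times + 5 * 60                      # window ends, 5 min later

dates <- date1 + c(0, 100, 400, 700, 800, 1000)  # sample timestamps (second offsets)

j    <- findInterval(dates, first.times)           # candidate window index per date
keep <- j >= 1 & dates < second.times[pmax(j, 1)]  # inside its candidate window?
sum(keep)   # 4  (offsets 0, 100, 800, 1000 fall inside a 5-minute window)
```

          On the question's 250,000 rows this is a single vectorized pass with no per-window loop at all, and da[keep, ] yields the subset directly.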

          (first answer) answered Nov 14 '18 at 21:22 by Parfait; edited Nov 14 '18 at 21:27













            • Thanks Parfait, that's exactly what I needed. I was first introduced to forloops and I'm stuck in a loop (hah!) I can't seem to break out of. Did it take you a while to figure out how to use the applied() functions properly/dynamically? Thanks for the book as well.

              – Lalochezia
              Nov 15 '18 at 17:00











            • First, there is nothing wrong with for loops. You could still use the method to build a list of data frames (but rbind once outside loop). Second, the apply family are loops but more compact versions that return objects.

              – Parfait
              Nov 15 '18 at 19:04











            • And yes, indeed, the apply family did take some time to grasp but taught me too the elegance of the R language and R's object model of the vector: no scalars in R (only a vector of one element); matrix (vector with dim attribute); data frames (list of equal length vectors), etc.

              – Parfait
              Nov 15 '18 at 19:08





















            • Thanks Parfait, that's exactly what I needed. I was first introduced to forloops and I'm stuck in a loop (hah!) I can't seem to break out of. Did it take you a while to figure out how to use the applied() functions properly/dynamically? Thanks for the book as well.

              – Lalochezia
              Nov 15 '18 at 17:00











            • First, there is nothing wrong with for loops. You could still use the method to build a list of data frames (but rbind once outside loop). Second, the apply family are loops but more compact versions that return objects.

              – Parfait
              Nov 15 '18 at 19:04











            • And yes, indeed, the apply family did take some time to grasp but taught me too the elegance of the R language and R's object model of the vector: no scalars in R (only a vector of one element); matrix (vector with dim attribute); data frames (list of equal length vectors), etc.

              – Parfait
              Nov 15 '18 at 19:08



















            Thanks Parfait, that's exactly what I needed. I was first introduced to forloops and I'm stuck in a loop (hah!) I can't seem to break out of. Did it take you a while to figure out how to use the applied() functions properly/dynamically? Thanks for the book as well.

            – Lalochezia
            Nov 15 '18 at 17:00





            Thanks Parfait, that's exactly what I needed. I was first introduced to forloops and I'm stuck in a loop (hah!) I can't seem to break out of. Did it take you a while to figure out how to use the applied() functions properly/dynamically? Thanks for the book as well.

            – Lalochezia
            Nov 15 '18 at 17:00













            First, there is nothing wrong with for loops. You could still use the method to build a list of data frames (but rbind once outside loop). Second, the apply family are loops but more compact versions that return objects.

            – Parfait
            Nov 15 '18 at 19:04





            First, there is nothing wrong with for loops. You could still use the method to build a list of data frames (but rbind once outside loop). Second, the apply family are loops but more compact versions that return objects.

            – Parfait
            Nov 15 '18 at 19:04













            And yes, indeed, the apply family did take some time to grasp but taught me too the elegance of the R language and R's object model of the vector: no scalars in R (only a vector of one element); matrix (vector with dim attribute); data frames (list of equal length vectors), etc.

            – Parfait
            Nov 15 '18 at 19:08







            And yes, indeed, the apply family did take some time to grasp but taught me too the elegance of the R language and R's object model of the vector: no scalars in R (only a vector of one element); matrix (vector with dim attribute); data frames (list of equal length vectors), etc.

            – Parfait
            Nov 15 '18 at 19:08















            0














            Comparing to the original setup ...



            > subsetted.dates <- da[0,]
            > system.time(for(i in 1:length(first.times)){
            + subsetted.dates <- rbind(subsetted.dates, da[da$dates >= first.times[i] & da$dates < second.times[i],])
            + })
            user system elapsed
            3.97 0.35 4.33


            ... it is possible to get a slight performance improvement using lapply:



            > system.time({
            + subsetted.dates <- lapply(1:length(first.times),function(i) da[da$dates >= first.times[i] & da$dates < second.times[i],])
            + subsetted.dates <- do.call(rbind,subsetted.dates)
            + })
            user system elapsed
            3.37 0.26 3.75


            Changing a bit the algorithm, if you first create index of dates with a bit smaller set of data and then apply it, that leads to even a better performance:



            > system.time({
            + da_dates <- da$dates
            + da_inds <- lapply(1:length(first.times),function(i) which(da_dates >= first.times[i] & da_dates < second.times[i]))
            + subsetted.dates <- da[unlist(da_inds),]
            + })
            user system elapsed
            2.60 0.31 2.94


            Suggesting that that the time intervals can be ordered in time order (in this case they were already in time order) and that they are not overlapping, the problem becomes even faster:



            system.time({ 
            da_date_order <- order(da$dates)
            da_date_back_order <- order(da$dates)
            da_sorted_dates <- sort(da$dates)
            da_selected_dates <- rep(FALSE,length(da_sorted_dates))
            j = 1
            for (i in 1:length(da_dates)) {
            if (da_sorted_dates[i] >= first.times[j] & da_sorted_dates[i] < second.times[j]) {
            da_selected_dates[i] <- TRUE
            } else if (da_sorted_dates[i] >= second.times[j]) {
            j = j + 1
            if (j > length(second.times)) {
            break
            }
            }
            }
            subsetted.dates <- da[da_date_back_order[da_selected_dates],]
            })

            user system elapsed
            0.98 0.00 1.01


            And if you allow sorting the original da dataset, then the solution is even faster:



            system.time({
            da <- da[order(da$dates),]
            da_sorted_dates <- da$dates
            da_selected_dates <- rep(FALSE, length(da_sorted_dates))
            j <- 1
            for (i in 1:length(da_sorted_dates)) {
              # advance past every interval that ends before this date
              while (j <= length(second.times) && da_sorted_dates[i] >= second.times[j]) {
                j <- j + 1
              }
              if (j > length(second.times)) {
                break
              }
              if (da_sorted_dates[i] >= first.times[j] && da_sorted_dates[i] < second.times[j]) {
                da_selected_dates[i] <- TRUE
              }
            }
            subsetted.dates <- da[da_selected_dates,]
            })

            user system elapsed
            0.63 0.00 0.63
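
            As long as the intervals stay sorted and non-overlapping, the selection can also be fully vectorized with base R's findInterval, avoiding the explicit loop entirely. A sketch, assuming the same da, first.times and second.times as above:

```r
# findInterval(x, v) returns, for each x, the index i with v[i] <= x < v[i+1]
# (0 if x < v[1]). With sorted, non-overlapping intervals this identifies the
# only candidate interval each date could fall into.
idx <- findInterval(da$dates, first.times)

# Keep a date only if its candidate interval exists and the date falls before
# that interval's end. pmax() guards against idx == 0, which would otherwise
# drop elements from second.times and misalign the comparison.
keep <- idx > 0 & da$dates < second.times[pmax(idx, 1L)]

subsetted.dates <- da[keep, ]
```

            Because findInterval is a single binary-search pass implemented in C, this should scale well to much larger da and interval vectors than the R-level loops above.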





                edited Nov 14 '18 at 22:07

























                answered Nov 14 '18 at 21:23









                Heikki






























