How to delete a large amount of data from a SQL table without logging?

I have a large data table with 10 million records in it.

What is the best way to run this delete query?

   Delete LargeTable where readTime < dateadd(MONTH,-7,GETDATE())

sql-server sql-server-2008 sql-optimization

asked Jun 13 '14 at 20:22 by user3107343, edited Jun 13 '14 at 20:55 by M.Ali

  • 4
    :) I am afraid that unless you are willing to write some sort of ETL to get all the rows with readTime >= dateadd(MONTH,-7,GETDATE()) into another table, then issue a TRUNCATE TABLE and put the data back using ETL, you will not be able to prevent it from writing to the log.
    – TMNT2014
    Jun 13 '14 at 20:27

  • Logging is an all-or-nothing function of having resilient transactions. It literally doesn't make sense to leave the log out for some operations but not others; otherwise the log is useless.
    – Erik Philips
    Jun 13 '14 at 20:40

  • 1
    Export the data you want to keep, truncate the table, then import it back in.
    – Bohemian
    Jun 13 '14 at 20:40

  • Another option would be to use a table variable, which is not logged. Store your readTime >= dateadd(MONTH,-7,GETDATE()) data in a table variable, then truncate the original table and copy the data back from the table variable. I would, however, keep a backup of the data in case something goes wrong and the table gets inadvertently truncated. :) And always do a test run of your script in a lower environment.
    – TMNT2014
    Jun 13 '14 at 20:40

  • Possible duplicate of How to efficiently delete rows while NOT using Truncate Table in a 500,000+ rows table
    – Academy of Programmer
    Aug 30 '17 at 7:24

11 Answers

accepted, 137 votes, answered Jun 13 '14 at 20:50 by M.Ali












  1. If you are deleting all the rows in that table, the simplest option is to truncate the table, something like

     TRUNCATE TABLE LargeTable
     GO

     TRUNCATE TABLE simply empties the table; you cannot use a WHERE clause to limit the rows being deleted, and no triggers will be fired.

  2. On the other hand, if you are deleting more than 80-90 percent of the data (say you have 11 million rows in total and want to delete 10 million), another way would be to insert the 1 million rows you want to keep into another staging table, truncate this large table, and insert those 1 million rows back. A sketch of this follows the answer.

  3. Or, if the permissions, views, or other objects that have this large table as their underlying table are not affected by dropping it, you can get this relatively small number of rows into another table, drop this table, create another table with the same schema, and import the rows back into this ex-large table.

  4. One last option I can think of is to change your database's recovery model to SIMPLE and then delete rows in smaller batches using a while loop, something like this:

     DECLARE @Deleted_Rows INT;
     SET @Deleted_Rows = 1;

     WHILE (@Deleted_Rows > 0)
     BEGIN
        -- Delete some small number of rows at a time
        DELETE TOP (10000) LargeTable
        WHERE readTime < dateadd(MONTH,-7,GETDATE())

        SET @Deleted_Rows = @@ROWCOUNT;
     END

And don't forget to change the recovery model back to FULL afterwards; I think you also have to take a full backup for the change of recovery model to be fully effective.
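
A rough sketch of option 2, assuming the rows you keep fit comfortably in a staging table and that no foreign keys reference LargeTable (the staging table name here is made up):

   -- copy the ~1 million rows you want to keep into a staging table
   SELECT *
   INTO dbo.LargeTable_Keep       -- hypothetical staging table
   FROM LargeTable
   WHERE readTime >= dateadd(MONTH,-7,GETDATE());

   -- empty the big table with minimal logging
   TRUNCATE TABLE LargeTable;

   -- put the kept rows back; if LargeTable has an identity column,
   -- list the columns explicitly and wrap this in SET IDENTITY_INSERT ON/OFF
   INSERT INTO LargeTable
   SELECT * FROM dbo.LargeTable_Keep;

   DROP TABLE dbo.LargeTable_Keep;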






  • 7
    Also remember that if you truncate a table, you cannot have any FKs associated with it.
    – HLGEM
    Feb 4 '15 at 15:26

  • 1
    But how can you be sure that you are deleting 80-90% of the data? Let's assume I only have a range of values that should be deleted, and I have a few tables. So I have to check each of them and calculate the percentage, and if it is around 30% I guess this method is not very effective... I'm trying to find an optimal solution for an unknown case.
    – Archont
    Nov 25 '16 at 12:41

  • 4
    @Archont An optimal solution for an unknown case: that's the dream, isn't it? Unfortunately you cannot cure every disease with any one pill; I have suggested some possible solutions for different scenarios. There is no silver bullet here, unfortunately.
    – M.Ali
    Nov 25 '16 at 15:54

  • 3
    One thing to note if choosing option 4: depending on how the table is used, it may be better to delete fewer than 5000 rows at a time to avoid lock escalation.
    – Daniel
    Dec 9 '16 at 14:01







  • If the count of records to delete is much bigger than the count of records that will remain in the table, I found that a simple SELECT INTO a temp table of the records that will stay, followed by dropping the original table and renaming the temp table, is much faster. Given that you don't use an identity Id as a foreign key somewhere.
    – Vladimir Bozic
    Feb 13 '17 at 7:20

59 votes, answered Feb 4 '15 at 15:04 by Francisco Goldenstein, edited May 17 '17 at 14:30 by Bort

@m-ali's answer is right, but keep in mind that the log could grow a lot if you don't commit the transaction after each chunk and perform a checkpoint. This is how I would do it, taking this article http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes as a reference, with performance tests and graphs:

DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;

WHILE (@Deleted_Rows > 0)
BEGIN
   BEGIN TRANSACTION

   -- Delete some small number of rows at a time
   DELETE TOP (10000) LargeTable
   WHERE readTime < dateadd(MONTH,-7,GETDATE())

   SET @Deleted_Rows = @@ROWCOUNT;

   COMMIT TRANSACTION
   CHECKPOINT -- for simple recovery model
END





  • 8
    I'd agree with this from practice. Small typo in that you would want to read the rowcount before committing the transaction and the checkpoint.
    – dove
    Nov 10 '15 at 14:38

31 votes, answered Nov 25 '16 at 10:05 by Bunkerbuster

You can also use GO + how many times you want to execute the same batch. (GO is a batch separator interpreted by client tools such as SSMS and sqlcmd rather than a T-SQL statement, which is why some editors flag it; GO 100 reruns the preceding batch 100 times.)

DELETE TOP (10000) [TARGETDATABASE].[SCHEMA].[TARGETTABLE]
WHERE readTime < dateadd(MONTH,-1,GETDATE());
-- how many times you want the query to repeat
GO 100





  • I like this; it is working for me. I accidentally inserted the same row into a table 26 million times and needed to delete all occurrences of it, which in one single delete statement ran out of memory on the server, so this is great. One question: will it stop mid-loop if it runs out of rows to delete?
    – ScottC
    Mar 20 '17 at 20:42

  • 1
    @ScottC, it is not a loop, it just repeats the query (batch-like), and if you run out of rows it cannot delete anything. But it will not stop; you will get something like (0 row(s) affected) if it runs out of rows to delete.
    – Bunkerbuster
    Mar 22 '17 at 8:02

  • 1
    Simple and cool way of running a query N times...
    – ManiMuthuPandi
    Nov 7 '17 at 9:32

  • 1
    From which MS SQL Server version is this GO xx syntax supposed to work? I get a "Could not find stored procedure ''" error. Without the GO command it works fine, though.
    – Abel
    Jun 25 at 22:53

  • 1
    Hmm, it seems like I can execute it, and it indeed runs multiple times, but in MS SQL Mgt Studio it shows the red squiggly line with the mentioned error (but an F5 run works then).
    – Abel
    Jun 25 at 23:15

8 votes

This variation of M.Ali's answer is working fine for me. It deletes some rows, clears the log, and repeats. I'm watching the log grow, drop, and start over.

DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;
WHILE (@Deleted_Rows > 0)
BEGIN
   -- Delete some small number of rows at a time
   DELETE TOP (100000) FROM InstallLog
   WHERE DateTime BETWEEN '2014-12-01' AND '2015-02-01'

   SET @Deleted_Rows = @@ROWCOUNT;

   DBCC SHRINKFILE (MobiControlDB_log, 0, TRUNCATEONLY);
END





  • This was very useful! I modified it to parameterize the number of rows to delete at a time, and also the WHERE clause. Works like a charm!
    – Shiva
    Apr 25 at 19:04

8 votes

@Francisco Goldenstein, just a minor correction. The COMMIT must come after you set the variable; otherwise the WHILE will be executed just once:

DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;

WHILE (@Deleted_Rows > 0)
BEGIN
   BEGIN TRANSACTION

   -- Delete some small number of rows at a time
   DELETE TOP (10000) LargeTable
   WHERE readTime < dateadd(MONTH,-7,GETDATE())

   SET @Deleted_Rows = @@ROWCOUNT;

   COMMIT TRANSACTION
   CHECKPOINT -- for simple recovery model
END





7 votes

If you are willing (and able) to implement partitioning, that is an effective technique for removing large quantities of data with little run-time overhead. Not cost-effective for a once-off exercise, though.
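
As a sketch of what the partition route can look like (all object names and boundary dates here are invented; switching requires an identical table structure on the same filegroup):

   -- a table partitioned by month on readTime
   CREATE PARTITION FUNCTION pf_ReadTime (datetime)
       AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');

   CREATE PARTITION SCHEME ps_ReadTime
       AS PARTITION pf_ReadTime ALL TO ([PRIMARY]);

   CREATE TABLE dbo.LargeTable
   (
       id       INT      NOT NULL,
       readTime DATETIME NOT NULL,
       CONSTRAINT pk_LargeTable PRIMARY KEY CLUSTERED (readTime, id)
   ) ON ps_ReadTime (readTime);

   -- an empty table with identical structure, on the same filegroup,
   -- to receive the partition being aged out
   CREATE TABLE dbo.LargeTable_Old
   (
       id       INT      NOT NULL,
       readTime DATETIME NOT NULL,
       CONSTRAINT pk_LargeTable_Old PRIMARY KEY CLUSTERED (readTime, id)
   ) ON [PRIMARY];

   -- switching the oldest partition out is a metadata-only operation,
   -- so almost nothing is written to the log
   ALTER TABLE dbo.LargeTable SWITCH PARTITION 1 TO dbo.LargeTable_Old;

   -- the aged-out rows can now be truncated away or the table simply dropped
   DROP TABLE dbo.LargeTable_Old;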






3 votes

You can delete small batches using a while loop, something like this:

DELETE TOP (10000) LargeTable
WHERE readTime < dateadd(MONTH,-7,GETDATE())

WHILE @@ROWCOUNT > 0
BEGIN
   DELETE TOP (10000) LargeTable
   WHERE readTime < dateadd(MONTH,-7,GETDATE())
END





3 votes

I was able to delete 19 million rows from my table of 21 million rows in a matter of minutes. Here is my approach.

If you have an auto-incrementing primary key on this table, and its values increase with readTime, then you can make use of it (a sketch follows the steps):

1. Get the minimum value of the primary key among the rows you want to keep, i.e. where readTime >= dateadd(MONTH,-7,GETDATE()). (Add an index on readTime if not already present; this index will be deleted along with the table in step 3 anyway.) Let's store it in a variable 'min_primary'.

2. Insert all the rows having primary key >= min_primary into a staging table (a memory table if the number of rows is not large).

3. Drop the large table.

4. Recreate the table. Copy all the rows from the staging table to the main table.

5. Drop the staging table.
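
A minimal sketch of those steps, assuming a hypothetical identity column id whose values increase with readTime (object names other than LargeTable are made up):

   DECLARE @min_primary INT;

   -- step 1: the smallest key among the rows to keep
   SELECT @min_primary = MIN(id)
   FROM LargeTable
   WHERE readTime >= dateadd(MONTH,-7,GETDATE());

   -- step 2: stage the rows to keep
   SELECT *
   INTO dbo.LargeTable_Staging
   FROM LargeTable
   WHERE id >= @min_primary;

   -- steps 3 and 4: the answer drops and recreates the table; truncating
   -- achieves the same emptying if you are keeping the table definition
   TRUNCATE TABLE LargeTable;

   INSERT INTO LargeTable          -- may need SET IDENTITY_INSERT ... ON
   SELECT * FROM dbo.LargeTable_Staging;

   -- step 5
   DROP TABLE dbo.LargeTable_Staging;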







1 vote

Another approach:

SET ROWCOUNT 1000 -- limit each DELETE to 1000 rows

DECLARE @DATE AS DATETIME = dateadd(MONTH,-7,GETDATE())

DELETE LargeTable WHERE readTime < @DATE
WHILE @@ROWCOUNT > 0
BEGIN
   DELETE LargeTable WHERE readTime < @DATE
END

SET ROWCOUNT 0 -- restore the default

Optional:

If the database is in the FULL recovery model, you can switch it to the SIMPLE recovery model so that log space is reused automatically:

ALTER DATABASE dbname SET RECOVERY SIMPLE;





0 votes

Shorter syntax:

SELECT 1
WHILE (@@ROWCOUNT > 0)
BEGIN
   DELETE TOP (10000) LargeTable
   WHERE readTime < dateadd(MONTH,-7,GETDATE())
END





0 votes

If you are using SQL Server 2016 or higher, and your table has partitions created on the column you are deleting by (for example, a timestamp column), then you can use this newer command to delete data by partition:

   TRUNCATE TABLE table_name WITH ( PARTITIONS ( { partition_number | range } [ , ...n ] ) )
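
For instance, assuming the expired months live in partitions 1 through 7 (the partition numbers here are made up), the call might look like:

   TRUNCATE TABLE dbo.LargeTable WITH (PARTITIONS (1 TO 7));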



This will delete the data in the selected partition(s) only. It should be the most efficient way to delete data from part of a table, since it is a minimally logged metadata operation and runs just as fast as a regular truncate, but without removing all the data from the table.

The drawback is that if your table is not set up with partitions, you need to go old school, delete the data with the regular approach, and then recreate the table with partitions so that you can do this in the future, which is what I did. I added the partition creation and deletion into the insertion procedure itself. I had a table with 500 million rows, so this was the only option to reduce the deletion time.

For more details, refer to the links below:
https://docs.microsoft.com/en-us/sql/t-sql/statements/truncate-table-transact-sql?view=sql-server-2017

SQL server 2016 Truncate table with partitions

Below is what I did first to delete the data, before I could recreate the table with partitions holding the required data. This query runs for days, during the specified time window, until the data is deleted.



   :connect <<ServerName>>
   USE <<DatabaseName>>

   SET NOCOUNT ON;
   DECLARE @Deleted_Rows INT;
   DECLARE @loopnum INT;
   DECLARE @msg VARCHAR(100);
   DECLARE @FlagDate DATETIME;
   SET @FlagDate = GETDATE() - 31;
   SET @Deleted_Rows = 1;
   SET @loopnum = 1;

   /*WHILE (GETDATE() < CONVERT(DATETIME, '2018-11-08 14:00:00.000', 120))
   BEGIN
       RAISERROR('WAIT for START', 0, 1) WITH NOWAIT
       WAITFOR DELAY '00:10:00'
   END*/
   RAISERROR('STARTING PURGE', 0, 1) WITH NOWAIT

   WHILE (1 = 1)
   BEGIN
       -- purge only during the allowed window (12:00-20:59)
       WHILE (@Deleted_Rows > 0 AND (DATEPART(hh, GETDATE()) >= 12 AND DATEPART(hh, GETDATE()) <= 20))
       BEGIN
           -- Delete some small number of rows at a time
           DELETE TOP (500000) dbo.<<table_name>>
           WHERE timestamp_column < CONVERT(DATETIME, @FlagDate, 102)

           SET @Deleted_Rows = @@ROWCOUNT;
           WAITFOR DELAY '00:00:01'
           SELECT @msg = 'ROWCOUNT' + CONVERT(VARCHAR, @Deleted_Rows);
           SET @loopnum = @loopnum + 1
           -- every 1000 batches, shrink the log and report progress
           IF @loopnum > 1000
           BEGIN
               BEGIN TRY
                   DBCC SHRINKFILE (N'<<databasename>>_log', 0, TRUNCATEONLY)
                   RAISERROR(@msg, 0, 1) WITH NOWAIT
               END TRY
               BEGIN CATCH
                   RAISERROR('DBCC SHRINK', 0, 1) WITH NOWAIT
               END CATCH
               SET @loopnum = 1
           END
       END
       WAITFOR DELAY '00:10:00'
   END
   SELECT GETDATE()





              share|improve this answer





















                Your Answer






                StackExchange.ifUsing("editor", function () {
                StackExchange.using("externalEditor", function () {
                StackExchange.using("snippets", function () {
                StackExchange.snippets.init();
                });
                });
                }, "code-snippets");

                StackExchange.ready(function() {
                var channelOptions = {
                tags: "".split(" "),
                id: "1"
                };
                initTagRenderer("".split(" "), "".split(" "), channelOptions);

                StackExchange.using("externalEditor", function() {
                // Have to fire editor after snippets, if snippets enabled
                if (StackExchange.settings.snippets.snippetsEnabled) {
                StackExchange.using("snippets", function() {
                createEditor();
                });
                }
                else {
                createEditor();
                }
                });

                function createEditor() {
                StackExchange.prepareEditor({
                heartbeatType: 'answer',
                convertImagesToLinks: true,
                noModals: true,
                showLowRepImageUploadWarning: true,
                reputationToPostImages: 10,
                bindNavPrevention: true,
                postfix: "",
                imageUploader: {
                brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
                contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
                allowUrls: true
                },
                onDemand: true,
                discardSelector: ".discard-answer"
                ,immediatelyShowMarkdownHelp:true
                });


                }
                });














                 

                draft saved


                draft discarded


















                StackExchange.ready(
                function () {
                StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f24213299%2fhow-to-delete-large-data-of-table-in-sql-without-log%23new-answer', 'question_page');
                }
                );

                Post as a guest















                Required, but never shown

























                11 Answers
                11






                active

                oldest

                votes








                11 Answers
                11






                active

                oldest

                votes









                active

                oldest

                votes






                active

                oldest

                votes








                up vote
                137
                down vote



                accepted












                1. If you are Deleting All the rows in that table the simplest option is to Truncate table, something like



                  TRUNCATE TABLE LargeTable
                  GO


                  Truncate table will simply empty the table, you cannot use WHERE clause to limit the rows being deleted and no triggers will be fired.



                2. On the other hand if you are deleting more than 80-90 Percent of the data, say if you have total of 11 Million rows and you want to delete 10 million another way would be to Insert these 1 million rows (records you want to keep) to another staging table. Truncate this Large table and Insert back these 1 Million rows.


                3. Or if permissions/views or other objects which has this large table as their underlying table doesnt get affected by dropping this table you can get these relatively small amount of the rows into another table drop this table and create another table with same schema and import these rows back into this ex-Large table.



                4. One last option I can think of is to change your database's Recovery Mode to SIMPLE and then delete rows in smaller batches using a while loop something like this..



                  DECLARE @Deleted_Rows INT;
                  SET @Deleted_Rows = 1;


                  WHILE (@Deleted_Rows > 0)
                  BEGIN
                  -- Delete some small number of rows at a time
                  DELETE TOP (10000) LargeTable
                  WHERE readTime < dateadd(MONTH,-7,GETDATE())

                  SET @Deleted_Rows = @@ROWCOUNT;
                  END



                and dont forget to change the Recovery mode back to full and I think you have to take a backup to make it fully affective (the change or recovery modes).






                share|improve this answer

















                • 7




                  Also rememeber that if you truncate a table, you cannot have any FKs assocaited with it.
                  – HLGEM
                  Feb 4 '15 at 15:26






                • 1




                  But how to be sure that you are deleting 80-90% of data? Let's assume i have only range of values that should be deleted. And i have a few tables. So i have to check every of them and calculate percentage, and if it around 30% i guess this method is not very effective... I'm trying to find optimal solution for unknown case.
                  – Archont
                  Nov 25 '16 at 12:41






                • 4




                  @Archont optimal solution for unknown case that’s the dream isn't it? Unfortunately you cannot cure every disease with any one pill; I have suggested some possible solutions for different scenarios. There is no sliver bullet here unfortunately.
                  – M.Ali
                  Nov 25 '16 at 15:54






                • 3




                  One thing to note if choosing option 4: Depending on the how the table is used, it may be a better option to delete less than 5000 rows at a time to avoid lock escalation.
                  – Daniel
                  Dec 9 '16 at 14:01












                • If the count of records to delete is much much bigger then records that will remain in the table, i found that simple select into temp table of the records that will stay in and drop original table and rename of the temp table is much faster. Given that you don't use identity Id foreign key somewhere.
                  – Vladimir Bozic
                  Feb 13 '17 at 7:20















                up vote
                137
                down vote



                accepted












                1. If you are Deleting All the rows in that table the simplest option is to Truncate table, something like



                  TRUNCATE TABLE LargeTable
                  GO


                  Truncate table will simply empty the table, you cannot use WHERE clause to limit the rows being deleted and no triggers will be fired.



                2. On the other hand if you are deleting more than 80-90 Percent of the data, say if you have total of 11 Million rows and you want to delete 10 million another way would be to Insert these 1 million rows (records you want to keep) to another staging table. Truncate this Large table and Insert back these 1 Million rows.


                3. Or if permissions/views or other objects which has this large table as their underlying table doesnt get affected by dropping this table you can get these relatively small amount of the rows into another table drop this table and create another table with same schema and import these rows back into this ex-Large table.



                4. One last option I can think of is to change your database's Recovery Mode to SIMPLE and then delete rows in smaller batches using a while loop something like this..



                  DECLARE @Deleted_Rows INT;
                  SET @Deleted_Rows = 1;


                  WHILE (@Deleted_Rows > 0)
                  BEGIN
                  -- Delete some small number of rows at a time
                  DELETE TOP (10000) LargeTable
                  WHERE readTime < dateadd(MONTH,-7,GETDATE())

                  SET @Deleted_Rows = @@ROWCOUNT;
                  END



                and dont forget to change the Recovery mode back to full and I think you have to take a backup to make it fully affective (the change or recovery modes).






                share|improve this answer

















                • 7




                  Also rememeber that if you truncate a table, you cannot have any FKs assocaited with it.
                  – HLGEM
                  Feb 4 '15 at 15:26






                • 1




                  But how to be sure that you are deleting 80-90% of data? Let's assume i have only range of values that should be deleted. And i have a few tables. So i have to check every of them and calculate percentage, and if it around 30% i guess this method is not very effective... I'm trying to find optimal solution for unknown case.
                  – Archont
                  Nov 25 '16 at 12:41






                • 4




                  @Archont optimal solution for unknown case that’s the dream isn't it? Unfortunately you cannot cure every disease with any one pill; I have suggested some possible solutions for different scenarios. There is no sliver bullet here unfortunately.
                  – M.Ali
                  Nov 25 '16 at 15:54






                • 3




                  One thing to note if choosing option 4: Depending on the how the table is used, it may be a better option to delete less than 5000 rows at a time to avoid lock escalation.
                  – Daniel
                  Dec 9 '16 at 14:01












                • If the count of records to delete is much much bigger then records that will remain in the table, i found that simple select into temp table of the records that will stay in and drop original table and rename of the temp table is much faster. Given that you don't use identity Id foreign key somewhere.
                  – Vladimir Bozic
                  Feb 13 '17 at 7:20













                up vote
                137
                down vote



                accepted







                up vote
                137
                down vote



                accepted








                1. If you are Deleting All the rows in that table the simplest option is to Truncate table, something like



                  TRUNCATE TABLE LargeTable
                  GO


                  Truncate table will simply empty the table, you cannot use WHERE clause to limit the rows being deleted and no triggers will be fired.



                2. On the other hand if you are deleting more than 80-90 Percent of the data, say if you have total of 11 Million rows and you want to delete 10 million another way would be to Insert these 1 million rows (records you want to keep) to another staging table. Truncate this Large table and Insert back these 1 Million rows.


                3. Or if permissions/views or other objects which has this large table as their underlying table doesnt get affected by dropping this table you can get these relatively small amount of the rows into another table drop this table and create another table with same schema and import these rows back into this ex-Large table.



                4. One last option I can think of is to change your database's Recovery Mode to SIMPLE and then delete rows in smaller batches using a while loop something like this..



                  DECLARE @Deleted_Rows INT;
                  SET @Deleted_Rows = 1;


                  WHILE (@Deleted_Rows > 0)
                  BEGIN
                  -- Delete some small number of rows at a time
                  DELETE TOP (10000) LargeTable
                  WHERE readTime < dateadd(MONTH,-7,GETDATE())

                  SET @Deleted_Rows = @@ROWCOUNT;
                  END



                and dont forget to change the Recovery mode back to full and I think you have to take a backup to make it fully affective (the change or recovery modes).






                share|improve this answer














                1. If you are Deleting All the rows in that table the simplest option is to Truncate table, something like



                  TRUNCATE TABLE LargeTable
                  GO


                  Truncate table will simply empty the table, you cannot use WHERE clause to limit the rows being deleted and no triggers will be fired.



                2. On the other hand if you are deleting more than 80-90 Percent of the data, say if you have total of 11 Million rows and you want to delete 10 million another way would be to Insert these 1 million rows (records you want to keep) to another staging table. Truncate this Large table and Insert back these 1 Million rows.


                3. Or if permissions/views or other objects which has this large table as their underlying table doesnt get affected by dropping this table you can get these relatively small amount of the rows into another table drop this table and create another table with same schema and import these rows back into this ex-Large table.



                4. One last option I can think of is to change your database's Recovery Mode to SIMPLE and then delete rows in smaller batches using a while loop something like this..



                  DECLARE @Deleted_Rows INT;
                  SET @Deleted_Rows = 1;


                  WHILE (@Deleted_Rows > 0)
                  BEGIN
                  -- Delete some small number of rows at a time
                  DELETE TOP (10000) LargeTable
                  WHERE readTime < dateadd(MONTH,-7,GETDATE())

                  SET @Deleted_Rows = @@ROWCOUNT;
                  END



                and dont forget to change the Recovery mode back to full and I think you have to take a backup to make it fully affective (the change or recovery modes).







                share|improve this answer












                share|improve this answer



                share|improve this answer










                answered Jun 13 '14 at 20:50









                M.Ali

                54.1k105991




                54.1k105991








                • 7




                  Also rememeber that if you truncate a table, you cannot have any FKs assocaited with it.
                  – HLGEM
                  Feb 4 '15 at 15:26






                • 1




                  But how to be sure that you are deleting 80-90% of data? Let's assume i have only range of values that should be deleted. And i have a few tables. So i have to check every of them and calculate percentage, and if it around 30% i guess this method is not very effective... I'm trying to find optimal solution for unknown case.
                  – Archont
                  Nov 25 '16 at 12:41






                • 4




                  @Archont optimal solution for unknown case that’s the dream isn't it? Unfortunately you cannot cure every disease with any one pill; I have suggested some possible solutions for different scenarios. There is no sliver bullet here unfortunately.
                  – M.Ali
                  Nov 25 '16 at 15:54






                • 3




                  One thing to note if choosing option 4: Depending on the how the table is used, it may be a better option to delete less than 5000 rows at a time to avoid lock escalation.
                  – Daniel
                  Dec 9 '16 at 14:01












                • If the count of records to delete is much much bigger then records that will remain in the table, i found that simple select into temp table of the records that will stay in and drop original table and rename of the temp table is much faster. Given that you don't use identity Id foreign key somewhere.
                  – Vladimir Bozic
                  Feb 13 '17 at 7:20














                • 7




                  Also rememeber that if you truncate a table, you cannot have any FKs assocaited with it.
                  – HLGEM
                  Feb 4 '15 at 15:26






                • 1




                  But how to be sure that you are deleting 80-90% of data? Let's assume i have only range of values that should be deleted. And i have a few tables. So i have to check every of them and calculate percentage, and if it around 30% i guess this method is not very effective... I'm trying to find optimal solution for unknown case.
                  – Archont
                  Nov 25 '16 at 12:41






                • 4




                  @Archont optimal solution for unknown case that’s the dream isn't it? Unfortunately you cannot cure every disease with any one pill; I have suggested some possible solutions for different scenarios. There is no sliver bullet here unfortunately.
                  – M.Ali
                  Nov 25 '16 at 15:54






                • 3




                  One thing to note if choosing option 4: Depending on the how the table is used, it may be a better option to delete less than 5000 rows at a time to avoid lock escalation.
                  – Daniel
                  Dec 9 '16 at 14:01












                • If the count of records to delete is much much bigger then records that will remain in the table, i found that simple select into temp table of the records that will stay in and drop original table and rename of the temp table is much faster. Given that you don't use identity Id foreign key somewhere.
                  – Vladimir Bozic
                  Feb 13 '17 at 7:20








                7




                7




                Also rememeber that if you truncate a table, you cannot have any FKs assocaited with it.
                – HLGEM
                Feb 4 '15 at 15:26




                Also rememeber that if you truncate a table, you cannot have any FKs assocaited with it.
                – HLGEM
                Feb 4 '15 at 15:26




                1




                1




                But how to be sure that you are deleting 80-90% of data? Let's assume i have only range of values that should be deleted. And i have a few tables. So i have to check every of them and calculate percentage, and if it around 30% i guess this method is not very effective... I'm trying to find optimal solution for unknown case.
                – Archont
                Nov 25 '16 at 12:41




                But how to be sure that you are deleting 80-90% of data? Let's assume i have only range of values that should be deleted. And i have a few tables. So i have to check every of them and calculate percentage, and if it around 30% i guess this method is not very effective... I'm trying to find optimal solution for unknown case.
                – Archont
                Nov 25 '16 at 12:41




                4




                4




                @Archont optimal solution for unknown case that’s the dream isn't it? Unfortunately you cannot cure every disease with any one pill; I have suggested some possible solutions for different scenarios. There is no sliver bullet here unfortunately.
                – M.Ali
                Nov 25 '16 at 15:54




                @Archont optimal solution for unknown case that’s the dream isn't it? Unfortunately you cannot cure every disease with any one pill; I have suggested some possible solutions for different scenarios. There is no sliver bullet here unfortunately.
                – M.Ali
                Nov 25 '16 at 15:54




                3




                3




                One thing to note if choosing option 4: Depending on the how the table is used, it may be a better option to delete less than 5000 rows at a time to avoid lock escalation.
                – Daniel
                Dec 9 '16 at 14:01






                One thing to note if choosing option 4: Depending on the how the table is used, it may be a better option to delete less than 5000 rows at a time to avoid lock escalation.
                – Daniel
                Dec 9 '16 at 14:01














                If the count of records to delete is much much bigger then records that will remain in the table, i found that simple select into temp table of the records that will stay in and drop original table and rename of the temp table is much faster. Given that you don't use identity Id foreign key somewhere.
                – Vladimir Bozic
                Feb 13 '17 at 7:20




                If the count of records to delete is much much bigger then records that will remain in the table, i found that simple select into temp table of the records that will stay in and drop original table and rename of the temp table is much faster. Given that you don't use identity Id foreign key somewhere.
                – Vladimir Bozic
                Feb 13 '17 at 7:20












                up vote
                59
                down vote













                @m-ali answer is right but also keep in mind that logs could grow a lot if you don't commit the transaction after each chunk and perform a checkpoint. This is how I would do it and take this article http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes as reference, with performance tests and graphs:



                DECLARE @Deleted_Rows INT;
                SET @Deleted_Rows = 1;


                WHILE (@Deleted_Rows > 0)
                BEGIN

                BEGIN TRANSACTION

                -- Delete some small number of rows at a time
                DELETE TOP (10000) LargeTable
                WHERE readTime < dateadd(MONTH,-7,GETDATE())

                SET @Deleted_Rows = @@ROWCOUNT;

                COMMIT TRANSACTION
                CHECKPOINT -- for simple recovery model
                END





                share|improve this answer



















                • 8




                  I'd agree with this from practice. Small typo in that you would want to read rowcount before committing transaction and checkpoint
                  – dove
                  Nov 10 '15 at 14:38















                up vote
                59
                down vote













                @m-ali answer is right but also keep in mind that logs could grow a lot if you don't commit the transaction after each chunk and perform a checkpoint. This is how I would do it and take this article http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes as reference, with performance tests and graphs:



                DECLARE @Deleted_Rows INT;
                SET @Deleted_Rows = 1;


                WHILE (@Deleted_Rows > 0)
                BEGIN

                BEGIN TRANSACTION

                -- Delete some small number of rows at a time
                DELETE TOP (10000) LargeTable
                WHERE readTime < dateadd(MONTH,-7,GETDATE())

                SET @Deleted_Rows = @@ROWCOUNT;

                COMMIT TRANSACTION
                CHECKPOINT -- for simple recovery model
                END





                share|improve this answer



















                • 8




                  I'd agree with this from practice. Small typo in that you would want to read rowcount before committing transaction and checkpoint
                  – dove
                  Nov 10 '15 at 14:38













                up vote
                59
                down vote










                up vote
                59
                down vote









                @m-ali answer is right but also keep in mind that logs could grow a lot if you don't commit the transaction after each chunk and perform a checkpoint. This is how I would do it and take this article http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes as reference, with performance tests and graphs:



                DECLARE @Deleted_Rows INT;
                SET @Deleted_Rows = 1;


                WHILE (@Deleted_Rows > 0)
                BEGIN

                BEGIN TRANSACTION

                -- Delete some small number of rows at a time
                DELETE TOP (10000) LargeTable
                WHERE readTime < dateadd(MONTH,-7,GETDATE())

                SET @Deleted_Rows = @@ROWCOUNT;

                COMMIT TRANSACTION
                CHECKPOINT -- for simple recovery model
                END





                share|improve this answer














                @m-ali answer is right but also keep in mind that logs could grow a lot if you don't commit the transaction after each chunk and perform a checkpoint. This is how I would do it and take this article http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes as reference, with performance tests and graphs:



                DECLARE @Deleted_Rows INT;
                SET @Deleted_Rows = 1;


                WHILE (@Deleted_Rows > 0)
                BEGIN

                BEGIN TRANSACTION

                -- Delete some small number of rows at a time
                DELETE TOP (10000) LargeTable
                WHERE readTime < dateadd(MONTH,-7,GETDATE())

                SET @Deleted_Rows = @@ROWCOUNT;

                COMMIT TRANSACTION
                CHECKPOINT -- for simple recovery model
                END






                share|improve this answer














                share|improve this answer



                share|improve this answer








                edited May 17 '17 at 14:30









                Bort

                598319




                598319










                answered Feb 4 '15 at 15:04









                Francisco Goldenstein

                8,66733549




                8,66733549








                • 8




                  I'd agree with this from practice. Small typo in that you would want to read rowcount before committing transaction and checkpoint
                  – dove
                  Nov 10 '15 at 14:38














                • 8




                  I'd agree with this from practice. Small typo in that you would want to read rowcount before committing transaction and checkpoint
                  – dove
                  Nov 10 '15 at 14:38








                8




                8




                I'd agree with this from practice. Small typo in that you would want to read rowcount before committing transaction and checkpoint
                – dove
                Nov 10 '15 at 14:38




                I'd agree with this from practice. Small typo in that you would want to read rowcount before committing transaction and checkpoint
                – dove
                Nov 10 '15 at 14:38










                up vote
                31
                down vote













                You can also use GO + how many times you want to execute the same query.



                DELETE TOP (10000)  [TARGETDATABASE].[SCHEMA].[TARGETTABLE] 
                WHERE readTime < dateadd(MONTH,-1,GETDATE());
                -- how many times you want the query to repeat
                GO 100





                share|improve this answer





















                • I like this, it is working for me I accidentally inserted the same row into a table 26 Million times and needed to delete all occurrences of it, which in one single delete statement ran out of memory on the server, so this is great one question, will it stop mid loop if it runs out of rows to delete?
                  – ScottC
                  Mar 20 '17 at 20:42






                • 1




                  @ScottC, it is not a loop, it just repeats the query (batch like) and if you run out of rows it can not delete anything. But it will not stop. you wil get something like (0 row(s) affected) if it runs out of rows you delete.
                  – Bunkerbuster
                  Mar 22 '17 at 8:02






                • 1




                  Simple and cool way of running query for N times...
                  – ManiMuthuPandi
                  Nov 7 '17 at 9:32






                • 1




                  From what MS SQL Server is this syntax GO xx supposed to work? I get a "Could not find stored procedure '' " error. Without the GO command it works fine though.
                  – Abel
                  Jun 25 at 22:53








                • 1




                  Hmm, it seems like I can execute it, and it runs indeed multiple times, but in MS SQL Mgt Studio it shows the red curly line with the mentioned error (but F5-run works then)
                  – Abel
                  Jun 25 at 23:15















                up vote
                31
                down vote













                You can also use GO + how many times you want to execute the same query.



                DELETE TOP (10000)  [TARGETDATABASE].[SCHEMA].[TARGETTABLE] 
                WHERE readTime < dateadd(MONTH,-1,GETDATE());
                -- how many times you want the query to repeat
                GO 100





                share|improve this answer





















                • I like this, it is working for me I accidentally inserted the same row into a table 26 Million times and needed to delete all occurrences of it, which in one single delete statement ran out of memory on the server, so this is great one question, will it stop mid loop if it runs out of rows to delete?
                  – ScottC
                  Mar 20 '17 at 20:42






                • 1




                  @ScottC, it is not a loop, it just repeats the query (batch like) and if you run out of rows it can not delete anything. But it will not stop. you wil get something like (0 row(s) affected) if it runs out of rows you delete.
                  – Bunkerbuster
                  Mar 22 '17 at 8:02






                • 1




                  Simple and cool way of running query for N times...
                  – ManiMuthuPandi
                  Nov 7 '17 at 9:32






                • 1




                  From what MS SQL Server is this syntax GO xx supposed to work? I get a "Could not find stored procedure '' " error. Without the GO command it works fine though.
                  – Abel
                  Jun 25 at 22:53








                • 1




                  Hmm, it seems like I can execute it, and it runs indeed multiple times, but in MS SQL Mgt Studio it shows the red curly line with the mentioned error (but F5-run works then)
                  – Abel
                  Jun 25 at 23:15













                up vote
                31
                down vote










                up vote
                31
                down vote









                You can also use GO + how many times you want to execute the same query.



                DELETE TOP (10000)  [TARGETDATABASE].[SCHEMA].[TARGETTABLE] 
                WHERE readTime < dateadd(MONTH,-1,GETDATE());
                -- how many times you want the query to repeat
                GO 100





                share|improve this answer












                You can also use GO + how many times you want to execute the same query.



                DELETE TOP (10000)  [TARGETDATABASE].[SCHEMA].[TARGETTABLE] 
                WHERE readTime < dateadd(MONTH,-1,GETDATE());
                -- how many times you want the query to repeat
                GO 100






                share|improve this answer












                share|improve this answer



                share|improve this answer










                answered Nov 25 '16 at 10:05









                Bunkerbuster

                467511




                467511












                • I like this, it is working for me I accidentally inserted the same row into a table 26 Million times and needed to delete all occurrences of it, which in one single delete statement ran out of memory on the server, so this is great one question, will it stop mid loop if it runs out of rows to delete?
                  – ScottC
                  Mar 20 '17 at 20:42






                • 1




                  @ScottC, it is not a loop, it just repeats the query (batch like) and if you run out of rows it can not delete anything. But it will not stop. you wil get something like (0 row(s) affected) if it runs out of rows you delete.
                  – Bunkerbuster
                  Mar 22 '17 at 8:02






                • 1




                  Simple and cool way of running query for N times...
                  – ManiMuthuPandi
                  Nov 7 '17 at 9:32






                • 1




                  From what MS SQL Server is this syntax GO xx supposed to work? I get a "Could not find stored procedure '' " error. Without the GO command it works fine though.
                  – Abel
                  Jun 25 at 22:53








                • 1




                  Hmm, it seems like I can execute it, and it runs indeed multiple times, but in MS SQL Mgt Studio it shows the red curly line with the mentioned error (but F5-run works then)
                  – Abel
                  Jun 25 at 23:15
























                up vote
                8
                down vote













This variation of M.Ali's answer is working fine for me. It deletes a batch, shrinks the log, and repeats; I can watch the log grow, drop, and start over.

    DECLARE @Deleted_Rows INT;
    SET @Deleted_Rows = 1;

    WHILE (@Deleted_Rows > 0)
    BEGIN
        -- Delete a small batch of rows at a time
        DELETE TOP (100000) FROM InstallLog
        WHERE DateTime BETWEEN '2014-12-01' AND '2015-02-01';
        SET @Deleted_Rows = @@ROWCOUNT;

        -- Reclaim log space after each batch (the log-file name is this poster's)
        DBCC SHRINKFILE (MobiControlDB_log, 0, TRUNCATEONLY);
    END
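A parameterized variant, along the lines of Shiva's comment below, might look like this (the batch size is illustrative; the table and log-file names are this poster's):

    DECLARE @BatchSize INT = 100000; -- rows per delete; tune to your log capacity
    DECLARE @Deleted_Rows INT = 1;

    WHILE (@Deleted_Rows > 0)
    BEGIN
        DELETE TOP (@BatchSize) FROM InstallLog
        WHERE DateTime BETWEEN '2014-12-01' AND '2015-02-01';
        SET @Deleted_Rows = @@ROWCOUNT;
        DBCC SHRINKFILE (MobiControlDB_log, 0, TRUNCATEONLY);
    END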


























• This was very useful! I modified it to parameterize the number of rows to delete at a time, and also the WHERE clause. Works like a charm!
  – Shiva
  Apr 25 at 19:04

















                answered May 14 '15 at 22:13









                Ken Koehler

























                up vote
                8
                down vote













@Francisco Goldenstein, just a minor correction: the COMMIT must come after you set the variable, otherwise the WHILE will execute just once.

    DECLARE @Deleted_Rows INT;
    SET @Deleted_Rows = 1;

    WHILE (@Deleted_Rows > 0)
    BEGIN
        BEGIN TRANSACTION;

        -- Delete a small batch of rows at a time
        DELETE TOP (10000) LargeTable
        WHERE readTime < dateadd(MONTH,-7,GETDATE());

        SET @Deleted_Rows = @@ROWCOUNT;

        COMMIT TRANSACTION;
        CHECKPOINT; -- under the SIMPLE recovery model, lets the log truncate
    END
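Note that CHECKPOINT only allows log truncation under the SIMPLE recovery model. If the database uses the FULL recovery model, the log is only truncated by log backups, so you would take a log backup between batches instead; a sketch, with an illustrative database name and path:

    -- In place of CHECKPOINT, inside the loop:
    BACKUP LOG [YourDatabase] TO DISK = N'D:\Backups\YourDatabase.trn';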





                    answered Nov 4 '16 at 13:14









                    Cassio Veras























                        up vote
                        7
                        down vote













                        If you are willing (and able) to implement partitioning, that is an effective technique for removing large quantities of data with little run-time overhead. Not cost-effective for a once-off exercise, though.
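A minimal sketch of the idea, assuming LargeTable is already partitioned by month on readTime and that an empty archive table with an identical structure exists on the same filegroup (the names and partition number are illustrative):

    -- Switching a partition out is a metadata-only operation,
    -- so it completes almost instantly regardless of row count.
    ALTER TABLE LargeTable SWITCH PARTITION 1 TO LargeTable_Archive;

    -- TRUNCATE is minimally logged, unlike a row-by-row DELETE.
    TRUNCATE TABLE LargeTable_Archive;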






                            answered Jun 14 '14 at 5:11









                            Michael Green























                                up vote
                                3
                                down vote













You can delete small batches in a WHILE loop, something like this:

    -- Prime @@ROWCOUNT with an initial batch
    DELETE TOP (10000) LargeTable
    WHERE readTime < dateadd(MONTH,-7,GETDATE());

    WHILE @@ROWCOUNT > 0
    BEGIN
        DELETE TOP (10000) LargeTable
        WHERE readTime < dateadd(MONTH,-7,GETDATE());
    END





                                    answered Dec 27 '16 at 16:11









                                    fnascimento























                                        up vote
                                        3
                                        down vote













I was able to delete 19 million rows from my table of 21 million rows in a matter of minutes. Here is my approach.

If you have an auto-incrementing primary key on this table, then you can make use of it (a sketch follows the steps):

1. Get the minimum primary-key value among the rows you want to keep, i.e. where readTime >= dateadd(MONTH,-7,GETDATE()), and store it in a variable 'min_primary'. (Add an index on readTime if one is not already present; it will be dropped along with the table in step 3 anyway.)

2. Insert all the rows having primary key >= min_primary into a staging table (a memory-optimized table, if the number of rows is not large).

3. Drop the large table.

4. Recreate the table and copy all the rows from the staging table back into the main table.

5. Drop the staging table.
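A rough sketch of the steps above, assuming an IDENTITY primary key named id and that readTime increases with id (all names are illustrative):

    DECLARE @min_primary BIGINT;
    SELECT @min_primary = MIN(id)
    FROM LargeTable
    WHERE readTime >= dateadd(MONTH,-7,GETDATE());

    -- Stage the rows to keep
    SELECT *
    INTO LargeTable_staging
    FROM LargeTable
    WHERE id >= @min_primary;

    DROP TABLE LargeTable;
    -- Recreate LargeTable with its original schema, then:
    -- INSERT INTO LargeTable SELECT * FROM LargeTable_staging;
    --   (with SET IDENTITY_INSERT LargeTable ON and an explicit column
    --    list, if the key is an IDENTITY column)
    -- DROP TABLE LargeTable_staging;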







                                            answered Aug 29 '17 at 9:46









                                            Arpan Jain























                                                up vote
                                                1
                                                down vote













Another approach:

    SET ROWCOUNT 1000; -- batch size

    DECLARE @DATE AS DATETIME = dateadd(MONTH,-7,GETDATE());

    DELETE LargeTable WHERE readTime < @DATE;
    WHILE @@ROWCOUNT > 0
    BEGIN
        DELETE LargeTable WHERE readTime < @DATE;
    END
    SET ROWCOUNT 0; -- restore the default

Optionally, if the database does not need point-in-time recovery, switch it to the SIMPLE recovery model. This does not disable logging (that is not possible), but it allows the log to be truncated automatically at each checkpoint:

    ALTER DATABASE dbname SET RECOVERY SIMPLE;
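Note that SET ROWCOUNT is deprecated for INSERT, UPDATE, and DELETE statements in current versions of SQL Server; the TOP clause is the supported way to limit the rows a delete affects:

    DELETE TOP (1000) FROM LargeTable WHERE readTime < @DATE;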





                                                    answered Jul 20 '17 at 13:21









                                                    Ali Osman Yavuz























                                                        up vote
                                                        0
                                                        down vote













Shorter syntax:

    SELECT 1; -- primes @@ROWCOUNT so the loop body runs at least once
    WHILE (@@ROWCOUNT > 0)
    BEGIN
        DELETE TOP (10000) LargeTable
        WHERE readTime < dateadd(MONTH,-7,GETDATE());
    END





                                                            answered Mar 15 at 11:57









                                                            paparazzo























                                                                up vote
                                                                0
                                                                down vote













If you are using SQL Server 2016 or higher, and your table is partitioned on the column you are filtering the delete by (for example, a timestamp column), then you can use this newer command to delete data by partition.



TRUNCATE TABLE table_name WITH ( PARTITIONS ( { partition_number | partition_number TO partition_number } [ , ...n ] ) )



This deletes the data in the selected partition(s) only, and it should be the most efficient way to delete part of a table: like a regular TRUNCATE it is minimally logged and just as fast, but without removing all of the table's data.
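For example, assuming the table's partition function places everything older than the cutoff in partitions 1 through 3 (the table name and partition numbers are illustrative):

    TRUNCATE TABLE dbo.LargeTable WITH (PARTITIONS (1 TO 3));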



The drawback is that if your table is not set up with partitions, you need to go old school: delete the data with the regular approach and then recreate the table with partitions so you can do this in the future, which is what I did. I added the partition creation and deletion to the insertion procedure itself. I had a table with 500 million rows, so this was the only option to reduce the deletion time.



For more details, refer to the links below:
                                                                https://docs.microsoft.com/en-us/sql/t-sql/statements/truncate-table-transact-sql?view=sql-server-2017



                                                                SQL server 2016 Truncate table with partitions



Below is what I did first to delete the data, before I could recreate the table with partitions holding the required data. This query runs for days, within the specified time window, until the data is deleted.



    :connect <<ServerName>>
    USE <<DatabaseName>>

    SET NOCOUNT ON;
    DECLARE @Deleted_Rows INT;
    DECLARE @loopnum INT;
    DECLARE @msg varchar(100);
    DECLARE @FlagDate datetime;
    SET @FlagDate = getdate() - 31; -- purge everything older than 31 days
    SET @Deleted_Rows = 1;
    SET @loopnum = 1;

    /* Optionally wait for a start time before purging:
    WHILE (getdate() < convert(datetime,'2018-11-08 14:00:00.000',120))
    BEGIN
        RAISERROR('WAIT for START',0,1) WITH NOWAIT
        WAITFOR DELAY '00:10:00'
    END */
    RAISERROR('STARTING PURGE',0,1) WITH NOWAIT

    WHILE (1=1)
    BEGIN
        -- Only purge while rows remain and the clock is between 12:00 and 20:59
        WHILE (@Deleted_Rows > 0 AND (datepart(hh, getdate()) >= 12 AND datepart(hh, getdate()) <= 20))
        BEGIN
            -- Delete a batch of rows at a time
            DELETE TOP (500000) dbo.<<table_name>>
            WHERE timestamp_column < convert(datetime, @FlagDate, 102)

            SET @Deleted_Rows = @@ROWCOUNT;
            WAITFOR DELAY '00:00:01'
            SELECT @msg = 'ROWCOUNT ' + convert(varchar, @Deleted_Rows);
            SET @loopnum = @loopnum + 1

            -- Every 1000 batches, reclaim log space and report progress
            IF @loopnum > 1000
            BEGIN
                BEGIN TRY
                    DBCC SHRINKFILE (N'<<databasename>>_log', 0, TRUNCATEONLY)
                    RAISERROR(@msg, 0, 1) WITH NOWAIT
                END TRY
                BEGIN CATCH
                    RAISERROR('DBCC SHRINK', 0, 1) WITH NOWAIT
                END CATCH
                SET @loopnum = 1
            END
        END
        WAITFOR DELAY '00:10:00'
    END
    SELECT getdate()





                                                                    answered Nov 11 at 4:47









                                                                    digitally_inspired































                                                                         
