Azure SQL Database bcp freezes and rolls back

I am bcp'ing a 130 GB text file into an Azure SQL Database. The bcp stalls within the last 800 records. I left it for 8 hours and the database went from 150 GB down to 30 GB (the load rolled back), the table was empty, and DTU usage was stuck at 0. By mistake I had left a clustered index on the table, so I figured that might have something to do with the rollback. I dropped the index and re-ran the load; it uploaded much faster, but it appears to have stalled again.

The database is currently at 157 GB and was at 30 GB before starting the bcp.



What could be causing it to stall and roll back? Are there any logs or something I can check?

Tags: azure, azure-sql-database

asked Nov 25 '18 at 3:25, edited Nov 25 '18 at 3:32 – MIKE


  • Consider specifying the -b parameter along with a batch size (e.g. 100000). – Dan Guzman, Nov 25 '18 at 4:40
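
    For reference, a minimal sketch of the suggested command; the server, database, table, and file names below are placeholders, not taken from the question:

        rem Load bigfile.txt into dbo.MyBigTable, committing every 100,000 rows (-b).
        rem -c = character-format data; all connection values are hypothetical.
        bcp dbo.MyBigTable in bigfile.txt ^
          -S myserver.database.windows.net -d MyDatabase ^
          -U myuser -P mypassword -c -b 100000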

  • @DanGuzman The error that came up was "The transaction log is full due to 'ACTIVE_TRANSACTION'". Good thing I'm paying over 9g a month for this database... Do I need to wait for it to roll back the transaction, or can I kill it and restart? Will increasing the batch size work, or do I need to upload smaller files? Can bcp insert in select chunks for me? – MIKE, Nov 25 '18 at 4:56
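
    On the "any logs I can check" part of the question: Azure SQL Database exposes DMVs you can poll while the load or the rollback is running. A sketch using sqlcmd, with placeholder connection details; sys.dm_db_log_space_usage reports how full the log is, and sys.dm_exec_requests reports percent_complete for a session that is rolling back:

        rem How full is the transaction log? (connection values are placeholders)
        sqlcmd -S myserver.database.windows.net -d MyDatabase -U myuser -P mypassword ^
          -Q "SELECT used_log_space_in_percent FROM sys.dm_db_log_space_usage"

        rem Watch rollback progress for a killed or rolling-back session.
        sqlcmd -S myserver.database.windows.net -d MyDatabase -U myuser -P mypassword ^
          -Q "SELECT session_id, status, command, percent_complete FROM sys.dm_exec_requests WHERE command LIKE 'KILLED%' OR command LIKE '%ROLLBACK%'"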

  • The entire file is bulk inserted as a single transaction by default. A smaller batch size, as I suggested, will avoid that without selecting in chunks. You'll need to let the transaction roll back. – Dan Guzman, Nov 25 '18 at 11:37

  • @DanGuzman Thanks for your help. I used a batch size of 100k like you said, but because I was in a time crunch and couldn't take any risks, I also used the -F and -L options to insert in chunks. It did work. I trust the batch-size approach based on your suggestion and, by the looks of it, your experience, but I haven't truly tested it and probably won't until I run into this issue again. I'm interested in understanding why a 1K batch size would fill the transaction log but a 100K one wouldn't (with this data set). – MIKE, Nov 28 '18 at 18:01
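
    For completeness, a sketch of the -F/-L chunking approach mentioned in this comment; the names and row boundaries are hypothetical. -F is the first data-file row to import and -L the last, so each invocation loads one slice of the file as its own run:

        rem First slice: rows 1 through 5,000,000 of the data file.
        bcp dbo.MyBigTable in bigfile.txt -S myserver.database.windows.net ^
          -d MyDatabase -U myuser -P mypassword -c -F 1 -L 5000000

        rem Second slice: rows 5,000,001 through 10,000,000, and so on.
        bcp dbo.MyBigTable in bigfile.txt -S myserver.database.windows.net ^
          -d MyDatabase -U myuser -P mypassword -c -F 5000001 -L 10000000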

  • @MIKE, are you saying you specified -b 1000 on your command? Without it, the "1000 rows sent" messages are just progress messages and the actual batch size is the entire file. – Dan Guzman, Nov 29 '18 at 3:06
1 Answer

When you execute bcp without a batch size specified via the -b argument, the entire file is processed in a single transaction. The "rows sent" messages only indicate progress in that case. Bulk-copying a large file in a single transaction requires significant transaction log space, and it often hurts performance too, due to the large number of dirty pages.



With the -b argument, the specified batch size controls the size of each transaction. Each batch is committed individually, and the progress messages will reflect the batch size too.
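
A practical side effect of per-batch commits, as a hedged note going beyond the answer itself: if a load with -b fails partway through, the batches that already committed stay in the table, so you can count them and resume from the next file row with -F rather than reloading from scratch. The names and row counts below are placeholders, and this assumes the table was empty before the load so the file row order maps directly to the table's row count:

    rem How many rows made it in before the failure?
    sqlcmd -S myserver.database.windows.net -d MyDatabase -U myuser -P mypassword ^
      -Q "SELECT COUNT(*) FROM dbo.MyBigTable"

    rem If 12,300,000 rows committed, resume the load at file row 12,300,001.
    bcp dbo.MyBigTable in bigfile.txt -S myserver.database.windows.net ^
      -d MyDatabase -U myuser -P mypassword -c -b 100000 -F 12300001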






answered Nov 29 '18 at 3:32 – Dan Guzman