In Keras, how do I get the epoch and validation loss from ModelCheckpoint?























From the Keras documentation (https://keras.io/callbacks/#modelcheckpoint), you can save the best model according to the validation loss by setting save_best_only=True.



I know you can record the corresponding epoch and validation loss by formatting them into the checkpoint's file name. However, that way a lot of models could end up saved, and I expect this to eventually result in a memory error on my GPU.
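
For reference, this is the kind of filename formatting I mean (a minimal sketch; the file path, model, and data variable names are just placeholders):

import keras

# each new best epoch is written to a new file, because the formatted name changes
checkpoint = keras.callbacks.ModelCheckpoint(
    'weights.{epoch:02d}-{val_loss:.2f}.hdf5',
    monitor='val_loss',
    save_best_only=True)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=50,
          callbacks=[checkpoint])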



Is there a way to get the epoch and validation loss corresponding to the final best model without having to encode them in the filename?










      keras






asked Nov 19 at 18:12 by user5490
























          1 Answer






























It depends on what you want to do with the epoch and validation loss and at what point during training, but you can implement your own callback functionality quite easily. What you need in order to get the training metrics is the logs dict, which is passed to the callback in each of the callback events (see the Keras callbacks documentation).



If, for example, you need to call a certain function f with the epoch and validation loss at the end of every epoch, you could implement this using a LambdaCallback:



          keras.callbacks.LambdaCallback(on_epoch_end=lambda epoch, logs: f(epoch, logs['val_loss']))
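
A minimal sketch of wiring that up (the list name, model, and data variables here are placeholders, not anything Keras-specific):

from keras.callbacks import LambdaCallback

val_history = []  # collects (epoch, val_loss) pairs; plays the role of f
record_cb = LambdaCallback(
    on_epoch_end=lambda epoch, logs: val_history.append((epoch, logs['val_loss'])))

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=50,
          callbacks=[record_cb])

# after training, the best epoch and its validation loss
best_epoch, best_val_loss = min(val_history, key=lambda t: t[1])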


If instead you want to use the ModelCheckpoint callback but don't want it to write anything to file, you can create a custom callback that reuses the ModelCheckpoint logic but changes the saving behavior (see the ModelCheckpoint source code).
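
As one possibility, here is a minimal sketch of such a callback that only remembers the best epoch and validation loss in memory and never writes a file (the class and attribute names are my own, not part of Keras):

import numpy as np
from keras.callbacks import Callback

class BestValLossTracker(Callback):
    """Tracks the epoch with the lowest val_loss without saving the model."""

    def on_train_begin(self, logs=None):
        self.best_epoch = None
        self.best_val_loss = np.inf

    def on_epoch_end(self, epoch, logs=None):
        val_loss = (logs or {}).get('val_loss')
        if val_loss is not None and val_loss < self.best_val_loss:
            self.best_val_loss = val_loss
            self.best_epoch = epoch

Pass it alongside your ModelCheckpoint in callbacks=[...]; after model.fit() returns, the tracker's best_epoch and best_val_loss attributes hold the values for the best model.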



(I don't know if that answers your question; I'm not entirely sure what the exact requirements are.)






answered Nov 20 at 0:04 by lmartens





























