How to create a serving input function to provide a prediction for images on google cloud platform?



























From this Google Cloud doc and this one,
and the Stack Overflow answer in this post by
rhaertel80, I think the recommended format of a JSON request to send images to a model for prediction on Google Cloud is:



{'instances': [{'image_bytes': {'b64': base64.b64encode(jpeg_data)}}, {'image_bytes': ...}]}
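For reference, the request body above can be assembled with the standard base64 and json modules; this is a minimal sketch, with a made-up byte string standing in for real JPEG data:

```python
import base64
import json

# Stand-in payload; in practice this would be the bytes of a real .jpg file.
jpeg_data = b'\xff\xd8\xff\xe0fake-jpeg-bytes'

# b64encode returns bytes, but JSON needs a str, hence .decode('utf-8').
request = {
    'instances': [
        {'image_bytes': {'b64': base64.b64encode(jpeg_data).decode('utf-8')}},
    ]
}
body = json.dumps(request)

# The service decodes the b64 value back into the original raw bytes.
decoded = base64.b64decode(json.loads(body)['instances'][0]['image_bytes']['b64'])
```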



The next step is to create the serving_input_fn() (described in the Google Cloud docs and this GCP tutorial), which must cope with the nested dictionaries that the request will contain.



To do this I need to create features and receiver_tensors to pass into the ServingInputReceiver that the serving_input_fn() needs to return.



However, I do not see how the requirement that receiver_tensors be a dictionary with string keys and Tensor values can fit the nested format of the JSON request. (As I understand it, the receiver_tensors are placeholders for the request.)



If the request did not contain nested dictionaries, the approach would be fairly simple, as shown in the tutorials and this answer.



Question



So, how can the serving_input_fn() be written to receive the image request in the described form and create features and receiver_tensors that satisfy the requirements of the ServingInputReceiver?



Part of the difficulty may be that I do not understand what the serving_input_fn() will need to process. Will it be the entire request in one go? Will each instance be passed one at a time? Or is there some other way to understand what the function will be processing?



More details



For more context, I am using tf.estimator.Estimator and the train_and_evaluate function to train a model and deploy it to Google Cloud. The input to the model is a tf.data.Dataset containing tuples of ({'spectrogram': image}, label), where image is a tensor.
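An input pipeline of that shape can be sketched with tf.data; the shapes and dummy values below are purely illustrative, not taken from the actual model:

```python
import numpy as np
import tensorflow as tf

# Dummy spectrograms and binary labels; real shapes depend on the audio pipeline.
images = np.random.rand(8, 128, 64, 1).astype(np.float32)
labels = np.random.randint(0, 2, size=8).astype(np.int64)

# Each element is a ({'spectrogram': image}, label) tuple, as the model expects.
dataset = tf.data.Dataset.from_tensor_slices(({'spectrogram': images}, labels)).batch(4)
```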



My attempt to create the serving_input_fn() assumes one element of the instances list is passed at a time:



def serving_input_fn():
    feature_placeholders = {'image_bytes': {'b64': tf.placeholder(dtype=tf.string,
                                                                  shape=[None],
                                                                  name='source')}}
    input_images = convert_bytestrings_to_images(feature_placeholders['image_bytes']['b64'])
    features = {'spectrogram': image for image in input_images}
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)


Which leads to the error



ValueError: receiver_tensor image_bytes must be a Tensor.


And I am not sure whether features would be in the correct form either.



































  • I asked a similar question here, which received an answer that may help people

    – NickDGreg
    Feb 28 '18 at 10:22


















python tensorflow machine-learning google-cloud-platform google-cloud-ml














edited Feb 26 '18 at 15:56









Guillem Xercavins











asked Feb 14 '18 at 14:26









NickDGreg


























1 Answer






































The b64 is handled transparently by ML Engine. So, while the input JSON needs to contain the b64 key, your serving input function will receive the raw bytes. Your serving_input_fn therefore needs to be:



def serving_input_fn():
    feature_placeholders = {'image_bytes':
                                tf.placeholder(tf.string, shape=())}
    input_images = convert_bytestrings_to_images(feature_placeholders['image_bytes'])
    features = {'spectrogram': image for image in input_images}
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
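Note that convert_bytestrings_to_images is not defined anywhere in the question; one plausible sketch, assuming the spectrograms arrive as grayscale JPEG bytes, decodes each byte string with tf.image.decode_jpeg:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def convert_bytestrings_to_images(bytes_tensor):
    """Decode a batch of raw JPEG byte strings into float32 image tensors."""
    def _decode(jpeg_bytes):
        image = tf.image.decode_jpeg(jpeg_bytes, channels=1)  # assumed grayscale
        return tf.image.convert_image_dtype(image, tf.float32)
    return tf.map_fn(_decode, bytes_tensor, dtype=tf.float32)

# Placeholder for a batch of byte strings, as in the serving input function.
image_bytes = tf.placeholder(tf.string, shape=[None])
images = convert_bytestrings_to_images(image_bytes)
```

This sketch takes a batched `shape=[None]` placeholder rather than the scalar `shape=()` used above; either can work, but the batched form matches how multiple instances arrive in one request.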
















































































        answered Nov 27 '18 at 0:59









Lak
































