Amazon S3 Uploading via Java API: InputStream Sources












I'm testing different ways to upload small objects to S3 using "aws-java-sdk-s3".
Since these are small objects, I use the default API (the Transfer API is for large and huge objects).





  1. Uploading a File as the source works perfectly:



    File file = ....
    s3Client.putObject(new PutObjectRequest(bucket, key, file));



  2. Uploading a ByteArrayInputStream works perfectly:



    InputStream stream = new ByteArrayInputStream("How are you?".getBytes());
    s3Client.putObject(new PutObjectRequest(bucket, key, stream));



  3. Uploading a classpath resource as a stream causes problems:



    InputStream stream = this.getClass().getResourceAsStream("myFile.data");
    s3Client.putObject(new PutObjectRequest(bucket, key, stream));




The Exception:



com.amazonaws.ResetException: The request to the service failed with a retryable reason, but resetting the request input stream has failed.
See exception.getExtraInfo or debug-level logging for the original failure that caused this retry.;
If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)

Caused by: java.io.IOException: Resetting to invalid mark
at java.io.BufferedInputStream.reset(BufferedInputStream.java:448)
at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:112)
at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:112)
at com.amazonaws.util.LengthCheckInputStream.reset(LengthCheckInputStream.java:126)
at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:112)


I can convert the classpath resource to a File object using some Apache file utilities, but that feels like an ugly workaround.




  1. Do I have to configure the read limit depending on the type of stream? (A sketch of how I understand the setting would be applied is below.)

  2. What value is recommended?
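
For reference, a minimal sketch of how the setting mentioned in the exception message would be applied, assuming bucket, key and s3Client are defined as in the snippets above; the 5 MB value is only a placeholder:

    InputStream stream = this.getClass().getResourceAsStream("myFile.data");

    // no content length is known for a classpath resource, so pass empty metadata
    ObjectMetadata meta = new ObjectMetadata();
    PutObjectRequest request = new PutObjectRequest(bucket, key, stream, meta);

    // placeholder read limit: it reportedly has to be larger than the object
    // being uploaded, otherwise the SDK cannot reset the stream on a retry
    request.getRequestClientOptions().setReadLimit(5 * 1024 * 1024 + 1);

    s3Client.putObject(request);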


API version: "aws-java-sdk-s3" rev="1.11.442"










      java amazon-web-services amazon-s3






asked Nov 23 '18 at 12:34 by Azimuts, edited Nov 23 '18 at 14:29 by Koray Tugay
1 Answer
































I have implemented a use case which is pretty similar to yours (though not identical). I have to write some data to a JSON file (in zipped format) and store it in S3. The data is available in a HashMap, so the contents of the HashMap are copied into the JSON file. Please feel free to ignore this if it does not help. Also, I have never set any kind of read limit anywhere.



    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import com.amazonaws.services.s3.model.PutObjectRequest;
    import com.google.gson.Gson;
    import com.google.gson.GsonBuilder;

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.Map;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    // logger is a logging field of the enclosing class
    public void serializeResults(AmazonS3Client s3, Map<String, Object> dm, String environment)
            throws IOException {
        logger.info("start writeZipToS3");
        Gson gson = new GsonBuilder().create();
        try {
            // serialize the map to JSON and zip it entirely in memory
            ByteArrayOutputStream byteOut = new ByteArrayOutputStream();
            ZipOutputStream zout = new ZipOutputStream(byteOut);

            ZipEntry ze = new ZipEntry(String.format("results-%s.json", environment));
            zout.putNextEntry(ze);
            String json = gson.toJson(dm);
            zout.write(json.getBytes());
            zout.closeEntry();
            zout.close();

            // upload the in-memory bytes with an explicit content length;
            // a ByteArrayInputStream supports mark/reset, so retries are safe
            byte[] bites = byteOut.toByteArray();
            ObjectMetadata om = new ObjectMetadata();
            om.setContentLength(bites.length);
            PutObjectRequest por = new PutObjectRequest("home",
                    String.format("zc-service/results-%s.zip", environment),
                    new ByteArrayInputStream(bites), om);
            s3.putObject(por);
        } catch (IOException e) {
            e.printStackTrace();
        }
        logger.info("stop writeZipToS3");
    }
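
A hypothetical call site could look like this (the deprecated no-arg client constructor and the map contents are placeholders, used only to keep the sketch short):

    AmazonS3Client s3 = new AmazonS3Client();        // placeholder client setup
    Map<String, Object> results = new HashMap<>();   // java.util.HashMap
    results.put("status", "ok");
    serializeResults(s3, results, "dev");            // uploads home/zc-service/results-dev.zip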


I hope that helps you.

Regards

– Shinchan, answered Nov 23 '18 at 14:21
























• Thank you @Shinchan. When the InputStream is a ByteArrayInputStream it works perfectly. The problem is with certain subtypes of InputStream, e.g. a BufferedInputStream wrapping a resource stream.

  – Azimuts, Nov 24 '18 at 8:22











• There are many tricks that work, and it seems the problems come from using mark-supported InputStreams.

  – Azimuts, Nov 24 '18 at 8:22











• I also tried to wrap a mark-supported InputStream in a non-mark-supported one, but that way I get another error (com.amazonaws.SdkClientException: More data read than expected: dataLength=8192; expectedLength=0; includeSkipped=false; in.getClass()=class com.amazonaws.internal.ReleasableInputStream; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0). The point is that if you expose a general-purpose interface it is not easy to control what kind of InputStream will be used.

  – Azimuts, Nov 24 '18 at 8:22
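
Building on the same buffer-in-memory idea from the answer, a minimal sketch for the original classpath-resource case; the method name, bucket, key and resource name are placeholders, and this only makes sense for small objects since the whole resource is read into memory:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import com.amazonaws.services.s3.model.PutObjectRequest;

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public void uploadClasspathResource(AmazonS3 s3Client, String bucket, String key) throws IOException {
        // read the whole resource into memory (small objects only)
        byte[] bytes;
        try (InputStream in = this.getClass().getResourceAsStream("myFile.data");
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            bytes = out.toByteArray();
        }

        // a ByteArrayInputStream supports mark/reset and the content length is known,
        // so the SDK can retry the request without a ResetException
        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentLength(bytes.length);
        s3Client.putObject(new PutObjectRequest(bucket, key,
                new ByteArrayInputStream(bytes), meta));
    }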










