Spark Structured Streaming from Files on S3/Disk - add batch filename to records/lines?





I am implementing a Spark Structured Streaming application that processes webserver log files from a folder on disk or perhaps S3.
Spark Structured Streaming fits the use case almost perfectly, with one wrinkle.
The filenames in the folder also contain the machine name, e.g.:



/node1_20181101.json.gz
/node1_20181102.json.gz
/node2_20181101.json.gz
/node3_20181102.json.gz
/node4_20181102.json.gz

...and so on.



A (simplified) version of the source looks something like this (I would turn the below into a continuous stream with windowing, etc.):



    import org.apache.hadoop.io.compress.GzipCodec

    val inputDF = spark.read
      .option("codec", classOf[GzipCodec].getName)
      .option("maxFilesPerTrigger", 1.toString)
      .json(config.directory)
      .transform { ds =>
        // inputFiles is an Array[String]; join it so the logger gets a single string
        logger.info(ds.inputFiles.mkString(", "))
        ds
      }

    inputDF.foreach(println(_))
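

For context, the continuous-stream version I have in mind would look roughly like this. This is only a sketch: logSchema is a placeholder for the actual webserver-log schema (the file source needs an explicit schema unless schema inference is enabled), and the console sink is purely for illustration:

    // Sketch of the streaming version of the same pipeline.
    // logSchema is a hypothetical, pre-defined StructType for the log records.
    val streamDF = spark.readStream
      .schema(logSchema)                       // file sources require a schema up front
      .option("maxFilesPerTrigger", "1")       // pick up one file per micro-batch
      .json(config.directory)

    val query = streamDF.writeStream
      .format("console")                       // console sink, just to see the output
      .outputMode("append")
      .start()

    query.awaitTermination()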


I would like to transform the batch and add the node ID from the filename to each record; I can't seem to see any kind of onBatch trigger that I could use to enrich the record schema with the node ID from the file name.



I have looked at the following and nothing seems to fit:
[FileStreamSource](https://jaceklaskowski.gitbooks.io/spark-structured-streaming/spark-sql-streaming-FileStreamSource.html#metadataLog)



Unfortunately, getting a handle on the machine name from the file name is key to the analytics I do later, and I have no control over how the logs are populated.
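

The closest thing I have found so far is the input_file_name() SQL function, which as far as I understand exposes the source file path per row, though I am not sure it behaves the same way inside a file-based streaming query. A rough sketch of what I mean; the node-extraction regex is only an assumption based on the filenames above, and inputDF refers to the DataFrame from the snippet earlier:

    import org.apache.spark.sql.functions.{col, input_file_name, regexp_extract}

    // Attach the source path to every record, then pull the node ID out of
    // the basename, e.g. ".../node1_20181101.json.gz" -> "node1".
    val withNode = inputDF
      .withColumn("source_file", input_file_name())
      .withColumn("node",
        regexp_extract(col("source_file"), """([^/_]+)_\d{8}\.json\.gz$""", 1))

Is that the right direction, or is there a cleaner hook for enriching records with per-file metadata?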



Any clues?










Tags: scala, apache-spark, spark-structured-streaming






asked Nov 26 '18 at 15:38 by ARoche, edited Dec 3 '18 at 20:23 by Jacek Laskowski













  • Do you want to get access to the file name in transform method? – Jacek Laskowski, Dec 3 '18 at 20:25



















0 Answers











