Apache Spark does not create a new Session
I'm trying to implement a simple Apache Spark RDD pipeline, but it seems I'm not able to access the session I created.



I started by running ./start-all.sh in /usr/local/spark/sbin.



Then I created a new session:



from pyspark.sql import SparkSession
import shutil

spark = (SparkSession.builder
         .appName("Oncofinder -- Preprocessing")
         .getOrCreate())

# Zip the application package and ship it to the workers.
dirname = "oncofinder"
zipname = dirname + ".zip"
shutil.make_archive(dirname, 'zip', dirname + "/..", dirname)
spark.sparkContext.addPyFile(zipname)


This also ships a fresh copy of my app package to the Spark workers.
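Regarding the title: getOrCreate() does not always create a new session. If a session is already active in the process, it returns that one instead. A minimal pure-Python sketch of that pattern (for illustration only, not Spark's actual implementation):

```python
# A stand-in illustrating the getOrCreate pattern; this is NOT Spark's
# implementation, just the singleton behavior it follows.
class Builder:
    _active_session = None  # shared across builders, like the active SparkSession

    def get_or_create(self):
        # Reuse the active session if there is one; build a new one only once.
        if Builder._active_session is None:
            Builder._active_session = object()
        return Builder._active_session

s1 = Builder().get_or_create()
s2 = Builder().get_or_create()
print(s1 is s2)  # True -- the second builder reused the first session
```

A practical consequence in Spark: if an earlier session is still alive in the same interpreter, builder settings such as appName may not take effect on the reused session.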



I'm using the Python library pyspark.



Then I pass my Spark session to a function called preprocess:



train_rdd = preprocess(spark, [1, 2], tile_size=tile_size, sample_size=sample_size,
                       grayscale=grayscale, num_partitions=num_partitions, folder=folder)


and my function:



def preprocess(spark, slide_nums, folder="data", training=True, tile_size=1024, overlap=0,
               tissue_threshold=0.9, sample_size=256, grayscale=False, normalize_stains=True,
               num_partitions=20000):

    print("===PREPROCESSING===")

    slides = (spark.sparkContext
              .parallelize(slide_nums)
              .filter(lambda slide: open_slide(slide, folder, training) is not None))


and when I run this piece of code, I get:



2018-11-27 00:36:30 WARN  Utils:66 - Your hostname, luiscosta-GT62VR-6RD resolves to a loopback address: 127.0.1.1; using 192.168.1.67 instead (on interface wlp2s0)
2018-11-27 00:36:30 WARN Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/luiscosta/PycharmProjects/wsi_preprocessing/oncofinder/lib/python3.6/site-packages/pyspark/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2018-11-27 00:36:30 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
===PREPROCESSING===


It reaches my ===PREPROCESSING=== checkpoint but it does not run my open_slide function.



I'm fairly new to Apache Spark, so I apologize if this is a silly question, but the docs made this look really straightforward.



Kind regards
    That's normal behavior. I would strongly recommend reading how Spark works, in particular the difference between transformations and actions. filter is the former, hence it is lazy and won't be scheduled unless a subsequent action requires its output.

    – user6910411
    Nov 27 '18 at 11:42
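    This laziness is easy to reproduce without Spark at all: a Python generator behaves the same way. A minimal pure-Python analogy (open_slide is stubbed out here; in real Spark code an action such as slides.count() or slides.collect() is what forces the filter to run):

```python
log = []

def open_slide(slide):
    # Stub standing in for the real open_slide: records each call.
    log.append(slide)
    return slide if slide % 2 == 0 else None

# "Transformation": building the pipeline runs nothing yet,
# just like sc.parallelize(...).filter(...).
slides = (s for s in [1, 2, 3, 4] if open_slide(s) is not None)
print(len(log))    # 0 -- open_slide has not been called

# "Action": consuming the pipeline forces the work,
# like calling .count() or .collect() on an RDD.
result = list(slides)
print(result)      # [2, 4]
print(len(log))    # 4 -- open_slide ran once per element
```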
  • Possible duplicate of How can I force Spark to execute code?

    – user6910411
    Nov 27 '18 at 11:44
apache-spark pyspark
asked Nov 27 '18 at 0:56
Luís Costa