pyspark - find max and min in json streamed data using createDataFrame
I have a set of JSON messages streamed from Kafka, each describing a website user. Using PySpark, I need to count the number of users per country in each streaming window and return the countries with the maximum and minimum number of users.
Here is an example of a streamed JSON message:
{"id":1,"first_name":"Barthel","last_name":"Kittel","email":"bkittel0@printfriendly.com","gender":"Male","ip_address":"130.187.82.195","date":"06/05/2018","country":"France"}
Here is my code:
import json

from pyspark.sql.types import StructField, StructType, StringType
from pyspark.sql import Row
from pyspark import SparkContext
from pyspark.sql import SQLContext

fields = ['id', 'first_name', 'last_name', 'email', 'gender', 'ip_address', 'date', 'country']
schema = StructType([
    StructField(field, StringType(), True) for field in fields
])

def parse(s, fields):
    try:
        d = json.loads(s[0])
        return [tuple(d.get(field) for field in fields)]
    except:
        return

array_of_users = parsed.SQLContext.createDataFrame(parsed.flatMap(lambda s: parse(s, fields)), schema)

rdd = sc.parallelize(array_of_users)

# group by country, then replace the list of messages for each country with its length, giving an RDD of (country, length) tuples
country_count = rdd.groupBy(lambda user: user['country']).mapValues(len)

# identify the min and max, using the second element of the (country, length) tuple as the comparison key
country_min = country_count.min(key=lambda grp: grp[1])
country_max = country_count.max(key=lambda grp: grp[1])
When I run it, I get the message
AttributeError Traceback (most recent call last)
<ipython-input-24-6e6b83935bc3> in <module>()
16 return
17
---> 18 array_of_users = parsed.SQLContext.createDataFrame(parsed.flatMap(lambda s: parse(s, fields)), schema)
19
20 rdd = sc.parallelize(array_of_users)
AttributeError: 'TransformedDStream' object has no attribute 'SQLContext'
How can I fix this?
python apache-spark pyspark apache-kafka
edited Nov 20 at 16:28
asked Nov 20 at 12:59
albus_c
How are you getting a window of data?
– cricket_007
Nov 20 at 15:30
ssc = StreamingContext(sc, 60)
(using PySpark)
– albus_c
Nov 20 at 15:40
I'm not seeing that line, or where you defined parsed in your code...
– cricket_007
Nov 20 at 19:44
Note: Kafka streaming 0.8 library is deprecated as of Spark 2.3.0, and it seems you have maybe followed this blog, which is using these same variable names rittmanmead.com/blog/2017/01/…
– cricket_007
Nov 20 at 20:14
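For context, the streaming set-up these comments refer to would typically look something like the sketch below. It uses the old Kafka 0.8 receiver API (deprecated as of Spark 2.3.0, as noted above), and the topic name, consumer group, and ZooKeeper address are placeholders, not values from the question.
import json
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils  # Kafka 0.8 receiver API, deprecated in Spark 2.3.0

sc = SparkContext(appName="users-by-country")
ssc = StreamingContext(sc, 60)  # 60-second batch interval, as in the comment above

# Hypothetical topic / consumer group / ZooKeeper address -- adjust to your own cluster
kafkaStream = KafkaUtils.createStream(ssc, "localhost:2181", "users-consumer-group", {"users": 1})

# Each Kafka record arrives as a (key, value) pair; the JSON payload is the value
parsed = kafkaStream.map(lambda v: json.loads(v[1]))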
1 Answer
If I understood correctly, you need to group the list of messages by country, count the number of messages in each group, and then select the groups with the min and max counts.
Off the top of my head, the code would be something like:
# assuming array_of_users is your array of messages
rdd = sc.parallelize(array_of_users)

# group by country, then replace the list of messages for each country with its length, giving an RDD of (country, length) tuples
country_count = rdd.groupBy(lambda user: user['country']).mapValues(len)

# identify the min and max, using the second element of the (country, length) tuple as the comparison key
country_min = country_count.min(key=lambda grp: grp[1])
country_max = country_count.max(key=lambda grp: grp[1])
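As a quick sanity check, here is a minimal, self-contained usage sketch of that counting logic; the three-record sample below is made up for illustration and is not from the question.
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Hypothetical, already-parsed user records; only the 'country' field matters for the grouping
array_of_users = [
    {"id": "1", "country": "France"},
    {"id": "2", "country": "France"},
    {"id": "3", "country": "Spain"},
]

rdd = sc.parallelize(array_of_users)
country_count = rdd.groupBy(lambda user: user['country']).mapValues(len)

print(country_count.min(key=lambda grp: grp[1]))  # ('Spain', 1)
print(country_count.max(key=lambda grp: grp[1]))  # ('France', 3)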
answered Nov 20 at 15:23
M. F.
Hi, thanks for the hint! As far as I understand, my messages are in parsed = kafkaStream.map(lambda v: json.loads(v[1])). How can I go from this to the array_of_users you suggest?
– albus_c
Nov 20 at 15:49
I updated the question including your suggestion.
– albus_c
Nov 20 at 16:28
This may come in handy, take a look at the use of transform: rittmanmead.com/blog/2017/01/…
– M. F.
Nov 21 at 8:03
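To make that transform/foreachRDD hint concrete, here is a minimal sketch of wiring the per-batch counting onto the parsed DStream. It assumes parsed is the DStream of dicts from the comment above and ssc is the already-created StreamingContext; it is an illustration, not the answer's own code.
# Assumes: parsed = kafkaStream.map(lambda v: json.loads(v[1]))  -- a DStream of dicts
#          ssc = StreamingContext(sc, 60)
def report_min_max(time, rdd):
    # Each micro-batch is handed to this function as a plain RDD, so the
    # groupBy/mapValues/min/max logic from the answer applies directly.
    if rdd.isEmpty():
        return
    country_count = rdd.groupBy(lambda user: user['country']).mapValues(len)
    country_min = country_count.min(key=lambda grp: grp[1])
    country_max = country_count.max(key=lambda grp: grp[1])
    print(time, "min:", country_min, "max:", country_max)
    # If a DataFrame is really needed, build it from the batch RDD here, e.g. with
    # SQLContext.getOrCreate(rdd.context).createDataFrame(...), rather than calling
    # createDataFrame on the DStream itself (which is what raised the AttributeError).

parsed.foreachRDD(report_min_max)
ssc.start()
ssc.awaitTermination()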