Flatten a Spark DataFrame's column values and put them into a variable



























Spark version 1.6.0, Scala version 2.10.5.



I have a Spark SQL DataFrame df like this:



+---------------+-------------------------------+
|address        | attributes                    |
+---------------+-------------------------------+
|1314 44 Avenue | Tours, Mechanics, Shopping    |
|115 25th Ave   | Restaurant, Mechanics, Brewery|
+---------------+-------------------------------+


From this DataFrame, I would like the values as below:



Tours, Mechanics, Shopping, Brewery


If I do this,



df.select(df("attributes")).collect().foreach(println)


I get,



[Tours, Mechanics, Shopping]
[Restaurant, Mechanics, Brewery]


I thought I could use flatMap instead, and then found explode, so I tried to put the result into a variable using:



val allValues = df.withColumn(df("attributes"), explode("attributes"))


but I am getting an error:




error: type mismatch;
 found   : org.apache.spark.sql.Column
 required: String




I was thinking that if I can get an output using explode, I can then use distinct to get the unique values after flattening them.



How can I get the desired output?










scala apache-spark dataframe

asked Nov 25 '18 at 1:25, edited Nov 26 '18 at 15:33, by user9431057

  • Anyone downvoting, please give your reasoning (especially for newcomers). That gives us guidance to correct ourselves and encourages us to learn, and it makes visible to everyone what went wrong. No offense; we all learn together, and learn from mistakes. (There is documentation on how to ask a good question, but sometimes it's just difficult and we need a little push :) )
    – user9431057, Nov 25 '18 at 16:39
















2 Answers

I strongly recommend you use a Spark 2.x version. On Cloudera, when you issue "spark-shell" it launches the 1.6.x shell; however, if you issue "spark2-shell", you get the 2.x shell. Check with your admin.



But if you need a Spark 1.6, RDD-based solution, try this:



// In a Spark 1.6 shell, use `import sqlContext.implicits._` instead;
// the predefined `spark` session exists from 2.0 onwards.
import spark.implicits._
import scala.collection.mutable   // for mutable.WrappedArray

val df = Seq(("1314 44 Avenue", Array("Tours", "Mechanics", "Shopping")),
             ("115 25th Ave", Array("Restaurant", "Mechanics", "Brewery"))).toDF("address", "attributes")

df.rdd.flatMap(x => x.getAs[mutable.WrappedArray[String]]("attributes"))
  .distinct().collect.foreach(println)


Results:



Brewery
Shopping
Mechanics
Restaurant
Tours


If the "attributes" column is not an array but a comma-separated string, then use the version below, which gives you the same results:



val df = Seq(("1314 44 Avenue", "Tours,Mechanics,Shopping"),
             ("115 25th Ave", "Restaurant,Mechanics,Brewery")).toDF("address", "attributes")

df.rdd.flatMap(x => x.getAs[String]("attributes").split(","))
  .distinct().collect.foreach(println)
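And since the question asks for the values in a variable rather than printed, here is a minimal sketch (assuming the comma-separated-string schema above) that collects the distinct values to the driver instead:

// Collect the flattened, distinct attribute values into a local Array[String].
val allValues: Array[String] =
  df.rdd
    .flatMap(x => x.getAs[String]("attributes").split(","))
    .distinct()
    .collect()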





– answered Nov 26 '18 at 12:06 by stack0114106
The problem is that withColumn expects a String as its first argument (the name of the added column), but you're passing it a Column here: df.withColumn(df("attributes"), ...).
You only need to pass "attributes" as a String.



Additionally, you need to pass a Column to the explode function, but you're passing a String. To make it a Column you can use df("columnName") or the Scala shorthand $ syntax, $"columnName".
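Putting both fixes together, the original withColumn line would look like this — a sketch, assuming attributes is an array column (with a plain string column explode will fail, as the comments below show):

import org.apache.spark.sql.functions._
// The $ shorthand needs the implicits in scope, e.g. import sqlContext.implicits._
// First argument: the column name as a String; second: a Column expression.
val exploded = df.withColumn("attributes", explode($"attributes"))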



I hope this example helps:



import org.apache.spark.sql.functions._
val allValues = df.select(explode($"attributes").as("attributes")).distinct


Note that this will only preserve the attributes column, since you want the distinct elements from that one.
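And to put the values into a local variable, as the question's title asks, one could then collect the single-column result — a sketch:

// Each Row has one field (the attribute), so getString(0) extracts it.
val values: Array[String] = allValues.collect().map(_.getString(0))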






– edited Nov 25 '18 at 2:24, answered Nov 25 '18 at 1:30 by Luis Miguel Mejía Suárez
• Hi @user9431057, I updated the answer; I hope it helps now. I would also suggest searching for a Scala introduction tutorial; there are plenty of them on the internet, many of them aimed especially at Spark newcomers.
  – Luis Miguel Mejía Suárez, Nov 25 '18 at 1:57











• @user9431057 which Spark & Scala versions are you using?
  – Luis Miguel Mejía Suárez, Nov 25 '18 at 2:08













• I am using Spark 1.6.0, Scala version 2.10.5.
  – user9431057, Nov 25 '18 at 2:10








• @user9431057 ah, that explains it. The problem is that array_distinct is a new function added in Spark 2.4.0. But I also noticed that it wouldn't solve your problem, since you need the unique values over the entire column, not just within each array. I will update the answer.
  – Luis Miguel Mejía Suárez, Nov 25 '18 at 2:16











• Sorry, still getting an error: cannot resolve 'explode(attributes)' due to data type mismatch: input to function explode should be array or map type, not StringType.
  – user9431057, Nov 25 '18 at 2:32










