Oozie Spark HBase job, invalid credentials exception





1

I have an issue with Kerberos credentials.
The job runs on a cluster, and the keytabs are provided on each datanode.
It is an Oozie workflow shell action whose purpose is to write to HBase from a Spark job.
If the job is run in cluster mode without Oozie, it works as expected. But with Oozie it throws the following exception:



WARN AbstractRpcClient: Exception encountered while connecting to the server:
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/11/26 15:30:24 ERROR AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
    at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:611)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:737)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:734)
    at java.security.AccessController.doPrivileged(Native Method)


The Oozie shell action looks like this:



<action name="spark-hbase" retry-max="${retryMax}" retry-interval="${retryInterval}">
    <shell xmlns="uri:oozie:shell-action:0.3">
        <exec>submit.sh</exec>
        <env-var>QUEUE_NAME=${queueName}</env-var>
        <env-var>PRINCIPAL=${principal}</env-var>
        <env-var>KEYTAB=${keytab}</env-var>
        <env-var>VERBOSE=${verbose}</env-var>
        <env-var>CURR_DATE=${firstNotNull(currentDate, "")}</env-var>
        <env-var>DATA_TABLE=${dataTable}</env-var>
        <file>bin/submit.sh</file>
    </shell>
    <ok to="end"/>
    <error to="kill"/>
</action>


The spark-submit command in submit.sh looks like this:



CLASS="App class location"
JAR="compiled jar file"

HBASE_JARS="HBase jars"
HBASE_CONF='hbase-site.xml location'

HIVE_JARS="Hive jars"
HIVE_CONF='tez-site.xml location'

HADOOP_CONF='hdfs-site.xml location'

SPARK_BIN_DIR="spark2-client bin directory location"

${SPARK_BIN_DIR}/spark-submit \
--class ${CLASS} \
--principal "${PRINCIPAL}" \
--keytab "${KEYTAB}" \
--master yarn \
--deploy-mode cluster \
--driver-memory 10G \
--executor-memory 4G \
--num-executors 10 \
--conf spark.default.parallelism=24 \
--jars ${HBASE_JARS},${HIVE_JARS} \
--files ${HBASE_CONF},${HIVE_CONF},${HADOOP_CONF} \
--conf spark.ui.port=4042 \
--conf "spark.executor.extraJavaOptions=-verbose:class -Dsun.security.krb5.debug=true" \
--conf "spark.driver.extraJavaOptions=-verbose:class -Dsun.security.krb5.debug=true" \
--queue "${QUEUE_NAME}" \
${JAR} \
--app.name "spark-hbase" \
--data.table "${DATA_TABLE}" \
--verbose
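
A quick way to confirm whether the container running the shell action actually holds a Kerberos ticket is to print the credential cache before spark-submit runs. The check below is not part of the original script; it is only a debugging sketch and assumes the standard MIT Kerberos client tool klist is on the PATH of the worker nodes:

# Debugging aid (hypothetical addition): klist exits non-zero when no ticket cache exists,
# so the Oozie launcher stdout will show whether this container has a TGT
klist || echo "No Kerberos ticket cache available in this container"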









apache-spark hbase kerberos oozie-workflow






asked Nov 26 '18 at 15:24









Ardian Koltraka

61

















  • All I can tell you about Spark+Kerberos+HBase is in stackoverflow.com/questions/44265562/… - enjoy...

    – Samson Scharfrichter
    Nov 26 '18 at 17:28











  • Thanks for the reply, I took a look at that link, but my problem is that obtaining delegation tokens (obtainDelegationTokens) for HBase via Oozie is not working. One more thing: I am trying to write to HBase via a Hive table backed by HBase, so there is a Spark job that writes to a Hive table backed by HBase.

    – Ardian Koltraka
    Nov 27 '18 at 16:36











  • Oozie cannot create Kerberos tickets for your job, simply because it does not have your password... All it can do is request HDFS/Yarn/Hive/HBase to create auth tokens in your name (because oozie is a trusted, privileged "proxy" account). Except that Hive & HBase tokens are created only if you specify the appropriate credentials in your action. Cf. stackoverflow.com/questions/33212535/…

    – Samson Scharfrichter
    Nov 27 '18 at 23:08
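
    As a rough illustration of what the previous comment refers to, here is a minimal sketch of an Oozie credentials section that requests HBase and Hive metastore tokens for an action. It assumes the hbase and hcat credential types are enabled in oozie-site.xml on the cluster; the credential names, metastore URI, and principal below are placeholders, not values from the question:

    <workflow-app name="spark-hbase-wf" xmlns="uri:oozie:workflow:0.5">
        <credentials>
            <!-- HBase delegation token (uses the HBase config known to the Oozie server) -->
            <credential name="hbase_creds" type="hbase"/>
            <!-- Hive metastore (HCatalog) delegation token -->
            <credential name="hcat_creds" type="hcat">
                <property>
                    <name>hcat.metastore.uri</name>
                    <value>thrift://METASTORE_HOST:9083</value>
                </property>
                <property>
                    <name>hcat.metastore.principal</name>
                    <value>hive/_HOST@EXAMPLE.COM</value>
                </property>
            </credential>
        </credentials>
        <!-- actions that need the tokens reference the credentials by name -->
        <action name="spark-hbase" cred="hbase_creds,hcat_creds">
            <!-- shell action as shown in the question -->
        </action>
    </workflow-app>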













  • Thank you for the suggestions, I got it working with basically two steps: 1) creating a softlink to hbase-site.xml in /etc/spark2/conf on the host the Spark job is submitted from: ln -s /etc/hbase/conf/hbase-site.xml /etc/spark2/conf/hbase-site.xml 2) adding a kinit command in the shell script before the spark-submit command: kinit -kt "${KEYTAB}" "${PRINCIPAL}"

    – Ardian Koltraka
    Nov 28 '18 at 16:04
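
    Putting those two steps together, a sketch of how the fix fits around the submit.sh from the question (same variable names as above; the softlink is a one-time step that needs write access to /etc/spark2/conf on the submitting host):

    # One-time setup on the host that submits the Spark job:
    ln -s /etc/hbase/conf/hbase-site.xml /etc/spark2/conf/hbase-site.xml

    # In submit.sh, obtain a TGT from the keytab right before spark-submit:
    kinit -kt "${KEYTAB}" "${PRINCIPAL}"

    # ... followed by the unchanged ${SPARK_BIN_DIR}/spark-submit command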





















1 Answer

0














Creating a soft link on all the nodes in the cluster may not always be feasible. We resolved it by adding the HBase configuration directory to the Spark configuration, overriding the SPARK_CONF_DIR environment variable in the shell before the spark-submit command.



export SPARK_CONF_DIR=/etc/spark2/conf:/etc/hbase/conf
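
For context, a sketch of how that could sit at the top of the submit.sh from the question. The paths are the distribution defaults used earlier in this thread; whether SPARK_CONF_DIR accepts a colon-separated list may depend on the Spark packaging, so treat this as the answerer's recipe rather than a general guarantee:

# submit.sh (sketch): let the Spark client see both its own conf and the HBase conf
export SPARK_CONF_DIR=/etc/spark2/conf:/etc/hbase/conf

# ... the ${SPARK_BIN_DIR}/spark-submit command from the question follows unchanged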





        answered Feb 23 at 11:04









Ami Ranjan

        161



