
No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)

Most of the relevant information is on the Cloudera website, so it is worth checking there first if you see anything similar:
http://www.cloudera.com/content/cloudera/en/documentation/cdh5/v5-0-0/CDH5-Security-Guide/cdh5sg_troubleshooting.html
http://www.cloudera.com/content/cloudera/en/documentation/cdh5/v5-0-0/CDH5-Security-Guide/cdh5sg_kerbprin_to_sn.html
http://www.cloudera.com/content/cloudera/en/documentation/cdh5/v5-0-0/CDH5-Security-Guide/cdh5sg_debug_sun_kerberos_enable.html
http://www.cloudera.com/content/cloudera/en/documentation/cdh5/v5-0-0/CDH5-Security-Guide/cdh5sg_ldap_mapping.html
Since none of these fit our issue, we had to slog it out ourselves.
We have two domain forests in our environment, ABC and XYZ, and we were not able to authenticate normal users from either of them: hadoop fs -ls / failed even after successfully getting a TGT from Active Directory.
  1. We added both realms, ABC.MYDOMAIN.COM and XYZ.MYDOMAIN.COM, as Trusted Kerberos Realms in Cloudera Manager and restarted the cluster (see the mapping check just after this list).
  2. When we use the hdfs keytab (auto-generated by Cloudera Manager), we are able to execute hadoop fs -ls / without any problem.
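As a quick sanity check on the trusted-realm setup, Hadoop's principal-to-short-name mapping can be tested directly from the edge node. We did not capture this in the original session, so the output below is illustrative and assumes the default auth_to_local rules generated by Cloudera Manager:
[root@my-edge-server ~]# hadoop org.apache.hadoop.security.HadoopKerberosName ahmed-user@ABC.MYDOMAIN.COM
Name: ahmed-user@ABC.MYDOMAIN.COM to ahmed-user
If the realm is not trusted or no auth_to_local rule matches, this prints a "No rules applied" error instead of a short name, which points at a mapping problem rather than a ticket problem.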
Here is how access works when using the hdfs service keytab.
[root@my-edge-server ~]# su - hdfs 
[hdfs@my-edge-server ~]$ kinit -kt hdfs.keytab hdfs/my-edge-server.subdomain.in.mydomain.com@XYZ.MYDOMAIN.COM 
[hdfs@my-edge-server ~]$ klist -e 
Ticket cache: FILE:/tmp/krb5cc_496 
Default principal: hdfs/my-edge-server.subdomain.in.mydomain.com@XYZ.MYDOMAIN.COM 

Valid starting     Expires            Service principal
09/11/15 10:44:31  09/11/15 20:44:31  krbtgt/XYZ.MYDOMAIN.COM@XYZ.MYDOMAIN.COM
        renew until 09/18/15 10:44:31, Etype (skey, tkt): arcfour-hmac, aes256-cts-hmac-sha1-96
[hdfs@my-edge-server ~]$ hadoop fs -ls / 
Found 6 items 
drwxr-xr-x   - hdfs  supergroup          0 2015-05-29 15:32 /benchmarks
drwxr-xr-x   - hbase hbase               0 2015-09-11 09:11 /hbase
drwxrwxr-x   - solr  solr                0 2015-05-29 11:49 /solr
drwxrwxrwx   - hdfs  supergroup          0 2015-09-10 10:29 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2015-05-29 16:22 /use
drwxrwxr-x   - hdfs  supergroup          0 2015-09-10 11:36 /user
[hdfs@my-edge-server ~]$ 
Here is the complete error for a user in ABC.MYDOMAIN.COM; we get a similar error for users from the XYZ domain as well.
[root@my-edge-server ~]# kinit ahmed-user@ABC.MYDOMAIN.COM 
Password for ahmed-user@ABC.MYDOMAIN.COM: 
[root@my-edge-server ~]# klist -e 
Ticket cache: FILE:/tmp/krb5cc_0 
Default principal: ahmed-user@ABC.MYDOMAIN.COM 

Valid starting     Expires            Service principal
09/11/15 10:31:16  09/11/15 20:31:22  krbtgt/ABC.MYDOMAIN.COM@ABC.MYDOMAIN.COM
        renew until 09/18/15 10:31:16, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
Before you execute the command below, set HADOOP_OPTS to enable verbose Kerberos debugging output.
[root@my-edge-server ~]# export HADOOP_OPTS="-Dsun.security.krb5.debug=true"
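If you need even more detail, the Hadoop client's own logging can be turned up as well. We did not need this in the end, so treat it as an optional extra rather than part of the original run:
[root@my-edge-server ~]# export HADOOP_ROOT_LOGGER=DEBUG,console
With this set, the hadoop command prints its security and RPC negotiation steps to the console, which helps when the Kerberos debug output alone is not conclusive.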
Then we execute the command.
[root@my-edge-server ~]# hadoop fs -ls / 
Java config name: null 
Native config name: /etc/krb5.conf 
Loaded from native config 
KinitOptions cache name is /tmp/krb5cc_0 
DEBUG CCacheInputStream client principal is ahmed-user@ABC.MYDOMAIN.COM 
DEBUG CCacheInputStream server principal is krbtgt/ABC.MYDOMAIN.COM@ABC.MYDOMAIN.COM 
DEBUG CCacheInputStream key type: 18 
DEBUG CCacheInputStream auth time: Fri Sep 11 10:31:22 BST 2015 
DEBUG CCacheInputStream start time: Fri Sep 11 10:31:16 BST 2015 
DEBUG CCacheInputStream end time: Fri Sep 11 20:31:22 BST 2015 
DEBUG CCacheInputStream renew_till time: Fri Sep 18 10:31:16 BST 2015 
 CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL; PRE_AUTH; 
 unsupported key type found the default TGT: 18 
15/09/11 10:31:39 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over master-node.subdomain.in.mydomain.com/172.14.14.11:8020 after 1 fail over attempts. Trying to fail over immediately. 
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "my-edge-server.subdomain.in.mydomain.com/172.14.14.8"; destination host is: "master-node.subdomain.in.mydomain.com":8020; 
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) 
at org.apache.hadoop.ipc.Client.call(Client.java:1472) 
at org.apache.hadoop.ipc.Client.call(Client.java:1399) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) 
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:606) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source) 
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1982) 
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1128) 
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124) 
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1124) 
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57) 
at org.apache.hadoop.fs.Globber.glob(Globber.java:265) 
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1625) 
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326) 
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224) 
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207) 
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190) 
at org.apache.hadoop.fs.shell.Command.run(Command.java:154) 
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287) 
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) 
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) 
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) 
Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
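The debug line "unsupported key type found the default TGT: 18" above is the real clue: enctype 18 is aes256-cts-hmac-sha1-96, the session key type of our user's ticket. One quick check worth running at this point (sketched here, not taken from our session) is whether the JRE used by Hadoop allows 256-bit AES at all, since older Oracle JDKs cap AES at 128 bits unless the JCE Unlimited Strength policy files are installed:
[root@my-edge-server ~]# jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
A result of 128 means the restricted policy is active and that JVM cannot use AES-256 tickets; 2147483647 means unlimited strength. In our case we took the client-side route instead, as shown in the solution below.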
Solution:
[ahmed-user@my-edge-server ~]$ kinit ahmed-user@ABC.MYDOMAIN.COM
Password for ahmed-user@ABC.MYDOMAIN.COM:
[ahmed-user@my-edge-server ~]$ klist -e
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: ahmed-user@ABC.MYDOMAIN.COM

Valid starting     Expires            Service principal
09/11/15 11:38:46  09/11/15 21:38:54  krbtgt/ABC.MYDOMAIN.COM@ABC.MYDOMAIN.COM
        renew until 09/18/15 11:38:46, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
Compare the Etype lines in the two klist outputs above: the working hdfs ticket has a session key (skey) of arcfour-hmac, while our user's ticket has a session key of aes256-cts-hmac-sha1-96, which is exactly the enctype 18 flagged as "unsupported key type" in the debug output. In other words, the cluster could only handle an arcfour-hmac session key. So, using ktutil, we created a keytab containing an arcfour-hmac (RC4-HMAC) entry, obtained the ticket from that keytab instead, and then it started working.
[ahmed-user@my-edge-server ~]$ ktutil
ktutil:  addent -password -p ahmed-user@ABC.MYDOMAIN.COM -k 1 -e RC4-HMAC
enter password for ahmed-user
ktutil:  wkt ahmed-user_new.keytab
ktutil:  quit
[ahmed-user@my-edge-server ~]$  
[ahmed-user@my-edge-server ~]$ kinit -kt ahmed-user_new.keytab ahmed-user@ABC.MYDOMAIN.COM
[ahmed-user@my-edge-server ~]$ klist -e
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: ahmed-user@ABC.MYDOMAIN.COM

Valid starting     Expires            Service principal
09/11/15 11:45:29  09/11/15 21:45:30  krbtgt/ABC.MYDOMAIN.COM@ABC.MYDOMAIN.COM
        renew until 09/18/15 11:45:29, Etype (skey, tkt): arcfour-hmac, aes256-cts-hmac-sha1-96
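An alternative we did not try, sketched here as an assumption rather than a verified fix: the same arcfour-hmac session key should be obtainable without building a keytab, by restricting the enctypes that kinit requests in /etc/krb5.conf under [libdefaults]:
[libdefaults]
    default_tkt_enctypes = arcfour-hmac
    default_tgs_enctypes = arcfour-hmac
Both are standard MIT krb5 options; whether the AD KDC honours them depends on which enctypes are enabled for the account, so verify with klist -e after a fresh kinit.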
We had already created a home directory for ahmed-user (/user/ahmed-user) using the hdfs superuser.
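The steps below are a reconstruction of how that was done rather than a capture from the original session, so treat them as a sketch (the hdfs principal and keytab are the same ones used earlier):
[root@my-edge-server ~]# su - hdfs
[hdfs@my-edge-server ~]$ kinit -kt hdfs.keytab hdfs/my-edge-server.subdomain.in.mydomain.com@XYZ.MYDOMAIN.COM
[hdfs@my-edge-server ~]$ hadoop fs -mkdir /user/ahmed-user
[hdfs@my-edge-server ~]$ hadoop fs -chown ahmed-user:ahmed-user /user/ahmed-user
With that in place, ahmed-user can list the filesystem and create directories under /user/ahmed-user: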
[ahmed-user@my-edge-server ~]$ hadoop fs -ls /
Found 6 items
drwxr-xr-x   - hdfs  supergroup          0 2015-05-29 15:32 /benchmarks
drwxr-xr-x   - hbase hbase               0 2015-09-11 09:11 /hbase
drwxrwxr-x   - solr  solr                0 2015-05-29 11:49 /solr
drwxrwxrwx   - hdfs  supergroup          0 2015-09-10 10:29 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2015-05-29 16:22 /use
drwxrwxr-x   - hdfs  supergroup          0 2015-09-10 11:36 /user
[ahmed-user@my-edge-server ~]$ hadoop fs -mkdir /user/ahmed-user/test_directory
[ahmed-user@my-edge-server ~]$ hadoop fs -ls /user/ahmed-user
Found 2 items
drwx------   - ahmed-user ahmed-user          0 2015-09-11 11:17 /user/ahmed-user/.staging
drwxr-xr-x   - ahmed-user ahmed-user          0 2015-09-11 11:45 /user/ahmed-user/test_directory
[ahmed-user@my-edge-server ~]$
