
Red Hat Integration with Active Directory using SSSD.


Introduction

This introduction is adapted from the Red Hat Windows Integration Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Windows_Integration_Guide/sssd-ad-integration.html
There are inherent structural differences between how Windows and Linux handle system users. The user schemas used in Active Directory and standard LDAPv3 directory services also differ significantly. When using an Active Directory identity provider with SSSD to manage system users, it is necessary to reconcile Active Directory-style users to the new SSSD users. There are two ways to achieve this:
  • ID mapping in SSSD can create a map between Active Directory security IDs (SIDs) and the generated UIDs on Linux. ID mapping is the simplest option for most environments because it requires no additional packages or configuration on Active Directory.
  • Unix services can manage POSIX attributes on Windows user and group entries. This requires more configuration and information within the Active Directory environment, but it provides more administrative control over the specific UID/GID values and other POSIX attributes.
Active Directory can replicate user entries and attributes from its local directory into a global catalog, which makes the information available to other domains within the forest. Performance-wise, the global catalog replication is the recommended way for SSSD to get information about users and groups, so that SSSD has access to all user data for all domains within the topology. As a result, SSSD can be used by applications which need to query the Active Directory global catalog for user or group information.
Before we start, here are a few links which are helpful.
http://techuniqe.blogspot.co.uk/2015/04/using-sssd-for-active-directory.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Windows_Integration_Guide/sssd-ad-integration.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Deployment_Guide/index.html#SSSD-Introduction
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_sg_hadoop_security_active_directory_integrate.html

Background about the setup.

Our setup is as follows.
  1. Two Active Directory servers: XYZDOMAIN and ABCDOMAIN.
  2. Two edge nodes running RHEL 6.6, which can communicate directly with both AD servers.
  3. Two slave nodes running behind a firewall, which can only communicate with the edge nodes.
We have to configure the slaves to send their traffic to the edge nodes, which will forward it to the AD servers.

Preparation for the setup.

[Interface Forwarding] from eth1 to eth0 on EDGE node.

We add a route on all the slaves, which reside on a private network, so that they can communicate with the external servers directly through an edge node using interface forwarding.
NOTE : Below testing was done on RHEL 6.6
What we are trying to do.
  1. All the slave nodes will send their data to Edge nodes on a private interface.
  2. The edge node will take the data arriving on the private interface and forward it over the external interface.
NOTE: below, "slaves" refers to all the nodes communicating through the edge node; in this setup the edge node acts like a router.
Slave ifconfig
The slaves run only on the private network.
  1. 192.168.0.8 aka eth0 Private Interface.
Here is the ifconfig.
[root@slave-node ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 
          inet addr:192.168.0.8  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:d8ff:feb7:1efe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:131581 errors:0 dropped:0 overruns:0 frame:0
          TX packets:148636 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11583580 (11.0 MiB)  TX bytes:35866144 (34.2 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:245626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:245626 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:286415155 (273.1 MiB)  TX bytes:286415155 (273.1 MiB)
Edge Node ifconfig
  1. 172.14.14.214 aka eth0 External Interface
  2. 192.168.0.11 aka eth1 Private Interface.
Here is the ifconfig.
[root@edge-node ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 
          inet addr:172.14.14.214  Bcast:172.14.14.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:d8ff:feb7:1f7b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:908442 errors:0 dropped:0 overruns:0 frame:0
          TX packets:235173 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:77363514 (73.7 MiB)  TX bytes:33167098 (31.6 MiB)

eth1      Link encap:Ethernet  HWaddr 
          inet addr:192.168.0.11  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:d8ff:feb7:1f7a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:210510 errors:0 dropped:0 overruns:0 frame:0
          TX packets:177170 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:61583138 (58.7 MiB)  TX bytes:16125613 (15.3 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:13799253 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13799253 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:27734863794 (25.8 GiB)  TX bytes:27734863794 (25.8 GiB)

[root@edge-node ~]#

Configuration.

  1. Create the forwarder on the edge node.
  2. Create a route on all the slaves.
  3. Update /etc/resolv.conf on the slave nodes.
1. Create the forwarder on the edge node.
  1. If you haven’t already enabled forwarding in the kernel, do so.
  2. Open /etc/sysctl.conf and uncomment net.ipv4.ip_forward = 1
  3. Then execute $ sudo sysctl -p
  4. Add the following rules to iptables
Commands.
[root@edge-node ~]# iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE  
[root@edge-node ~]# iptables -A FORWARD --in-interface eth1 -j ACCEPT
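The two rules above can also be generated from a small sketch, which is handy if your interface names differ from eth0/eth1. The `emit_rules` helper and the interface variables are my own additions, not part of the original setup; the printed commands still need to be run as root on the edge node.

```shell
#!/bin/sh
# Sketch: generate the two iptables forwarding rules for given interfaces.
# EXT_IF/INT_IF are assumptions -- match them to your edge node.
EXT_IF="eth0"   # external interface, towards the AD servers
INT_IF="eth1"   # private interface, towards the slaves

emit_rules() {
    # $1 = external interface, $2 = internal interface
    echo "iptables -t nat -A POSTROUTING --out-interface $1 -j MASQUERADE"
    echo "iptables -A FORWARD --in-interface $2 -j ACCEPT"
}

# Print the rules; pipe to `sh` (as root) to actually apply them.
emit_rules "$EXT_IF" "$INT_IF"
```

On RHEL 6 you would typically follow up with `service iptables save` so the rules survive a reboot.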
2. Create a route on all the slaves.
Here is the command to add the route in slaves.
[root@slave-node ~]# route add -net 172.0.0.0 netmask 255.0.0.0 gw 192.168.0.11 eth0
We are telling the slave that all traffic destined for 172.x.x.x must use 192.168.0.11, the private interface on the edge node, as its gateway.
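Note that a route added with the `route` command is lost on reboot. On RHEL 6 it can be persisted in a per-interface route file; the sketch below writes to a temp file so it is safe to run anywhere, but on a real slave the target would be `/etc/sysconfig/network-scripts/route-eth0`.

```shell
#!/bin/sh
# Sketch: persist the static route across reboots.
# GATEWAY is the edge node's private interface, as in the route command above.
GATEWAY="192.168.0.11"
ROUTE_FILE="$(mktemp)"   # stand-in for /etc/sysconfig/network-scripts/route-eth0
printf '172.0.0.0/8 via %s\n' "$GATEWAY" > "$ROUTE_FILE"
cat "$ROUTE_FILE"
```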
3. Update /etc/resolv.conf on slave nodes.
Then we update the /etc/resolv.conf file with the direct IPs of the external servers (172.14.14.174 and 172.14.14.141), as the slave nodes should now be able to reach them. Note that the resolver only treats a line as a comment when the ; or # is in the first column, so the comments go on their own lines.
; generated by /sbin/dhclient-script
; 172.14.14.174 is the DNS server on xyzserver, 172.14.14.141 on abcserver
nameserver 172.14.14.174
nameserver 172.14.14.141
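If you have many slaves, the same file can be generated rather than edited by hand. A minimal sketch, writing to a temp file for safety; on a real slave the target is `/etc/resolv.conf` (and dhclient may overwrite it on lease renewal unless configured otherwise):

```shell
#!/bin/sh
# Sketch: generate resolv.conf content for the slaves.
RESOLV="$(mktemp)"   # stand-in for /etc/resolv.conf
cat > "$RESOLV" <<'EOF'
; DNS servers reachable through the edge-node forwarder
nameserver 172.14.14.174
nameserver 172.14.14.141
EOF
cat "$RESOLV"
```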
Testing if ping works.
[root@slave-server ~]# ping xyzserver.xyzdomain.com
PING xyzserver.xyzdomain.com (172.14.14.174) 56(84) bytes of data.
64 bytes from 172.14.14.174: icmp_seq=1 ttl=127 time=0.866 ms
64 bytes from 172.14.14.174: icmp_seq=2 ttl=127 time=1.09 ms
64 bytes from 172.14.14.174: icmp_seq=3 ttl=127 time=1.12 ms
64 bytes from 172.14.14.174: icmp_seq=4 ttl=127 time=0.933 ms
^C
--- xyzserver.xyzdomain.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7042ms
rtt min/avg/max/mdev = 0.866/1.004/1.122/0.112 ms
[root@slave-server ~]#


[root@slave-server ~]# ping abcserver.abcdomain.com
PING abcserver.abcdomain.com (172.14.14.141) 56(84) bytes of data.
64 bytes from 172.14.14.141: icmp_seq=1 ttl=127 time=0.866 ms
64 bytes from 172.14.14.141: icmp_seq=2 ttl=127 time=1.09 ms
64 bytes from 172.14.14.141: icmp_seq=3 ttl=127 time=1.12 ms
64 bytes from 172.14.14.141: icmp_seq=4 ttl=127 time=0.933 ms
^C
--- abcserver.abcdomain.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7042ms
rtt min/avg/max/mdev = 0.866/1.004/1.122/0.112 ms
[root@slave-server ~]#

Preparation for SSSD.

Prerequisite installations.
yum install sssd sssd-client krb5-workstation samba openldap-clients openssl authconfig

Create a bind user on both domains ABCDOMAIN and XYZDOMAIN.

Open Active Directory and create a user called xyzdomainuser in XYZDOMAIN, and abcdomainuser in ABCDOMAIN.
We will be using these users to bind to the domains via ldap_default_bind_dn; we will get to this later on.

Setting krb5 configuration.

Set up /etc/krb5.conf so that the nodes can communicate with AD using Kerberos.
[libdefaults]
default_realm = XYZDOMAIN.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
[realms]
XYZDOMAIN.COM = {
kdc = xyzserver.xyzdomain.com
admin_server = xyzserver.xyzdomain.com
}

ABCDOMAIN.COM = {
kdc = abcserver.abcdomain.com
admin_server = abcserver.abcdomain.com
}

[domain_realm]
abcdomain.com = ABCDOMAIN.COM
.abcdomain.com = ABCDOMAIN.COM
xyzdomain.com = XYZDOMAIN.COM
.xyzdomain.com = XYZDOMAIN.COM

[logging]
kdc = FILE:/var/krb5/log/krb5kdc.log
admin_server = FILE:/var/krb5/log/kadmin.log
default = FILE:/var/krb5/log/krb5lib.log

Testing krb5 setup.

Once we have the configuration we will use kinit to test (Test both users).
[root@slave-server ~]# kinit xyzdomainuser@XYZDOMAIN.COM
Password for xyzdomainuser@XYZDOMAIN.COM:
[root@slave-server ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xyzdomainuser@XYZDOMAIN.COM

Valid starting     Expires            Service principal
09/12/15 08:37:56  09/12/15 18:38:03  krbtgt/XYZDOMAIN.COM@XYZDOMAIN.COM
        renew until 09/19/15 08:37:56
[root@slave-server ~]# klist -e
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xyzdomainuser@XYZDOMAIN.COM

Valid starting     Expires            Service principal
09/12/15 08:37:56  09/12/15 18:38:03  krbtgt/XYZDOMAIN.COM@XYZDOMAIN.COM
        renew until 09/19/15 08:37:56, Etype (skey, tkt): arcfour-hmac, aes256-cts-hmac-sha1-96
Now we are able to authenticate against the KDC and obtain a TGT, so we are ready for the next steps.
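When checking many nodes, it helps to pull the principal out of klist output in a script rather than eyeball it. A small sketch; `get_principal` is my own helper, and the here-doc stands in for live `klist` output (on a real node you would pipe `klist` into it):

```shell
#!/bin/sh
# Sketch: extract the default principal from klist-style output.
get_principal() {
    awk -F': ' '/^Default principal/ {print $2}'
}

# Demo input, mirroring the klist output shown above:
principal=$(get_principal <<'EOF'
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xyzdomainuser@XYZDOMAIN.COM
EOF
)
echo "$principal"
```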

Testing ldapsearch from the Linux server.

This step is to make sure that our Active Directory servers are accessible and that we are able to search users and groups from the Linux nodes. Go to a Linux machine and execute the commands below.
 ldapsearch -v -x -H ldap://xyzserver.xyzdomain.com/ -D "cn=xyzdomainuser,cn=Users,dc=xyzdomain,dc=com" -W -b "cn=xyzuser2,ou=cmlab,dc=xyzdomain,dc=com"
ldapsearch -v -x -H ldap://abcserver.abcdomain.com/ -D "cn=abcdomainuser,cn=Users,dc=abcdomain,dc=com" -W -b "cn=xyzuser2,ou=cmlab,dc=abcdomain,dc=com"
Here are some more details about the options above.
More details here : http://linuxcommand.org/man_pages/ldapsearch1.html
-v      Run in verbose mode, with many diagnostics written to standard output.
-x      Use simple authentication instead of SASL.
-H ldapuri
        Specify URI(s) referring to the ldap server(s).
-D binddn
        Use the Distinguished Name binddn to bind to the LDAP directory.
-W      Prompt for simple authentication.  This is used instead of specifying the password on the command line.
-b searchbase
        Use searchbase as the starting point for the search  instead  of the default.
Above we are searching for information about xyzuser2 using the bind user xyzdomainuser. When you execute the command, enter the password for xyzdomainuser. This assumes the xyzuser2 user is present in the domain.
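Since the two commands differ only in host, domain, and user, the invocation can be assembled from its parts. A sketch with a hypothetical `build_search` helper (the host and user names are the same placeholders used above); it prints the command rather than running it:

```shell
#!/bin/sh
# Sketch: assemble an ldapsearch command from host/domain/bind-user/target.
build_search() {
    # $1=ldap host  $2=dns domain  $3=bind user  $4=user to look up
    dc="dc=$(echo "$2" | sed 's/\./,dc=/g')"
    echo "ldapsearch -v -x -H ldap://$1/ -D cn=$3,cn=Users,$dc -W -b cn=$4,ou=cmlab,$dc"
}

build_search xyzserver.xyzdomain.com xyzdomain.com xyzdomainuser xyzuser2
build_search abcserver.abcdomain.com abcdomain.com abcdomainuser xyzuser2
```

The `ou=cmlab` search base is taken from the examples above; adjust it to wherever your users actually live in the directory.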

Creating SSSD Configuration.

Finally we are ready to configure SSSD. Below is the SSSD configuration to connect to xyzdomain.com and abcdomain.com.
NOTE: IMPORTANT. Make sure we have AD server certificates stored in /etc/openldap/cacerts/.
Example :
XYZDOMAIN will have `/etc/openldap/cacerts/ssl-cacerts-xyzdomain.cer`
ABCDOMAIN will have `/etc/openldap/cacerts/ssl-cacerts-abcdomain.cer`
Without these certificates, ldaps will not work, SSSD authentication will fail, and you may have to fall back to krb5 for authentication.
If we want to connect to multiple AD servers, we need a separate [domain/...] section for each, as with [domain/abcdomain.com] below.
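One way to obtain those certificate files, assuming the DCs serve LDAPS on the standard port 636, is to capture them from an `openssl s_client` session. A sketch; the demo below feeds `extract_cert` a stand-in transcript instead of opening a live connection:

```shell
#!/bin/sh
# Sketch: pull the PEM certificate block out of an openssl s_client transcript.
# Live usage (assumption -- adjust host and output path to your domain):
#   echo | openssl s_client -connect xyzserver.xyzdomain.com:636 2>/dev/null \
#     | extract_cert > /etc/openldap/cacerts/ssl-cacerts-xyzdomain.cer
extract_cert() {
    sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p'
}

# Demo with a fake transcript (not a real certificate):
extract_cert <<'EOF'
depth=0 CN = xyzserver.xyzdomain.com
-----BEGIN CERTIFICATE-----
TUlJQi4uLmZha2UuLi4=
-----END CERTIFICATE-----
---
EOF
```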
[sssd]
config_file_version = 2
debug_level = 0
domains = xyzdomain.com, abcdomain.com
services = nss, pam

[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3
entry_cache_timeout = 3
entry_cache_nowait_percentage = 75
debug_level = 8
account_cache_expiration = 1

[pam]
reconnection_retries = 3

[domain/xyzdomain.com]
debug_level = 8
id_provider = ldap
auth_provider = ldap
chpass_provider = krb5
access_provider = simple
cache_credentials = false
min_id = 1000
ad_server = xyzserver.xyzdomain.com
ldap_uri = ldap://xyzserver.xyzdomain.com:389
ldap_schema = ad
krb5_realm = XYZDOMAIN.COM
ldap_id_mapping = true
entry_cache_timeout = 3
ldap_referrals = false
ldap_default_bind_dn = CN=xyzdomainuser,CN=Users,DC=xyzdomain,DC=com
ldap_default_authtok_type = password
ldap_default_authtok = Welcome@123
fallback_homedir = /home/%u
ldap_user_home_directory = unixHomeDirectory
ldap_tls_cacert = /etc/openldap/cacerts/ssl-cacerts-xyzdomain.cer

###################################################
# Update below with another AD server as required #
###################################################

[domain/abcdomain.com]
debug_level = 8
id_provider = ldap
auth_provider = ldap
chpass_provider = krb5
access_provider = simple
cache_credentials = false
min_id = 1000
ad_server = abcserver.abcdomain.com
ldap_uri = ldap://abcserver.abcdomain.com:389
ldap_schema = ad
krb5_realm = ABCDOMAIN.COM
ldap_id_mapping = true
entry_cache_timeout = 3
ldap_referrals = false
ldap_default_bind_dn = CN=abcdomainuser,CN=Users,DC=abcdomain,DC=com
ldap_default_authtok_type = password
ldap_default_authtok = Welcome@123
fallback_homedir = /home/%u
ldap_user_home_directory = unixHomeDirectory
ldap_tls_cacert = /etc/openldap/cacerts/ssl-cacerts-abcdomain.cer
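Because the configuration carries the bind password in clear text, sssd requires /etc/sssd/sssd.conf to be owned by root with mode 0600, and will refuse to start otherwise. A sketch using a temp file stand-in; on a real node run the chmod/chown against /etc/sssd/sssd.conf itself:

```shell
#!/bin/sh
# Sketch: lock down the sssd config. CONF is a temp stand-in here.
CONF="$(mktemp)"               # real target: /etc/sssd/sssd.conf
chmod 0600 "$CONF"
# chown root:root /etc/sssd/sssd.conf   # run as root on the real file
stat -c '%a' "$CONF"
```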
Install oddjob-mkhomedir to automatically create the home directory whenever a user logs in.
yum install oddjob-mkhomedir    
Enable sssd, local authorization, and krb5, then update the configuration.
authconfig --enablesssd --enablesssdauth --enablelocauthorize --enablekrb5 --update    
NOTE: Check sssd.conf again; sometimes authconfig will insert a default domain.
You can remove it and make the sssd.conf file match what we have above.
Start sssd services.
service sssd start
service oddjobd start
Testing our setup.
Checking a user who is present in the Active Directory XYZDOMAIN.
[root@slave-server ~]# id xyzdomainuser
uid=62601149(xyzdomainuser) gid=62600513(Domain Users) groups=62600513(Domain Users),62601134(supergroup),62601133(hdfs)
[root@slave-server ~]# su xyzdomainuser
[xyzdomainuser@slave-server root]$ cd ~
[xyzdomainuser@slave-server ~]$ pwd
/home/xyzdomainuser
Next we try to login from remote.
[xyzdomainuser@slave-server ~]$ exit
exit
[root@slave-server ~]# ssh xyzdomainuser@192.168.0.9
xyzdomainuser@192.168.0.9's password:
Last login: Sat Sep 12 07:46:15 2015 from slave-server.xyzdomain.com
[xyzdomainuser@slave-server ~]$ pwd
/home/xyzdomainuser
[xyzdomainuser@slave-server ~]$ id
uid=62601149(xyzdomainuser) gid=62600513(Domain Users) groups=62600513(Domain Users),62601133(hdfs),62601134(supergroup)
[xyzdomainuser@slave-server ~]$
We are able to log in, and /home/xyzdomainuser was automatically created when the user logged in.
Now checking users for ABCDOMAIN.
[root@slave-server ~]# id abcdomainuser
uid=1916401111(abcdomainuser) gid=1916400513 groups=1916400513,1916401114(supergroup-test),1916401113(hadoop-test),1916401112,1916401112
[root@slave-server ~]# su abcdomainuser
sh-4.1$ pwd
/root
sh-4.1$ cd ~
sh-4.1$ pwd
/home/abcdomainuser
sh-4.1$ id
uid=1916401111(abcdomainuser) gid=1916400513 groups=1916400513,1916401112,1916401113(hadoop-test),1916401114(supergroup-test)
sh-4.1$ exit
exit
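The manual `id` checks above can be folded into a small scripted verification for each node. `in_group` is a hypothetical helper; the demo reuses the groups field captured for xyzdomainuser above, and on a live node you would feed it the groups portion of `id <user>` output instead:

```shell
#!/bin/sh
# Sketch: check that a resolved AD user is in an expected group.
in_group() {
    # $1 = comma-separated groups field from `id`, $2 = expected entry
    echo "$1" | tr ',' '\n' | grep -qx "$2"
}

# Demo with the groups field captured for xyzdomainuser above:
groups="62600513(Domain Users),62601134(supergroup),62601133(hdfs)"
if in_group "$groups" "62601133(hdfs)"; then
    echo "hdfs membership: ok"
fi
```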
We are done.
