Using SASL GSSAPI with librdkafka in a cross-realm scenario with Windows SSPI and MIT Kerberos
In this tutorial we'll see how to set up SASL/GSSAPI in a cross-realm scenario with Windows Active Directory and MIT Kerberos. On Windows, librdkafka uses SSPI to authenticate automatically as the current user. If cross-realm trust is set up, Windows users can authenticate directly as Kafka principals.
Everything will be set up on a single Windows Server instance, using WSL2 to run Apache Kafka 4.0 on Ubuntu.
- Go to Server Manager
- Select "Add roles and features"
- Check "Active Directory Domain Services"
- Install it
- After installation, select "Promote this server to a domain controller"
- Add a new forest named `testwindomain.com` (NetBIOS name: `TESTWINDOMAIN`)
- Finish the configuration and restart
- To log in now you have to use `TESTWINDOMAIN\<admin_user>`
- Install the .NET SDK and Git

  ```
  winget.exe install Microsoft.DotNet.SDK.8
  winget.exe install Git.SDK
  ```
- Enable the WSL feature with this PowerShell command

  ```
  Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux, VirtualMachinePlatform
  ```

- Install the Ubuntu distribution

  ```
  wsl --install Ubuntu
  ```

- Install the Linux dependencies

  ```
  apt update && apt install -y krb5* openjdk-21-jdk
  ```

- Get the WSL IP address from PowerShell

  ```
  wsl hostname -I
  ```
- Go to Windows DNS Manager and add a new Forward Lookup Zone (Primary) named `testkafkadomain.com`
- Add two hosts, `kdc` and `kafka`, both pointing to the WSL IP address
- Edit `/etc/krb5.conf`: change the default realm to `TESTKAFKADOMAIN.COM` and add the realm configuration:

  ```
  TESTKAFKADOMAIN.COM = {
      kdc = kdc.testkafkadomain.com
      admin_server = kdc.testkafkadomain.com
      default_domain = testkafkadomain.com
  }
  ```
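Putting both changes together, the relevant parts of `/etc/krb5.conf` would look roughly like this (a minimal sketch: the default realm goes under `[libdefaults]`, the realm block under `[realms]`; the `[domain_realm]` mapping is an optional addition not listed in the steps above, and other defaults shipped with the Ubuntu package can stay in place):

```
[libdefaults]
    default_realm = TESTKAFKADOMAIN.COM

[realms]
    TESTKAFKADOMAIN.COM = {
        kdc = kdc.testkafkadomain.com
        admin_server = kdc.testkafkadomain.com
        default_domain = testkafkadomain.com
    }

[domain_realm]
    .testkafkadomain.com = TESTKAFKADOMAIN.COM
    testkafkadomain.com = TESTKAFKADOMAIN.COM
```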
- Create the Kerberos DB

  ```
  kdb5_util create -s
  ```

- Create an ACL at `/etc/krb5kdc/kadm5.acl` with contents:

  ```
  */admin@TESTKAFKADOMAIN.COM *
  ```

- Restart the servers:

  ```
  systemctl restart krb5-admin-server
  systemctl restart krb5-kdc
  ```

- Create the principal for the Kafka brokers and add a keytab for it

  ```
  mkdir /etc/security/keytabs
  kadmin.local addprinc -randkey kafka/kafka.testkafkadomain.com
  kadmin.local ktadd -k /etc/security/keytabs/kafka.testkafkadomain.com.keytab kafka/kafka.testkafkadomain.com
  ```
- Download and extract the binaries

  ```
  wget -O kafka_2.13-4.0.0.tgz https://dlcdn.apache.org/kafka/4.0.0/kafka_2.13-4.0.0.tgz && \
  tar -xvf kafka_2.13-4.0.0.tgz && cd kafka_2.13-4.0.0
  ```
- Edit `config/server.properties`:
  - Add `SASL_PLAINTEXT://kafka.testkafkadomain.com:9094` to `advertised.listeners` and the corresponding value to `listeners`.
  - Add these new properties:

    ```
    sasl.enabled.mechanisms=GSSAPI
    sasl.kerberos.service.name=kafka
    sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*)s/(.*)@(.*)/$1_$2/,RULE:[2:$1@$0](.*)s/(.*)@(.*)/$1_$2/
    authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
    super.users=User:kafka_TESTKAFKADOMAIN.COM
    ```

    The two rules handle `primary@REALM` and `primary/instance@REALM` respectively: they strip the instance, if present, and replace `@` with `_`, since `@` is not a valid character in Kafka principal names. This keeps users coming from separate realms as distinct Kafka principals. The super user above shows the result of this transformation.
  - Change `log.dirs` to `<home_directory>/kafka_2.13-4.0.0/log-dirs`
  - Change `inter.broker.listener.name` to `SASL_PLAINTEXT`
  - Change the `CONTROLLER` security protocol map to `SASL_PLAINTEXT` in `listener.security.protocol.map`
  - Set `advertised.listeners` to `PLAINTEXT://localhost:9092,SASL_PLAINTEXT://kafka.testkafkadomain.com:9094,CONTROLLER://kafka.testkafkadomain.com:9093`
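The effect of the two `sasl.kerberos.principal.to.local.rules` can be sketched in Python (an illustration of the mapping only, not Kafka's actual implementation):

```python
import re

def principal_to_local(principal):
    """Map a Kerberos principal to a Kafka principal name the way the
    two RULEs do: drop the instance (if any) and replace '@' with '_'."""
    # Matches primary@REALM or primary/instance@REALM
    m = re.fullmatch(r"([^/@]+)(?:/([^@]+))?@(.+)", principal)
    if m is None:
        raise ValueError("not a Kerberos principal: " + principal)
    primary, _instance, realm = m.groups()
    return primary + "_" + realm

# The broker principal becomes the super user configured above
print(principal_to_local("kafka/kafka.testkafkadomain.com@TESTKAFKADOMAIN.COM"))
# kafka_TESTKAFKADOMAIN.COM

# A Windows-realm user keeps its realm in the Kafka principal name
print(principal_to_local("kafka_client@TESTWINDOMAIN.COM"))
# kafka_client_TESTWINDOMAIN.COM
```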
- Edit `kafka_server_jaas.conf` inside `kafka_2.13-4.0.0`, using this configuration:

  ```
  KafkaServer {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      keyTab="/etc/security/keytabs/kafka.testkafkadomain.com.keytab"
      principal="kafka/kafka.testkafkadomain.com@TESTKAFKADOMAIN.COM";
  };
  ```
- Still inside `kafka_2.13-4.0.0`, create the logs directory

  ```
  mkdir log-dirs
  ```

- Create a cluster ID

  ```
  KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
  ```

- Format the storage with the generated ID

  ```
  bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties
  ```

- Tell Kafka to use the JAAS file

  ```
  export KAFKA_OPTS="-Djava.security.auth.login.config=$PWD/kafka_server_jaas.conf"
  ```

- Finally, start Kafka

  ```
  bin/kafka-server-start.sh config/server.properties
  ```
- Create a `command.properties` file with these contents:

  ```
  security.protocol=SASL_PLAINTEXT
  sasl.mechanism=GSSAPI
  sasl.kerberos.service.name=kafka
  sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="/etc/security/keytabs/kafka.testkafkadomain.com.keytab" \
    principal="kafka/kafka.testkafkadomain.com@TESTKAFKADOMAIN.COM";
  ```
- Allow the Windows producer to write to the Kafka topic `test1`

  ```
  bin/kafka-acls.sh --bootstrap-server kafka.testkafkadomain.com:9094 --command-config command.properties --add --allow-principal "User:kafka_client_TESTWINDOMAIN.COM" --topic test1 --resource-pattern-type LITERAL --operation Write
  ```
This is the key part: we must add a shared principal with the same password on both sides, so that a principal from the Windows realm can obtain a cross-realm TGT for the MIT Kerberos realm. Replace the example password.
- On MIT Kerberos, add the `krbtgt` principal of the `TESTWINDOMAIN.COM` realm for the trust towards `TESTKAFKADOMAIN.COM`

  ```
  kadmin.local -q "addprinc -pw example krbtgt/TESTKAFKADOMAIN.COM@TESTWINDOMAIN.COM"
  ```

- On Windows, add the corresponding trust with the same password

  ```
  netdom trust TESTKAFKADOMAIN.COM /Domain:TESTWINDOMAIN.COM /add /realm /passwordt:example
  ```
- On Windows, add a KDC for `TESTKAFKADOMAIN.COM`

  ```
  ksetup /addkdc TESTKAFKADOMAIN.COM kdc.testkafkadomain.com
  ```

- On Windows, map hostnames under `.testkafkadomain.com` to the `TESTKAFKADOMAIN.COM` realm

  ```
  ksetup /addhosttorealmmap .testkafkadomain.com TESTKAFKADOMAIN.COM
  ```
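With a single direct trust like this one, MIT Kerberos finds the path between the two realms on its own, but the relationship can also be spelled out explicitly with a `[capaths]` section in `/etc/krb5.conf` (an optional sketch, not part of the steps above; `.` means a direct hop):

```
[capaths]
    TESTWINDOMAIN.COM = {
        TESTKAFKADOMAIN.COM = .
    }
```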
This is the final part. We now start a .NET producer on Windows as the logged-in user, authenticating to Kafka through MIT Kerberos.
The .NET client uses the prebuilt binaries in `librdkafka.redist`: on Windows it relies on SSPI and doesn't need `libsasl2`, which is only required on Linux. `libsasl2` isn't included in the prebuilt Linux binaries because it dynamically loads plugins that need to call back into its functions, and that cannot work from an already dynamically loaded shared library without adding the `RTLD_GLOBAL` flag to `dlopen`. The simplest and most secure way of using `libsasl2` on Linux is to install librdkafka from the provided Debian or Red Hat packages and link it dynamically.
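The `RTLD_GLOBAL` behaviour mentioned above can be illustrated from Python with `ctypes` (a Linux-only sketch using `libm` as a stand-in for `libsasl2`): a library loaded with `RTLD_GLOBAL` exports its symbols to shared objects `dlopen`ed afterwards, so plugins can resolve calls back into it, while the default `RTLD_LOCAL` keeps them private.

```python
import ctypes

# Load libm with RTLD_GLOBAL: its symbols become visible to any
# shared object loaded later, which is exactly what SASL plugins
# need in order to call back into libsasl2. With the default
# RTLD_LOCAL mode those later loads would fail to resolve them.
libm = ctypes.CDLL("libm.so.6", mode=ctypes.RTLD_GLOBAL)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))  # 1.0 -- the symbol resolved and is callable
```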
Here are the steps for Windows:
- Create a new Active Directory user `TESTWINDOMAIN\kafka_client`

  ```
  New-ADUser -Name "kafka_client" -SamAccountName "kafka_client" -UserPrincipalName "kafka_client@testwindomain.com" -AccountPassword (ConvertTo-SecureString "SecurePassword@!" -AsPlainText -Force) -Enabled $true
  ```

- Add it to the `Administrators` group so it can log in locally (only for this example):

  ```
  Add-ADGroupMember -Identity "Administrators" -Members "kafka_client"
  ```
- Start a new shell as user `TESTWINDOMAIN\kafka_client`

  ```
  runas /user:TESTWINDOMAIN\kafka_client cmd
  ```

- Clone `confluent-kafka-dotnet`

  ```
  cd %USERPROFILE% && C:\git-sdk-64\cmd\git.exe clone https://github.com/confluentinc/confluent-kafka-dotnet.git
  ```

- Go to the `Producer` example

  ```
  cd %USERPROFILE%\confluent-kafka-dotnet\examples\Producer
  ```

- Edit `Program.cs` and change the config to:

  ```
  var config = new ProducerConfig { BootstrapServers = brokerList, SecurityProtocol = SecurityProtocol.SaslPlaintext, SaslMechanism = SaslMechanism.Gssapi };
  ```

- Start the producer and produce some messages

  ```
  dotnet run kafka.testkafkadomain.com:9094 test1
  ```
- You've now set up cross-realm trust between the two realms. If you want to try a consumer too, add these additional ACLs and use the `Consumer` example after editing its configuration.

  ```
  bin/kafka-acls.sh --bootstrap-server kafka.testkafkadomain.com:9094 --command-config command.properties --add --allow-principal "User:kafka_client_TESTWINDOMAIN.COM" --topic test1 --resource-pattern-type LITERAL --operation Read
  bin/kafka-acls.sh --bootstrap-server kafka.testkafkadomain.com:9094 --command-config command.properties --add --allow-principal "User:kafka_client_TESTWINDOMAIN.COM" --group csharp-consumer --resource-pattern-type LITERAL --operation All
  ```