尘世壹俗人 2026-01-26 23:41 · Acceptance rate: 76.5%
Viewed 4 times
Accepted

Can krb5 be configured with multiple realms?

How do I correctly configure multiple realms in Kerberos? Background: I deployed a single-realm Kerberos on CentOS 7, with a configuration based on the default files shipped with the packages. Once deployment and everyday use were working fine, I wondered, judging by the format of the default files, whether Kerberos could host more than one realm, so I changed the relevant configuration to try it.

The contents of /etc/krb5.conf are as follows:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 udp_preference_limit = 1
 default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 des3-hmac-sha1
 default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 des3-hmac-sha1
 permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 des3-hmac-sha1

[realms]
 HADOOP.COM = {
  kdc = node4:88
  admin_server = node4:749
 }

 SPARK.COM = {
  kdc = node4:88
  admin_server = node4:749
 }

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM
 .spark.com = SPARK.COM
 spark.com = SPARK.COM

[capaths]
 HADOOP.COM = {
  SPARK.COM = .
 }
 
 SPARK.COM = {
  HADOOP.COM = .
 }
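One thing worth noting: the [capaths] section above only declares the trust path; for cross-realm authentication to actually work, matching cross-realm krbtgt principals have to exist in both databases with identical keys. A minimal sketch, assuming MIT krb5's kadmin.local and a placeholder password (`crosspass` is made up for illustration):

```shell
# Trust from SPARK.COM toward HADOOP.COM: krbtgt/SPARK.COM@HADOOP.COM must
# exist in BOTH databases, created with the same password (hence same key).
kadmin.local -r HADOOP.COM -q "addprinc -pw crosspass krbtgt/SPARK.COM@HADOOP.COM"
kadmin.local -r SPARK.COM  -q "addprinc -pw crosspass krbtgt/SPARK.COM@HADOOP.COM"
# And the reverse direction, for HADOOP.COM principals using SPARK.COM services:
kadmin.local -r HADOOP.COM -q "addprinc -pw crosspass krbtgt/HADOOP.COM@SPARK.COM"
kadmin.local -r SPARK.COM  -q "addprinc -pw crosspass krbtgt/HADOOP.COM@SPARK.COM"
```

Without these principals, cross-realm ticket requests fail even if each realm works on its own.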

The contents of /var/kerberos/krb5kdc/kdc.conf are as follows:

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  database_name = /var/kerberos/krb5kdc/principal.hadoop
  master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.hadoop.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.hadoop.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

 SPARK.COM = {
  database_name = /var/kerberos/krb5kdc/principal.spark
  master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.spark.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.spark.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

Both database files were generated successfully via the commands:

kdb5_util create -s -r HADOOP.COM -d /var/kerberos/krb5kdc/principal.hadoop
kdb5_util create -s -r SPARK.COM -d /var/kerberos/krb5kdc/principal.spark
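One thing to check after this step: a single MIT krb5kdc process serves only the default realm unless every realm is named explicitly with `-r`, and kadmind serves exactly one realm per process. A sketch of starting the daemons for both realms (this assumes MIT krb5 as shipped on CentOS 7; since both realm stanzas above point kadmind at port 749, the second instance would need a different port):

```shell
# krb5kdc accepts -r multiple times, one per realm it should serve.
krb5kdc -r HADOOP.COM -r SPARK.COM

# kadmind handles a single realm per process, so one instance per realm;
# the second instance needs a distinct admin port (-port, or kadmind_port
# in the kdc.conf realm stanza) to avoid clashing on 749.
kadmind -r HADOOP.COM
kadmind -r SPARK.COM -port 750
```

With systemd on CentOS 7, these flags would typically go into the daemon's environment file (e.g. KRB5KDC_ARGS in /etc/sysconfig/krb5kdc) rather than being run by hand; the exact variable name depends on the stock unit file.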

The problem appeared when starting the services: the log showed errors indicating that SPARK.COM principals could not be found in the Kerberos database:

239.133: CLIENT_NOT_FOUND: admin/admin@SPARK.COM for kadmin/node4@SPARK.COM, Client not found in Kerberos database

And when obtaining tickets on the command line, everything in the HADOOP.COM realm works normally, but every principal in the SPARK.COM realm fails on the client:

kinit: Client 'admin/admin@SPARK.COM' not found in Kerberos database while getting initial credentials

What puzzles me is this: if nothing in SPARK.COM worked at all, the problem would be easy to pin down, yet all of SPARK.COM's principals can be listed:

[root@node4 opt] # kadmin.local -r SPARK.COM -q "listprincs"
Authenticating as principal admin/admin@HADOOP.COM with password.
K/M@SPARK.COM
admin/admin@SPARK.COM
kadmin/admin@SPARK.COM
kadmin/changepw@SPARK.COM
kadmin/node4@SPARK.COM
kiprop/node4@SPARK.COM
krbtgt/SPARK.COM@SPARK.COM
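The listing above is not actually contradictory, because the two commands take different paths. kadmin.local bypasses the network daemons entirely and opens the database file named in the kdc.conf realm stanza directly, so `-r SPARK.COM` reads principal.spark even if the running krb5kdc and kadmind know nothing about SPARK.COM; kinit, by contrast, goes over the network to the KDC. A quick way to see the difference (a diagnostic sketch, assuming MIT krb5; KRB5_TRACE is available in krb5 1.9+):

```shell
# Direct database access: works even with krb5kdc/kadmind stopped.
kadmin.local -r SPARK.COM -q "listprincs"

# Network path: if krb5kdc was not started with -r SPARK.COM, the request is
# resolved against the default (HADOOP.COM) database and the KDC answers
# "Client not found", matching the kinit error above.
KRB5_TRACE=/dev/stderr kinit admin/admin@SPARK.COM
```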

I can't work out where the problem is.


3 answers

  • 尘世壹俗人 2026-02-08 22:42

    The problem was never actually solved, but it no longer matters: at my company the security team owns this system, and on the engine side we just take the principals they hand us and use them.

    This answer was selected by the asker as the best answer.
    (2 more answers not shown)


Question events

  • Answer accepted: today
  • Question created: January 26