Both elasticsearch nodes have SSL configured; the certificate settings are as follows:
# other settings omitted
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: "/usr/share/elasticsearch/config/http.p12"
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
elasticsearch starts and works fine when run in docker.
kibana is configured with SSL and with the certificate for connecting to elasticsearch:
# other settings omitted
# access kibana over HTTPS
server.ssl.enabled: true
server.ssl.key: /usr/share/kibana/config/kibana-cert/kibana-cert.key
server.ssl.certificate: /usr/share/kibana/config/kibana-cert/kibana-cert.crt
# certificate kibana uses to access the es cluster
elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/config/elasticsearch-ca.pem
kibana starts and works fine when run in docker.
Now here comes the key part.
All the ports mapped for logstash are open on the host.
logstash.yml is configured as follows:
node.name: logstash-node-${index} # node name; ${index} is a script variable (two nodes: 1 and 2)
http.host: "0.0.0.0" # listen address
pipeline.workers: 8 # number of worker threads running the filter/output stages
pipeline.batch.size: 125 # maximum number of events per batch
pipeline.batch.delay: 5000 # how long (ms) to wait for an undersized batch before dispatching it
queue.type: persisted # enable the persistent queue
path.queue: /usr/share/logstash/data # queue storage path; only takes effect when queue.type is persisted
queue.page_capacity: 250mb # size of a single page of the persistent queue
queue.max_events: 0 # maximum number of unread events in the queue; 0 = unlimited
queue.max_bytes: 1024mb # maximum total queue capacity
queue.checkpoint.acks: 1024 # maximum number of ACKed events before a checkpoint is forced; 0 = unlimited
queue.checkpoint.writes: 1024 # maximum number of written events before a checkpoint is forced; 0 = unlimited
queue.checkpoint.interval: 1000 # interval (ms) at which a checkpoint is forced on the head page
xpack.monitoring.enabled: true # enable logstash metric monitoring; metrics are sent to the es cluster
xpack.monitoring.elasticsearch.hosts: ["https://192.168.2.202:9201","https://192.168.2.202:9202"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "EjRJ8hLgPj3GrViQmYxm"
xpack.monitoring.elasticsearch.ssl.certificate_authority: /usr/share/logstash/config/elasticsearch-ca.pem
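As an alternative to trusting a PEM file: if the Logstash version in use is an 8.x that supports it (check your version's docs — this is an assumption), the monitoring client can also trust the cluster by the CA certificate's SHA-256 fingerprint, which sidesteps file-path and format issues entirely. A sketch, where the fingerprint value is a placeholder you would compute from your own elasticsearch-ca.pem:

```yaml
# Sketch -- the fingerprint below is a placeholder; compute the real one with:
#   openssl x509 -in elasticsearch-ca.pem -fingerprint -sha256 -noout
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["https://192.168.2.202:9201","https://192.168.2.202:9202"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "EjRJ8hLgPj3GrViQmYxm"
xpack.monitoring.elasticsearch.ssl.ca_trusted_fingerprint: "<sha256-hex-of-elasticsearch-ca.pem>"
```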
The same error occurs whether or not logstash.conf is configured; here is that configuration anyway:
input {
  beats {
    host => "0.0.0.0"
    port => 5044
    codec => plain {
      charset => "UTF-8"
    }
    ssl => true
    ssl_certificate_authorities => ["/usr/share/logstash/config/elasticsearch-ca.pem"]
    ssl_certificate => "/usr/share/logstash/config/logstash-cert/logstash-cert.crt"
    ssl_key => "/usr/share/logstash/config/logstash-cert/logstash-cert.pkcs8.key"
    ssl_verify_mode => "force_peer"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  if "nginx-access-log-dev-202" in [tags] {
    elasticsearch {
      hosts => ["https://192.168.2.202:9201","https://192.168.2.202:9202"]
      #action => "index"
      # index to write to in es
      index => "nginx-access-log-dev-202-%{+YYYY.MM.dd}"
      user => logstash_writer
      password => HgYuHIKu77Ber66
      ssl => true
      # note: the truststore option expects a jks/p12 file; a pem ca goes in cacert
      cacert => "/usr/share/logstash/config/elasticsearch-ca.pem"
    }
  }
  if "nginx-error-log-dev-202" in [tags] {
    elasticsearch {
      hosts => ["https://192.168.2.202:9201","https://192.168.2.202:9202"]
      #action => "index"
      # index to write to in es
      index => "nginx-error-log-dev-202-%{+YYYY.MM.dd}"
      user => logstash_writer
      password => HgYuHIKu77Ber66
      ssl => true
      ssl_certificate_verification => true
      # note: the truststore option expects a jks/p12 file; a pem ca goes in cacert
      cacert => "/usr/share/logstash/config/elasticsearch-ca.pem"
    }
  }
}
Running logstash in docker fails with the errors below:
[2023-08-02T03:27:14,437][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [https://192.168.2.202:9201/_xpack][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target {:url=>https://logstash_kibana:xxxxxx@192.168.2.202:9201/, :error_message=>"Elasticsearch Unreachable: [https://192.168.2.202:9201/_xpack][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2023-08-02T03:27:14,480][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2023-08-02T03:27:14,494][ERROR][logstash.configmanagement.elasticsearchsource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
The exception seems to say it clearly: no valid certification path was found, i.e. the certificate isn't being trusted.
For reference, elasticsearch-ca.pem was generated together with http.p12, by running ./bin/elasticsearch-certutil http.
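A quick way to confirm whether elasticsearch-ca.pem actually chains to a node certificate is `openssl verify` — essentially the same check the JVM performs when it raises `PKIX path building failed`. The sketch below is self-contained with throwaway certificates so it runs anywhere; against the live cluster you would instead run `openssl s_client -connect 192.168.2.202:9201 -CAfile elasticsearch-ca.pem` and look at the verify return code:

```shell
# Self-contained sketch: a throwaway CA and a node cert signed by it.
tmp=$(mktemp -d) && cd "$tmp"
# Stand-in for the certutil-generated CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=demo-ca" -days 1
# Stand-in for the node's HTTP certificate, signed by that CA
openssl req -newkey rsa:2048 -nodes -keyout node.key -out node.csr \
  -subj "/CN=es-node"
openssl x509 -req -in node.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out node.crt -days 1
# Succeeds only if ca.pem is in the issuer chain -- a PKIX failure in the JVM
# usually means this check would also fail for your real files.
openssl verify -CAfile ca.pem node.crt
```

If this check fails for your real `elasticsearch-ca.pem` against the certificate the cluster actually serves, the PEM file is simply not the right CA for the HTTP layer.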
Here is some information from the troubleshooting process:
Attempt 1:
First I checked the certificate file permissions (I had set 777). Then I went into the docker container to confirm that the certificate is referenced in the config file and that the configured path actually exists. Everything checked out, yet the certificate is still not recognized.
Attempt 2:
After attempt 1 I suspected a certificate-format problem, so I switched from the *.p12 certificate to a ca.crt certificate,
and changed the certificate line in logstash.yml to: xpack.monitoring.elasticsearch.ssl.certificate_authority: "/usr/share/logstash/config/escert/ca.crt"
After restarting it still failed. Again I went into the container and checked the configured paths; everything looked fine.
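Format conversion can itself introduce errors. The snippet below sketches extracting a PEM certificate back out of a PKCS#12 bundle with openssl, self-contained with a throwaway keypair so it runs anywhere (for a real cluster, `elasticsearch-certutil http` already writes `elasticsearch-ca.pem` next to `http.p12`, so extraction is normally unnecessary):

```shell
# Build a throwaway http.p12 so the extraction step can be demonstrated anywhere.
tmp=$(mktemp -d) && cd "$tmp"
openssl req -x509 -newkey rsa:2048 -nodes -keyout http.key -out http.crt \
  -subj "/CN=es-http" -days 1
openssl pkcs12 -export -inkey http.key -in http.crt -out http.p12 \
  -passout pass:changeit
# Extract the certificate back out as PEM (for your real file, use its password)
openssl pkcs12 -in http.p12 -passin pass:changeit -nokeys -clcerts \
  | openssl x509 -out extracted.pem
# Inspect what was extracted
openssl x509 -in extracted.pem -noout -subject
```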
Attempt 3:
I generated a pem certificate from the ca certificate and pointed xpack.monitoring.elasticsearch.ssl.certificate_authority at it; the result was the same as before.
After several days of fiddling and countless reinstalls, having tried every variation I could think of, it still fails with the error above and won't run.
Below are the configurations I used along the way.
Config 1:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: ["https://es01:9200","https://es02:9200","https://es03:9200"]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: CFGyTv55yg6
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/usr/share/logstash/config/escert/ca.crt"
path.config: /usr/share/logstash/config/conf.d/*.conf
path.logs: /usr/share/logstash/logs
Config 2:
With monitoring disabled as below, a new error appears.
node.name: logstash-node-${index} # node name; the other two nodes are named logstash-2 and logstash-3
pipeline.workers: 2 # number of worker threads running the filter/output stages
pipeline.batch.size: 125 # maximum number of events per batch
pipeline.batch.delay: 5000 # how long (ms) to wait for an undersized batch before dispatching it
queue.type: persisted # enable the persistent queue
path.queue: /usr/share/logstash/data # queue storage path; only takes effect when queue.type is persisted
queue.page_capacity: 250mb # size of a single page of the persistent queue
queue.max_events: 0 # maximum number of unread events in the queue; 0 = unlimited
queue.max_bytes: 1024mb # maximum total queue capacity
queue.checkpoint.acks: 1024 # maximum number of ACKed events before a checkpoint is forced; 0 = unlimited
queue.checkpoint.writes: 1024 # maximum number of written events before a checkpoint is forced; 0 = unlimited
queue.checkpoint.interval: 1000 # interval (ms) at which a checkpoint is forced on the head page
xpack.monitoring.enabled: false # disable logstash metric monitoring
The following error is logged every 10s:
[2023-08-02T03:55:23,738][INFO ][org.logstash.beats.BeatsHandler][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] [local: 172.66.0.152:5044, remote: 192.168.2.202:47610] Handling exception: io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 3 (caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 3)
[2023-08-02T03:55:23,739][WARN ][io.netty.channel.DefaultChannelPipeline][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
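`Invalid version of beats protocol: 3` typically means something sent plaintext (or otherwise non-Beats) bytes to a TLS-enforcing beats input — for example a Filebeat whose output has SSL disabled, or a port health check probing 5044. Since the beats input above sets `ssl_verify_mode => "force_peer"`, every client must not only speak TLS but also present a client certificate. A matching Filebeat output sketch (all file names and paths here are assumptions, adapt them to your setup):

```yaml
# filebeat.yml sketch -- paths are hypothetical
output.logstash:
  hosts: ["192.168.2.202:5044"]
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/elasticsearch-ca.pem"]
  # force_peer on the logstash side requires a client certificate:
  ssl.certificate: "/etc/filebeat/filebeat-cert.crt"
  ssl.key: "/etc/filebeat/filebeat-cert.key"
```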
Config 3:
Monitoring disabled and centralized pipeline management enabled; it still throws the certificate exception from the very beginning.
http.host: "0.0.0.0"
path.logs: /usr/share/logstash/logs
xpack.monitoring.enabled: false
xpack.management.enabled: true
xpack.management.pipeline.id: ["main", "apache_logs", "my_apache_logs"]
xpack.management.elasticsearch.hosts: ["https://192.168.2.202:9201","https://192.168.2.202:9202"]
xpack.management.elasticsearch.username: "logstash_kibana"
xpack.management.elasticsearch.password: "HgYuHIKu77Ber67"
xpack.management.elasticsearch.ssl.certificate_authority: /usr/share/logstash/config/elasticsearch-ca.pem
I genuinely cannot figure out the cause. Is there any expert who can rescue me?
Many thanks!
If you can solve the problem, compensation is negotiable.