Installing ELK with Docker (Single Node)

Updated: 2024-08-14 16:37:56   Author: wang18057

This article walks through installing a single-node ELK stack with Docker. It is shared as a practical reference; if you spot mistakes or omissions, corrections are welcome.


Create a Docker network

docker network create -d bridge elastic

Pull Elasticsearch 8.4.3

docker pull docker.elastic.co/elasticsearch/elasticsearch:8.4.3
or, depending on your registry configuration, this may work instead:
docker pull elasticsearch:8.4.3

Run the container for the first time

docker run -it \
-p 9200:9200 \
-p 9300:9300 \
--name elasticsearch \
--net elastic \
-e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
-e "discovery.type=single-node" \
-e LANG=C.UTF-8 \
-e LC_ALL=C.UTF-8 \
elasticsearch:8.4.3

Note: do not pass -d on this first run, or you will not see the random password and enrollment token that the service generates on first startup.

Copy the following lines from the log output and save them for later:

✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  =HjjCu=tj1orDTLJbWPv

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  9204867e59a004b04c44a98d93c4609937ce3f14175a3eed7afa98ee31bbd4c2

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMjIuMC4yOjkyMDAiXSwiZmdyIjoiOTIwNDg2N2U1OWEwMDRiMDRjNDRhOThkOTNjNDYwOTkzN2NlM2YxNDE3NWEzZWVkN2FmYTk4ZWUzMWJiZDRjMiIsImtleSI6Img0bGNvSkFCYkJnR1BQQXRtb3VZOnpCcjZQMUtZVFhHb1VDS2paazRHRHcifQ==

ℹ️ Configure other nodes to join this cluster:
• Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMjIuMC4yOjkyMDAiXSwiZmdyIjoiOTIwNDg2N2U1OWEwMDRiMDRjNDRhOThkOTNjNDYwOTkzN2NlM2YxNDE3NWEzZWVkN2FmYTk4ZWUzMWJiZDRjMiIsImtleSI6ImhZbGNvSkFCYkJnR1BQQXRtb3VLOjRZWlFkN1JIUk5PcVJqZTlsX2p6LXcifQ==

  If you're running in Docker, copy the enrollment token and run:
  `docker run -e "ENROLLMENT_TOKEN=<token>" docker.elastic.co/elasticsearch/elasticsearch:8.4.3`
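Incidentally, an enrollment token is just base64-encoded JSON carrying the node address, the CA fingerprint, and an API key. If you want to sanity-check a token before pasting it anywhere, a small Python sketch (using the Kibana token shown above) can decode it:

```python
import base64
import json

def decode_enrollment_token(token: str) -> dict:
    """Decode an Elasticsearch enrollment token (it is base64-encoded JSON)."""
    return json.loads(base64.b64decode(token))

# The Kibana enrollment token printed above.
token = "eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMjIuMC4yOjkyMDAiXSwiZmdyIjoiOTIwNDg2N2U1OWEwMDRiMDRjNDRhOThkOTNjNDYwOTkzN2NlM2YxNDE3NWEzZWVkN2FmYTk4ZWUzMWJiZDRjMiIsImtleSI6Img0bGNvSkFCYkJnR1BQQXRtb3VZOnpCcjZQMUtZVFhHb1VDS2paazRHRHcifQ=="
info = decode_enrollment_token(token)
print(info["ver"])  # version the token was issued for: 8.4.3
print(info["adr"])  # node address(es) on the elastic network: ['172.22.0.2:9200']
print(info["fgr"])  # the HTTP CA certificate SHA-256 fingerprint
```

The fgr field should match the HTTP CA fingerprint printed in the log above, and the address in adr must be reachable from wherever the token is used.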

Create the host directories and copy the configuration files out of the container

mkdir -p apps/elk8.4.3/elasticsearch
# these cp commands are run from /home/ubuntu
docker cp elasticsearch:/usr/share/elasticsearch/config apps/elk8.4.3/elasticsearch/

docker cp elasticsearch:/usr/share/elasticsearch/data apps/elk8.4.3/elasticsearch/

docker cp elasticsearch:/usr/share/elasticsearch/plugins apps/elk8.4.3/elasticsearch/

docker cp elasticsearch:/usr/share/elasticsearch/logs apps/elk8.4.3/elasticsearch/

Remove the container

docker rm -f elasticsearch

Edit apps/elk8.4.3/elasticsearch/config/elasticsearch.yml

vim apps/elk8.4.3/elasticsearch/config/elasticsearch.yml

Add the following:

  • xpack.monitoring.collection.enabled: true
  • Note: with this setting Kibana's monitoring UI shows the cluster as online; without it, the cluster appears offline.

Start Elasticsearch

docker run -it \
-d \
-p 9200:9200 \
-p 9300:9300 \
--name elasticsearch \
--net elastic \
-e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
-e "discovery.type=single-node" \
-e LANG=C.UTF-8 \
-e LC_ALL=C.UTF-8 \
-v /home/ubuntu/apps/elk8.4.3/elasticsearch/config:/usr/share/elasticsearch/config \
-v /home/ubuntu/apps/elk8.4.3/elasticsearch/data:/usr/share/elasticsearch/data \
-v /home/ubuntu/apps/elk8.4.3/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-v /home/ubuntu/apps/elk8.4.3/elasticsearch/logs:/usr/share/elasticsearch/logs \
elasticsearch:8.4.3

Verify the startup

https://xxxxx:9200/
Username: elastic
Password: look it up in the information you saved from the first startup.
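The same check can be scripted. Below is a minimal, hypothetical Python sketch (the check_cluster helper and the placeholder credentials are mine, not from this setup); it disables TLS verification because the HTTP layer uses the auto-generated self-signed CA, which is acceptable only for a quick smoke test:

```python
import base64
import json
import ssl
import urllib.request

def basic_auth_header(user: str, password: str) -> dict:
    """Build an HTTP Basic Authorization header for the given credentials."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def check_cluster(host: str, user: str, password: str) -> dict:
    """Fetch the Elasticsearch root endpoint and return its JSON body."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # self-signed CA; fine for a quick check only
    req = urllib.request.Request(f"https://{host}:9200/",
                                 headers=basic_auth_header(user, password))
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

# Example (replace the host and use the password saved from the first startup):
# info = check_cluster("localhost", "elastic", "<your elastic password>")
# print(info["version"]["number"])
```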

Kibana

Install Kibana

docker pull kibana:8.4.3

Start Kibana

docker run -it \
--restart=always \
--log-driver json-file \
--log-opt max-size=100m \
--log-opt max-file=2 \
--name kibana \
-p 5601:5601 \
--net elastic \
kibana:8.4.3

Initialize Kibana's enrollment credentials

http://xxxx:5601/?code=878708

Note:

Paste the enrollment token that Elasticsearch generated earlier into the text area. The token is valid for only 30 minutes; if it has expired, generate a new one inside the Elasticsearch container by running:

/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url "https://127.0.0.1:9200"

Kibana verification code

Enter the verification code printed in the Kibana server log into the browser; in my case it was 628503.

Create the Kibana directories and copy out the configuration

mkdir apps/elk8.4.3/kibana

# these cp commands are run from /home/ubuntu
docker cp kibana:/usr/share/kibana/config apps/elk8.4.3/kibana/


docker cp kibana:/usr/share/kibana/data apps/elk8.4.3/kibana/

docker cp kibana:/usr/share/kibana/plugins apps/elk8.4.3/kibana/

docker cp kibana:/usr/share/kibana/logs apps/elk8.4.3/kibana/

sudo chown -R 1000:1000 apps/elk8.4.3/kibana

Edit apps/elk8.4.3/kibana/config/kibana.yml

### >>>>>>> BACKUP START: Kibana interactive setup (2024-03-25T07:30:11.689Z)

#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
#server.host: "0.0.0.0"
#server.shutdownTimeout: "5s"
#elasticsearch.hosts: [ "http://elasticsearch:9200" ]
#monitoring.ui.container.elasticsearch.enabled: true
### >>>>>>> BACKUP END: Kibana interactive setup (2024-03-25T07:30:11.689Z)

# This section was automatically generated during setup.
i18n.locale: "zh-CN"
server.host: 0.0.0.0
server.shutdownTimeout: 5s
# This must be the Elasticsearch container's IP; find it with: docker inspect elasticsearch | grep -i ipaddress
elasticsearch.hosts: ['https://your ip:9200']
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MTEzNTE4MTA5NDM6ZHZ1R3M5cV9RRlc2NmQ3dE9WaWM0QQ
elasticsearch.ssl.certificateAuthorities: [/usr/share/kibana/data/ca_1711351811685.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://your ip:9200'], ca_trusted_fingerprint: 5e7d9fe48c485c2761f9e7a99b9d5737e4e34dc55b9bf6929d929fb34d61a11a}]

Remove the container and restart it

docker rm -f kibana

docker run -it \
-d \
--restart=always \
--log-driver json-file \
--log-opt max-size=100m \
--log-opt max-file=2 \
--name kibana \
-p 5601:5601 \
--net elastic \
-v /home/ubuntu/apps/elk8.4.3/kibana/config:/usr/share/kibana/config \
-v /home/ubuntu/apps/elk8.4.3/kibana/data:/usr/share/kibana/data \
-v /home/ubuntu/apps/elk8.4.3/kibana/plugins:/usr/share/kibana/plugins \
-v /home/ubuntu/apps/elk8.4.3/kibana/logs:/usr/share/kibana/logs \
kibana:8.4.3

Logstash

Pull the Logstash image

docker pull logstash:8.4.3

Start a temporary container (it is recreated after the configuration is copied out)

docker run -it \
-d \
--name logstash \
-p 9600:9600 \
-p 5044:5044 \
--net elastic \
logstash:8.4.3

Create the directories and copy out the configuration files

mkdir apps/elk8.4.3/logstash

# these cp commands are run from /home/ubuntu
docker cp logstash:/usr/share/logstash/config apps/elk8.4.3/logstash/ 
docker cp logstash:/usr/share/logstash/pipeline apps/elk8.4.3/logstash/ 

sudo cp -rf apps/elk8.4.3/elasticsearch/config/certs apps/elk8.4.3/logstash/config/certs

sudo chown -R 1000:1000 apps/elk8.4.3/logstash

Edit apps/elk8.4.3/logstash/config/logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: [ "https://your ip:9200" ]
xpack.monitoring.elasticsearch.username: "elastic"
# the elastic user's password saved from Elasticsearch's first startup
xpack.monitoring.elasticsearch.password: "L3WKr6ROTiK_DbqzBr8c"
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/usr/share/logstash/config/certs/http_ca.crt"
# the CA fingerprint saved from Elasticsearch's first startup
xpack.monitoring.elasticsearch.ssl.ca_trusted_fingerprint: "5e7d9fe48c485c2761f9e7a99b9d5737e4e34dc55b9bf6929d929fb34d61a11a"

Edit apps/elk8.4.3/logstash/pipeline/logstash.conf

input {
  beats {
    port => 5044
  }
}


filter {
  date {
        # My logs' time field is formatted like 2024-03-14T15:34:03+08:00, hence the following two lines
        match => [ "time", "ISO8601" ]
        target => "@timestamp"
  }
  json {
    source => "message"
  }
  mutate {
    remove_field => ["message", "path", "version", "@version", "agent", "cloud", "host", "input", "log", "tags", "_index", "_source", "ecs", "event"]
  }
}


output {
  elasticsearch {
    hosts => ["https://your ip:9200"]
    index => "douyin-%{+YYYY.MM.dd}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/usr/share/logstash/config/certs/http_ca.crt"
    # fingerprint and password come from the information saved at Elasticsearch's first startup
    ca_trusted_fingerprint => "e924551c1453c893114a05656882eea81cb11dd87c1258f83e6f676d2428f8f2"
    user => "elastic"
    password => "UkNx8px1yrMYIht30QUc"
  }
}
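As a side note on how the pipeline behaves: the date filter parses the ISO8601 time field into @timestamp, and the %{+YYYY.MM.dd} in the index option is expanded from that timestamp in UTC. A rough Python equivalent of those two steps:

```python
from datetime import datetime, timezone

# Parse a timestamp in the same format as the logs' time field.
ts = datetime.fromisoformat("2024-03-14T15:34:03+08:00")

# Logstash expands %{+YYYY.MM.dd} from @timestamp, in UTC.
index = ts.astimezone(timezone.utc).strftime("douyin-%Y.%m.%d")
print(index)  # douyin-2024.03.14
```

Because the expansion uses UTC, events logged shortly after midnight UTC+8 can land in the previous day's index.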

Remove the container and restart it

docker rm -f logstash

docker run -it \
-d \
--name logstash \
-p 9600:9600 \
-p 5044:5044 \
--net elastic \
-v /home/ubuntu/apps/elk8.4.3/logstash/config:/usr/share/logstash/config \
-v /home/ubuntu/apps/elk8.4.3/logstash/pipeline:/usr/share/logstash/pipeline \
logstash:8.4.3

Filebeat

Pull the Filebeat image

sudo docker pull elastic/filebeat:8.4.3

Start Filebeat

docker run -it \
-d \
--name filebeat \
--network elastic \
-e TZ=Asia/Shanghai \
elastic/filebeat:8.4.3 \
filebeat -e -c /usr/share/filebeat/filebeat.yml

Alternatively, mount the configuration, data, and log directories from the host (the paths below are from my machine):

docker run -d --name filebeat \
  -v /home/linyanbo/docker_data/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
  -v /home/linyanbo/docker_data/filebeat/data:/usr/share/filebeat/data \
  -v /var/logs/:/var/log \
  --link elasticsearch:elasticsearch \
  --network elastic \
  --user root \
 elastic/filebeat:8.4.3

Enable start on boot

docker update elasticsearch --restart=always

The same docker update --restart=always can be applied to the kibana, logstash, and filebeat containers as needed.

Configuration files

filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/logs/duty-admin/spring.log/crmduty-admin-2024-07-12.log
  fields:
    log_source: oh-promotion
  fields_under_root: true
  multiline.pattern: ^\d{4}-\d{1,2}-\d{1,2}
  multiline.negate: true
  multiline.match: after
  scan_frequency: 5s
  close_inactive: 1h
  ignore_older: 24h

output.logstash:
  hosts: ["your ip:5044"]
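The multiline settings above mean: any line that does not begin with a yyyy-mm-dd date (negate: true) is appended to the event before it (match: after), which keeps stack traces glued to their log line. A quick way to sanity-check the pattern against sample lines:

```python
import re

# Same pattern as multiline.pattern in filebeat.yml.
pattern = re.compile(r"^\d{4}-\d{1,2}-\d{1,2}")

# A new log entry starts with a date, so it matches ...
assert pattern.match("2024-07-12 10:15:00.123 INFO  c.e.Demo - started")
# ... while a stack-trace continuation line does not, and is therefore
# appended to the previous event.
assert pattern.match("\tat com.example.Demo.run(Demo.java:42)") is None
```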

logstash.conf

input {
  beats {
    port => 5044
  }
}
filter {
#	mutate {
#		split => {"message"=>" "}
#	}
	mutate {
		add_field => {
			"mm" => "%{message}"
		}
	}

}

output {
  elasticsearch {
    hosts => ["https://your ip:9200"]
    #index => "duty-admin%{+YYYY.MM.dd}"
    index => "duty-admin%{+YYYY}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/usr/share/logstash/config/certs/http_ca.crt"
    ca_trusted_fingerprint => "9204867e59a004b04c44a98d93c4609937ce3f14175a3eed7afa98ee31bbd4c2"
    user => "elastic"
    password => "=HjjCu=tj1orDTLJbWPv"
  }
}

 output {
  stdout {
    codec => rubydebug
  }
 }

elasticsearch.yml

cluster.name: "docker-cluster"
network.host: 0.0.0.0

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 11-07-2024 05:54:41
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

# Note: with this setting Kibana's monitoring UI shows the cluster as online; without it, it appears offline
xpack.monitoring.collection.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

kibana.yml

### >>>>>>> BACKUP START: Kibana interactive setup (2024-07-11T06:09:35.897Z)

#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
#server.host: "0.0.0.0"
#server.shutdownTimeout: "5s"
#elasticsearch.hosts: [ "http://elasticsearch:9200" ]
#monitoring.ui.container.elasticsearch.enabled: true
### >>>>>>> BACKUP END: Kibana interactive setup (2024-07-11T06:09:35.897Z)

# This section was automatically generated during setup.
server.host: 0.0.0.0
server.shutdownTimeout: 5s
elasticsearch.hosts: ['https://your ip:9200']
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MjA2NzgxNzU2MzU6bU5RR25uQUVSaWExbUdHQ2tsODRmZw
elasticsearch.ssl.certificateAuthorities: [/usr/share/kibana/data/ca_1720678175894.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://your ip:9200'], ca_trusted_fingerprint: 9204867e59a004b04c44a98d93c4609937ce3f14175a3eed7afa98ee31bbd4c2}]

Summary

The above is based on my personal experience; I hope it provides a useful reference.
