Working notes.

Contents

1. Install ZooKeeper
2. Install Kafka
  2.1 Pull the Kafka image
  2.2 List local Docker images
  2.3 Inspect local containers
    2.3.1 List running containers
    2.3.2 List all containers, including stopped ones
    2.3.3 Stop a running container
    2.3.4 Start a container
  2.4 Remove a specific container
  2.5 Start the Kafka image
    2.5.1 Start the Kafka container
    2.5.2 Verify the Kafka container is running
  2.6 Create a test topic
    2.6.1 Enter the Kafka container
    2.6.2 Create a topic
    2.6.3 List the topics that have been created
    2.6.4 Produce messages to the new topic
    2.6.5 Consume the messages
3. Edit the Kafka container's configuration file after installation
  3.1 Enter the Kafka container
  3.2 Edit the configuration file
    3.2.1 Install vim
    3.2.2 Edit the configuration file
    3.2.3 Restart the Kafka container so the changes take effect
  3.3 If configuration changes do not survive a restart of the Kafka container
    3.3.1 Stop the running Kafka container
    3.3.2 Remove the Kafka container
    3.3.3 Remove the Kafka data directory
    3.3.4 Recreate the Kafka container
4. Keep the container clock in sync with the host

1. Install ZooKeeper

Docker pulls the image automatically on first run:

# docker run -d --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime wurstmeister/zookeeper

[root@localhost Docker-Compose-Master]# mkdir zookeeper
[root@localhost Docker-Compose-Master]# ls
docker-compose.yml  kafka  zookeeper
[root@localhost Docker-Compose-Master]# cd zookeeper/
[root@localhost zookeeper]# ls
[root@localhost zookeeper]#
[root@localhost zookeeper]# docker run -d --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime wurstmeister/zookeeper
Unable to find image 'wurstmeister/zookeeper:latest' locally
latest: Pulling from wurstmeister/zookeeper
a3ed95caeb02: Pull complete
ef38b711a50f: Pull complete
e057c74597c7: Pull complete
666c214f6385: Pull complete
c3d6a96f1ffc: Pull complete
3fe26a83e0ca: Pull complete
3d3a7dd3a3b1: Pull complete
f8cc938abe5f: Pull complete
9978b75f7a58: Pull complete
4d4dbcc8f8cc: Pull complete
8b130a9baa49: Pull complete
6b9611650a73: Pull complete
5df5aac51927: Pull complete
76eea4448d9b: Pull complete
8b66990876c6: Pull complete
f0dd38204b6f: Pull complete
Digest: sha256:7a7fd44a72104bfbd24a77844bad5fabc86485b036f988ea927d1780782a6680
Status: Downloaded newer image for wurstmeister/zookeeper:latest
8dbbc5f4768e37b6049e7830e2c233476b629bdf3bafdf2eef9b0d2eb127b6c2
[root@localhost zookeeper]#

# Parameters explained:
# docker run — create and start a new container.
# --name zookeeper — name the container "zookeeper".
# -p 2181:2181 — map host port 2181 to container port 2181, so other applications on the host can reach the ZooKeeper service.
# -v /etc/localtime:/etc/localtime — mount the host's local-time file into the container so the container clock stays in sync with the host.
# wurstmeister/zookeeper — the image to use.

Summary: this command makes Docker create a container named "zookeeper" from the wurstmeister/zookeeper image. ZooKeeper is an open-source distributed coordination service, and the image provides the environment needed to run a ZooKeeper server. Mapping host port 2181 to container port 2181 makes the ZooKeeper service in the container easy to reach, and the -v mount keeps the container's clock consistent with the host's. The container runs in the background; use docker ps to see running containers.

2. Install Kafka

2.1 Pull the Kafka image

# Pull the Kafka image
# docker pull wurstmeister/kafka

[root@localhost kafka]# pwd
/home/magx/Docker-Compose-Master/kafka
[root@localhost kafka]#
[root@localhost kafka]# ll
total 8
-rw-r--r--. 1 root root 3112 Dec  4 17:48 docker-compose-kafka.yml
drwxr-xr-x. 5 root root 4096 Dec  4 16:40 kafka-docker
[root@localhost kafka]#
[root@localhost kafka]# docker pull wurstmeister/kafka
Using default tag: latest
latest: Pulling from wurstmeister/kafka
42c077c10790: Pull complete
44b062e78fd7: Pull complete
b3ba9647f279: Pull complete
10c9a58bd495: Pull complete
ed9bd501c190: Pull complete
03346d650161: Pull complete
539ec416bc55: Pull complete
Digest: sha256:2d4bbf9cc83d9854d36582987da5f939fb9255fb128d18e3cf2c6ad825a32751
Status: Downloaded newer image for wurstmeister/kafka:latest
docker.io/wurstmeister/kafka:latest
[root@localhost kafka]#

# Parameters explained:
# docker pull — download an image from a Docker image registry.
# wurstmeister/kafka — the name of the image to pull.

Summary: Docker downloads the wurstmeister/kafka image, a community-maintained image of Kafka, a popular distributed streaming platform. Note: the command needs working network access and a connection to the registry. Once the download finishes, run docker images to confirm that wurstmeister/kafka is present.

2.2 List local Docker images

# List local Docker images
# docker images

[root@localhost kafka]# docker images
REPOSITORY               TAG      IMAGE ID       CREATED         SIZE
hello-world              latest   9c7a54a9a43c   7 months ago    13.3kB
wurstmeister/kafka       latest   a692873757c0   19 months ago   468MB
wurstmeister/zookeeper   latest   3f43f72cb283   4 years ago     510MB
[root@localhost kafka]#

2.3 Inspect local containers

2.3.1 List running containers

# List running containers
# docker ps

[root@localhost kafka]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS       PORTS                                                                   NAMES
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago   Up 2 weeks   22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
[root@localhost kafka]#

2.3.2 List all containers, including stopped ones

# List all local containers
# docker ps -a shows every container, including stopped ones, with its ID, status, creation time, and so on.

[root@localhost kafka]# docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS                       PORTS                                                                   NAMES
913b2a1d7f07   wurstmeister/kafka       "start-kafka.sh"         11 minutes ago   Exited (143) 8 minutes ago                                                                           kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago      Up 2 weeks                   22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
b38e9b5b6a2e   hello-world              "/hello"                 2 weeks ago      Exited (0) 2 weeks ago                                                                               infallible_rosalind
[root@localhost kafka]#

2.3.3 Stop a running container

# Stop a container
# docker stop container_id

[root@localhost ~]# docker ps   # running containers before the stop
CONTAINER ID   IMAGE                    COMMAND                  CREATED        STATUS        PORTS                                                                   NAMES
b03ba55d79cb   wurstmeister/kafka       "start-kafka.sh"         29 hours ago   Up 29 hours   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp                               kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago    Up 2 weeks    22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
[root@localhost ~]#
[root@localhost ~]# docker stop b03ba55d79cb   # stop the kafka container b03ba55d79cb
b03ba55d79cb
[root@localhost ~]#

# Running containers after the stop
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS       PORTS                                                                   NAMES
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago   Up 2 weeks   22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
[root@localhost ~]#

# All containers after the stop
[root@localhost ~]# docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED        STATUS                       PORTS                                                                   NAMES
b03ba55d79cb   wurstmeister/kafka       "start-kafka.sh"         29 hours ago   Exited (143) 8 seconds ago                                                                           kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago    Up 2 weeks                   22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
b38e9b5b6a2e   hello-world              "/hello"                 2 weeks ago    Exited (0) 2 weeks ago                                                                               infallible_rosalind
[root@localhost ~]#

2.3.4 Start a container

# Start a container
# docker start container_id_or_container_name

[root@localhost ~]# docker start kafka   # start the container "kafka" by name
kafka
[root@localhost ~]#
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED        STATUS         PORTS                                                                   NAMES
b03ba55d79cb   wurstmeister/kafka       "start-kafka.sh"         29 hours ago   Up 3 seconds   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp                               kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago    Up 2 weeks     22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
[root@localhost ~]#

2.4 Remove a specific container

# To remove a Docker container, pass its identifier or name to docker rm:
# docker rm CONTAINER_ID     # by container ID
# docker rm CONTAINER_NAME   # by container name

# All local containers before the removal
[root@localhost kafka]# docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS                        PORTS                                                                   NAMES
913b2a1d7f07   wurstmeister/kafka       "start-kafka.sh"         19 minutes ago   Exited (143) 16 minutes ago                                                                           kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago      Up 2 weeks                    22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
b38e9b5b6a2e   hello-world              "/hello"                 2 weeks ago      Exited (0) 2 weeks ago                                                                                infallible_rosalind
[root@localhost kafka]#

# Remove the chosen container
[root@localhost kafka]# docker rm 913b2a1d7f07   # docker rm container_ID
913b2a1d7f07
[root@localhost kafka]#

# Listing all containers again no longer shows the removed one
[root@localhost kafka]# docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS                   PORTS                                                                   NAMES
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago   Up 2 weeks               22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
b38e9b5b6a2e   hello-world              "/hello"                 2 weeks ago   Exited (0) 2 weeks ago                                                                           infallible_rosalind
[root@localhost kafka]#

2.5 Start the Kafka image

2.5.1 Start the Kafka container

# Start the kafka container

[root@localhost kafka]# docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 --env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 --env KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 --env KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 wurstmeister/kafka
b03ba55d79cb38423e17107bd2342842143a6b2a4010bab90bf096eb851ceb79
[root@localhost kafka]#

or:

docker run -d --name kafka -v /etc/localtime:/etc/localtime:ro -p 9092:9092 --link zookeeper:zookeeper --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 --env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.2.247:9092 --env KAFKA_LISTENERS=PLAINTEXT://192.168.2.247:9092 --env KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 wurstmeister/kafka

This command links the ZooKeeper container and sets several environment variables to configure Kafka:
--name kafka: name the container "kafka".
-v /etc/localtime:/etc/localtime:ro: mount the host's local-time file into the container so the container's clock matches the host system; the ro option mounts /etc/localtime read-only, so the container cannot accidentally change the host's time settings.
-p 9092:9092: map container port 9092 to host port 9092.
--link zookeeper:zookeeper: connect to the Docker container named "zookeeper", reachable from this container under the alias zookeeper.
--env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181: the ZooKeeper connection string.
--env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092: the listeners Kafka advertises to clients.
--env KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092: the listeners Kafka binds to.
--env KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1: the replication factor of the offsets topic.
wurstmeister/kafka: the Docker image to use.

Make sure the ZooKeeper container is already running and reachable as zookeeper:2181 before running this command. If your ZooKeeper container has a different name, or you use a different network setup, adjust --link and KAFKA_ZOOKEEPER_CONNECT accordingly.

2.5.2 Verify the Kafka container is running

# docker ps      # running containers
# docker ps -a   # all containers

[root@localhost kafka]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS          PORTS                                                                   NAMES
b03ba55d79cb   wurstmeister/kafka       "start-kafka.sh"         27 seconds ago   Up 26 seconds   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp                               kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago      Up 2 weeks      22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
[root@localhost kafka]#
[root@localhost kafka]# docker ps -a   # all containers
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS                   PORTS                                                                   NAMES
b03ba55d79cb   wurstmeister/kafka       "start-kafka.sh"         41 seconds ago   Up 40 seconds            0.0.0.0:9092->9092/tcp, :::9092->9092/tcp                               kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago      Up 2 weeks               22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
b38e9b5b6a2e   hello-world              "/hello"                 2 weeks ago      Exited (0) 2 weeks ago                                                                           infallible_rosalind
[root@localhost kafka]#

2.6 Create a test topic

2.6.1 Enter the Kafka container

# Enter the kafka container
# docker exec -it kafka /bin/bash

[root@localhost kafka]# docker exec -it kafka /bin/bash
root@b03ba55d79cb:/#
root@b03ba55d79cb:/#

2.6.2 Create a topic

# Inside the Kafka container, create a test topic:
# kafka-topics.sh --create --topic topic_name --partitions 1 --replication-factor 1 --zookeeper zookeeper:2181

[root@localhost kafka]# docker exec -it kafka /bin/bash   # enter the kafka container
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# kafka-topics.sh --create --topic test1221 --partitions 1 --replication-factor 1 --zookeeper zookeeper:2181   # create the topic
Created topic test1221.
root@b03ba55d79cb:/#

Note: if the topic name contains '.' or '_', topic creation prints a warning about the restrictions on periods and underscores in topic names. The command still succeeds and reports the created topic:

root@b03ba55d79cb:/# kafka-topics.sh --create --topic alarm_warning --partitions 1 --replication-factor 1 --zookeeper zookeeper:2181
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic alarm_warning.
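Following the warning's advice, a tiny helper (my own sketch, not part of Kafka's tooling) can normalize a proposed topic name so it contains neither of the colliding characters before you pass it to kafka-topics.sh:

```shell
# Hypothetical helper: rewrite '.' and '_' in a proposed topic name to '-',
# so two names that differ only in those characters can no longer collide
# in Kafka's metric names.
sanitize_topic() {
  printf '%s\n' "$1" | tr '._' '--'
}

sanitize_topic "alarm_warning"   # -> alarm-warning
sanitize_topic "alarm.warning"   # -> alarm-warning (these two would have collided)
```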
root@b03ba55d79cb:/#

Note: the warning appears because of limitations in metric names: a period ('.') and an underscore ('_') in topic names can collide. To avoid problems, it is best to use only one of the two characters, not both.

2.6.3 List the topics that have been created

# List all existing topics

[root@localhost grafana_loki_vector]# docker exec -it kafka /bin/bash   # enter the container
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# kafka-topics.sh --list --zookeeper zookeeper:2181   # list topics
__consumer_offsets
alarm_warning
mag_test
test
test1221
test2013
test2023
test20231221
root@b03ba55d79cb:/#

2.6.4 Produce messages to the new topic

# Produce Kafka messages to the created topic
# kafka-console-producer.sh --broker-list localhost:9092 --topic <topic_name>

[root@localhost kafka]# docker exec -it kafka /bin/bash
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# kafka-topics.sh --create --topic test1221 --partitions 1 --replication-factor 1 --zookeeper zookeeper:2181
Created topic test1221.
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# kafka-console-producer.sh --broker-list localhost:9092 --topic test1221
hell ka^H^H
hello kafka-122^H^Hhello kafka-20131221
topci test2023? y/n
topci test1221!
e\^H
e
^Croot@b03ba55d79cb:/#   # exit the producer with Ctrl+C
root@b03ba55d79cb:/#

2.6.5 Consume the messages

In another terminal window:

(1) Enter the kafka container first:
# docker exec -it kafka /bin/bash

(2) Start a consumer to read messages from the test topic:
# kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic <topic_name> --from-beginning

--from-beginning is optional: with it, the consumer reads all messages in the topic each time it starts. Without it, the consumer only receives messages produced after it starts, e.g.:
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic <topic_name>

Note: enter the kafka container before starting the consumer.

[root@localhost ~]# docker exec -it kafka /bin/bash   # enter the kafka container
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1221 --from-beginning
hell ka
hello kafka-122hello kafka-20131221
topci test2023? y/n
topci test1221!
e\
e
^CProcessed a total of 11 messages   # exit the consumer with Ctrl+C
root@b03ba55d79cb:/#

This completes a basic run of ZooKeeper and Kafka under Docker, with basic verification.

3. Edit the Kafka container's configuration file after installation

3.1 Enter the Kafka container

# Enter the kafka container
# docker exec -it container_name_or_id /bin/bash

[root@localhost ~]# docker exec -it kafka /bin/bash
root@b03ba55d79cb:/#

3.2 Edit the configuration file

3.2.1 Install vim

# Some Docker images ship without the vi editor pre-installed. Install another available editor to edit the Kafka configuration file:
# apt-get update
# apt-get install vim

root@b03ba55d79cb:/#
root@b03ba55d79cb:/# vi /opt/kafka/config/server.properties
bash: vi: command not found
root@b03ba55d79cb:/# vim /opt/kafka/config/server.properties
bash: vim: command not found
root@b03ba55d79cb:/#

# vim is a full-featured terminal text editor; install it with:

root@b03ba55d79cb:/# apt-get update
Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:2 http://security.debian.org/debian-security bullseye-security InRelease [48.4 kB]
Get:3 https://download.docker.com/linux/debian bullseye InRelease [43.3 kB]
Get:4 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Get:5 http://security.debian.org/debian-security bullseye-security/main amd64 Packages [261 kB]
Get:6 https://download.docker.com/linux/debian bullseye/stable amd64 Packages [28.1 kB]
Get:7 http://deb.debian.org/debian bullseye/main amd64 Packages [8062 kB]
Get:8 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [17.7 kB]
Fetched 8621 kB in 1min 57s (74.0 kB/s)
Reading package lists... Done
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# apt-get install vim
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  libgpm2 vim-common vim-runtime xxd
Suggested packages:
  gpm ctags vim-doc vim-scripts
The following NEW packages will be installed:
  libgpm2 vim vim-common vim-runtime xxd
0 upgraded, 5 newly installed, 0 to remove and 32 not upgraded.
Need to get 8174 kB of archives.
After this operation, 36.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://deb.debian.org/debian bullseye/main amd64 xxd amd64 2:8.2.2434-3+deb11u1 [192 kB]
Get:2 http://deb.debian.org/debian bullseye/main amd64 vim-common all 2:8.2.2434-3+deb11u1 [226 kB]
Get:3 http://deb.debian.org/debian bullseye/main amd64 libgpm2 amd64 1.20.7-8 [35.6 kB]
Get:4 http://deb.debian.org/debian bullseye/main amd64 vim-runtime all 2:8.2.2434-3+deb11u1 [6226 kB]
......
......
......

3.2.2 Edit the configuration file

# Change the following fields in the configuration file:
# listeners=PLAINTEXT://0.0.0.0:9092
# advertised.listeners=PLAINTEXT://your_ip_address:9092

root@b03ba55d79cb:/#
root@b03ba55d79cb:/# vim /opt/kafka/config/server.properties
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# cat /opt/kafka/config/server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=-1

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
#   listeners = listener_name://host_name:port
# EXAMPLE:
#   listeners = PLAINTEXT://your.host.name:9092
# listeners=PLAINTEXT://0.0.0.0:9092
listeners=PLAINTEXT://192.168.2.247:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://192.168.2.247:9092
#//your_ip_address:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/kafka/kafka-logs-b03ba55d79cb

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zookeeper:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

port=9092
root@b03ba55d79cb:/#

3.2.3 Restart the Kafka container so the changes take effect

# Restart a container
# docker restart container_id

[root@localhost ~]#
[root@localhost ~]# docker ps   # before the restart
CONTAINER ID   IMAGE                    COMMAND                  CREATED        STATUS          PORTS                                                                   NAMES
b03ba55d79cb   wurstmeister/kafka       "start-kafka.sh"         29 hours ago   Up 28 minutes   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp                               kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago    Up 2 weeks      22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
[root@localhost ~]#
[root@localhost ~]# docker restart b03ba55d79cb   # restart the kafka container
b03ba55d79cb
[root@localhost ~]#
[root@localhost ~]# docker ps   # after the restart
CONTAINER ID   IMAGE                    COMMAND                  CREATED        STATUS         PORTS                                                                   NAMES
b03ba55d79cb   wurstmeister/kafka       "start-kafka.sh"         30 hours ago   Up 3 seconds   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp                               kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago    Up 2 weeks     22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
[root@localhost ~]#

3.3 If configuration changes do not survive a restart of the Kafka container

3.3.1 Stop the running Kafka container

# If, after editing the configuration file and restarting the kafka container, the changes are gone, clear Kafka's cached state:
# 1. Stop the running kafka container
# 2. Remove the kafka container
# 3. Recreate a new kafka container

[root@localhost ~]# docker ps   # find the running kafka container
CONTAINER ID   IMAGE                    COMMAND                  CREATED        STATUS         PORTS                                                                   NAMES
b03ba55d79cb   wurstmeister/kafka       "start-kafka.sh"         31 hours ago   Up 5 minutes   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp                               kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago    Up 2 weeks     22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
[root@localhost ~]#
[root@localhost ~]# docker stop kafka   # stop the kafka container
kafka
[root@localhost ~]#

3.3.2 Remove the Kafka container

[root@localhost ~]#
[root@localhost ~]# docker rm kafka   # remove the kafka container
kafka
[root@localhost ~]#
[root@localhost ~]# docker ps -a   # all containers
CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS                   PORTS                                                                   NAMES
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago   Up 2 weeks               22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
b38e9b5b6a2e   hello-world              "/hello"                 2 weeks ago   Exited (0) 2 weeks ago                                                                           infallible_rosalind
[root@localhost ~]#

3.3.3 Remove the Kafka data directory

Remove the Kafka data directory to clear Kafka's cached data. The directory Kafka stores data in is usually /var/lib/kafka inside the container (note that in this image, log.dirs in server.properties actually points under /kafka instead). Warning: removing the data directory destroys all Kafka data, including topics and consumer offsets.

[root@localhost ~]#
[root@localhost ~]# rm -rf /var/lib/kafka   # remove the Kafka data directory
[root@localhost ~]#

3.3.4 Recreate the Kafka container

# Recreate the kafka container

[root@localhost ~]#
[root@localhost ~]# docker run -d --name kafka -v /etc/localtime:/etc/localtime:ro -p 9092:9092 --link zookeeper:zookeeper --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 --env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.2.247:9092 --env KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 --env KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 wurstmeister/kafka
563f64beaba4621a714fc44d6a4f81f9464fab682330eddb51a737bdeb001934
[root@localhost ~]#
[root@localhost ~]# docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS                   PORTS                                                                   NAMES
563f64beaba4   wurstmeister/kafka       "start-kafka.sh"         8 seconds ago   Up 7 seconds             0.0.0.0:9092->9092/tcp, :::9092->9092/tcp                               kafka
8dbbc5f4768e   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   2 weeks ago     Up 2 weeks               22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeeper
b38e9b5b6a2e   hello-world              "/hello"                 2 weeks ago     Exited (0) 2 weeks ago                                                                           infallible_rosalind
[root@localhost ~]#

4. Keep the container clock in sync with the host

[root@localhost ~]# date   # host time before the change
Fri Dec 22 17:06:37 CST 2023
[root@localhost ~]#
[root@localhost ~]# docker exec -it kafka /bin/bash   # enter the kafka container
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# date   # kafka container time
Fri Dec 22 09:06:50 UTC 2023
root@b03ba55d79cb:/#

# Inside the kafka container, sync the timezone with the host:

root@b03ba55d79cb:/#
root@b03ba55d79cb:/# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# date   # kafka container time after the change
Fri Dec 22 17:08:13 CST 2023
root@b03ba55d79cb:/#
root@b03ba55d79cb:/# exit
exit
[root@localhost ~]#
[root@localhost ~]# date   # host time after the change
Fri Dec 22 17:08:18 CST 2023
[root@localhost ~]#

Note: to avoid fixing time sync later, pass the following option when creating the container, so that container and host time stay consistent from the start:
-v /etc/localtime:/etc/localtime:ro
The ro option mounts /etc/localtime read-only, preventing the container from accidentally changing the host's time settings.

This completes the Kafka deployment.
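Appendix: the manual docker run commands above can be collected into the docker-compose.yml referred to at the beginning. The following is only a sketch under the same assumptions as this walkthrough — the wurstmeister images and the example host IP 192.168.2.247; adjust KAFKA_ADVERTISED_LISTENERS to your own address:

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      # keep container time consistent with the host (read-only)
      - /etc/localtime:/etc/localtime:ro
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      # 192.168.2.247 is the example host IP from this walkthrough
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.247:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - /etc/localtime:/etc/localtime:ro
```

Bring both services up with docker-compose up -d; docker-compose down stops and removes them again. Compose also replaces the legacy --link flag with a shared network in which the zookeeper service name resolves automatically.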