Strimzi is:

  • an Operator that deploys Apache Kafka clusters on Kubernetes via CRDs
  • a CNCF member project

1. Apply the CRDs

# create namespace
% kubectl create ns kafka

# apply crd
% kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
customresourcedefinition.apiextensions.k8s.io/kafkas.kafka.strimzi.io created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-entity-operator-delegation created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-topic-operator-delegation created
customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkarebalances.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormaker2s.kafka.strimzi.io created
clusterrole.rbac.authorization.k8s.io/strimzi-entity-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-global created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-broker-delegation created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-namespaced created
clusterrole.rbac.authorization.k8s.io/strimzi-topic-operator created
serviceaccount/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-broker created
customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkabridges.kafka.strimzi.io created
deployment.apps/strimzi-cluster-operator created
customresourcedefinition.apiextensions.k8s.io/kafkaconnectors.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects2is.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormakers.kafka.strimzi.io created

Once the install completes, the kafka namespace contains a running strimzi-cluster-operator Deployment; with the operator in place, we can deploy our Kafka cluster.

2. Deploy Kafka Cluster

Many parameters have defaults (automatic topic creation, for example); see https://strimzi.io/quickstarts/ for details. Our target cluster:

  • Kafka

    • Nodes: 3
    • Persistent storage: 100Gi per node
  • ZooKeeper

    • Nodes: 3
    • Persistent storage: 10Gi per node
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: log-kafka
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.5"
    storage:
      type: persistent-claim
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

Save the manifest above (e.g. as kafka.yaml) and apply it with kubectl apply -f kafka.yaml -n kafka; once the operator finishes reconciling, the following resources exist:

% kubectl get po -n kafka
NAME                                         READY   STATUS    RESTARTS   AGE
log-kafka-entity-operator-85698f64b4-klsvw   3/3     Running   0          2m41s
log-kafka-kafka-0                            2/2     Running   0          3m20s
log-kafka-kafka-1                            2/2     Running   0          3m20s
log-kafka-kafka-2                            2/2     Running   0          3m20s
log-kafka-zookeeper-0                        1/1     Running   0          4m16s
log-kafka-zookeeper-1                        1/1     Running   0          4m16s
log-kafka-zookeeper-2                        1/1     Running   1          4m16s
strimzi-cluster-operator-9968fd8c9-6v5g9     1/1     Running   0          7m36s
% kubectl get svc -n kafka
NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
log-kafka-kafka-bootstrap    ClusterIP   172.20.194.0    <none>        9091/TCP,9092/TCP,9093/TCP   3m35s
log-kafka-kafka-brokers      ClusterIP   None            <none>        9091/TCP,9092/TCP,9093/TCP   3m35s
log-kafka-zookeeper-client   ClusterIP   172.20.82.214   <none>        2181/TCP                     4m31s
log-kafka-zookeeper-nodes    ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP   4m31s
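Because topicOperator is enabled in the entityOperator block above, topics can also be managed declaratively through a KafkaTopic resource rather than relying on broker-side auto-creation. A minimal sketch (the partition and replica counts here are illustrative assumptions, not values from this deployment):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: eks
  namespace: kafka
  labels:
    # the Topic Operator uses this label to find the owning cluster
    strimzi.io/cluster: log-kafka
spec:
  partitions: 3
  replicas: 3
```

Applying it with kubectl apply -f topic.yaml lets the Topic Operator create and reconcile the topic inside the log-kafka cluster.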

3. Ship Logs with Fluentd

Because we run on Rancher, we can keep the log-shipping setup simple and use the Fluentd integration built into Rancher 2.3 and later.

In the Rancher UI: cluster name → Tools → Logging

  • Kafka

    • Endpoint type: Broker
    • Endpoint address: http://log-kafka-kafka-bootstrap.kafka:9092
    • Topic: eks
    • SSL configuration: leave everything empty
    • SASL configuration: leave everything empty
    • Other logging settings: defaults

      • Flush interval: 60 seconds
      • Include system logs
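For reference, the UI settings above correspond roughly to a fluent-plugin-kafka output section like the following. This is a hand-written sketch using that plugin's documented parameters, not the exact config Rancher generates:

```
<match **>
  @type kafka2
  # plain listener of the bootstrap service created in step 2
  brokers log-kafka-kafka-bootstrap.kafka:9092
  default_topic eks
  <format>
    @type json
  </format>
  <buffer>
    # matches the 60-second flush interval configured in the UI
    flush_interval 60s
  </buffer>
</match>
```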
Last modification: August 6th, 2020 at 02:02 pm