Connect Kafka with Golang
Introduction
If you need to know the basics of Kafka, such as its key features, components, and advantages, I have an article covering that here. Please review it and follow the steps until you’ve completed the Kafka installation using Docker to proceed with the following sections.
Connecting to Kafka with Golang
Similar to the example in the article about connecting Kafka with Node.js, this source code also includes two parts: initializing a producer to send messages to Kafka, and creating a consumer to subscribe to messages from a topic.
I’ll break down the code into smaller parts for better understanding. First, let’s define the variable values.
```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

var (
	broker  = "localhost:9092"
	groupId = "group-id"
	topic   = "topic-name"
)
```
- Here, the package `github.com/confluentinc/confluent-kafka-go/kafka` is used to connect to Kafka.
- The broker is the Kafka broker's host address; if your broker runs on a different host or port, replace the address accordingly.
- The `groupId` and `topic` can be changed as needed.
Next is initializing the producer.
```go
func startProducer() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": broker})
	if err != nil {
		panic(err)
	}

	// Handle delivery reports asynchronously.
	go func() {
		for e := range p.Events() {
			switch ev := e.(type) {
			case *kafka.Message:
				if ev.TopicPartition.Error != nil {
					fmt.Printf("Delivery failed: %v\n", ev.TopicPartition)
				} else {
					fmt.Printf("Delivered message to %v\n", ev.TopicPartition)
				}
			}
		}
	}()

	for _, word := range []string{"message 1", "message 2", "message 3"} {
		if err := p.Produce(&kafka.Message{
			TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
			Value:          []byte(word),
		}, nil); err != nil {
			fmt.Printf("Produce failed: %v\n", err)
		}
	}

	// Wait (up to 15s) for outstanding messages to be delivered before returning.
	p.Flush(15 * 1000)
}
```
The code above sends an array of messages `{"message 1", "message 2", "message 3"}` to the topic, and uses a goroutine to iterate through delivery events with `for e := range p.Events()`, printing whether each delivery succeeded or failed.
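The message values above are plain strings, but `Value` is just a byte slice, so structured payloads are common in practice. A minimal sketch (the `Event` struct and its fields are hypothetical, not part of the example above) encoding a payload as JSON before handing it to `Produce`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event is a hypothetical message payload; real schemas will vary.
type Event struct {
	ID   int    `json:"id"`
	Text string `json:"text"`
}

// encode marshals an Event into the []byte value a Kafka message carries.
func encode(e Event) ([]byte, error) {
	return json.Marshal(e)
}

func main() {
	value, err := encode(Event{ID: 1, Text: "message 1"})
	if err != nil {
		panic(err)
	}
	// value would be passed as kafka.Message{Value: value}.
	fmt.Println(string(value))
}
```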
Next is creating a consumer to subscribe to the topic and receive messages.
```go
func startConsumer() {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": broker,
		"group.id":          groupId,
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		panic(err)
	}

	if err := c.Subscribe(topic, nil); err != nil {
		panic(err)
	}

	for {
		msg, err := c.ReadMessage(-1)
		if err == nil {
			fmt.Printf("Message on %s: %s\n", msg.TopicPartition, string(msg.Value))
		} else {
			fmt.Printf("Consumer error: %v (%v)\n", err, msg)
			break
		}
	}

	c.Close()
}
```
Finally, since this is a simple example, call the functions to create the producer and consumer for use. In a real-world scenario, the deployment of the producer and consumer is typically done on two different servers in a microservices system.
```go
func main() {
	startProducer()
	startConsumer()
}
```