Logging at scale: clustering OpenObserve for ingestion

2025-09-01 10:09:34

NATS server installation

We will install the NATS server on three different hosts.

sudo apt install nats-server

Create a directory for the JetStream store and give it the right owner.

sudo mkdir /opt/nats
sudo chown nats:nats /opt/nats

Configure the nodes as below: each node gets a unique server name, plus routes to the other two hosts.

server_name: node1
port: 4222

cluster {
  name: my_nats_cluster
  listen: 0.0.0.0:6222

  routes = [
    nats://node2:6222
    nats://node3:6222
  ]
}

jetstream {
  store_dir: "/opt/nats"
}
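
On node2 and node3, repeat the same config, changing server_name and pointing the routes at the other two hosts. To verify that the cluster actually forms, it helps to expose the NATS monitoring endpoint. A minimal sketch, assuming the apt package ships a systemd unit named nats-server that reads /etc/nats-server.conf:

# optional, add to the config on every node: expose monitoring on port 8222
http_port: 8222

# restart, then check that routes to both peers are established
sudo systemctl restart nats-server
curl -s http://localhost:8222/routez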

Alternatively, we can set the cluster up with Docker Compose. Install the package.

sudo apt install docker-compose

Create a docker-compose.yaml

version: "3.8"
services:
  n1:
    image: nats:2
    command: >
      -js -sd /data/jetstream -n n1
      -p 4222 -m 8222
      -cluster_name jsdemo
      -cluster nats://0.0.0.0:6222
      -routes nats://n2:6222,nats://n3:6222
    ports: ["4222:4222","8222:8222"]
  n2:
    image: nats:2
    command: >
      -js -sd /data/jetstream -n n2
      -p 4222 -m 8222
      -cluster_name jsdemo
      -cluster nats://0.0.0.0:6222
      -routes nats://n1:6222,nats://n3:6222
  n3:
    image: nats:2
    command: >
      -js -sd /data/jetstream -n n3
      -p 4222 -m 8222
      -cluster_name jsdemo
      -cluster nats://0.0.0.0:6222
      -routes nats://n1:6222,nats://n2:6222

Next we start the cluster:

docker-compose up -d
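
Compose maps n1's monitoring port to the host, so once the containers are up we can check that both routes are connected and JetStream is enabled:

curl -s http://localhost:8222/routez
curl -s http://localhost:8222/jsz

Note that this compose file keeps the JetStream data inside the containers; mount a volume at /data/jetstream on each node if the streams should survive container recreation.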

OpenObserve installation

First off we install a few packages that OpenObserve needs: Node.js, protobuf, and PostgreSQL. The frontend is written in Node.js, most of the messages sent are protobuf-encoded, and the PostgreSQL database will keep your metadata. Node.js comes from NodeSource: download the setup script, inspect it, then run it.

sudo apt install git curl protobuf-compiler postgresql
curl -sL https://deb.nodesource.com/setup_22.x -o /tmp/nodesource_setup.sh
vi /tmp/nodesource_setup.sh
sudo bash /tmp/nodesource_setup.sh
sudo apt-get install nodejs -y

Next up we download and install Rust, since the backend is written in it.

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
. "$HOME/.cargo/env"

We now clone the repository, check out a version, and build the frontend and backend. Everything is bundled into a single binary that we copy to /usr/local/bin for execution.

git clone https://github.com/openobserve/openobserve.git
cd openobserve
git checkout v0.15.0-rc5
cd web
npm install
npm run build
cd ..
cargo build -r
sudo cp target/release/openobserve /usr/local/bin
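
Before wiring up the cluster, it is worth a quick smoke test of the binary in standalone mode. A minimal sketch with throwaway credentials; this runs in local mode and creates a ./data directory under the current working directory:

ZO_ROOT_USER_EMAIL="root@example.com" ZO_ROOT_USER_PASSWORD="qwerty" /usr/local/bin/openobserve

It should come up listening on port 5080; stop it with Ctrl+C before continuing.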

Next we prepare a data directory the system will use for temporary storage during processing, plus a dedicated user to run the service.

sudo mkdir /opt/openobserve
sudo groupadd openobserve
sudo useradd -r -g openobserve -s /usr/sbin/nologin openobserve
sudo chown openobserve:openobserve -R /opt/openobserve

Log in to PostgreSQL to set up the database.

sudo -u postgres psql

We will create a user and a database for the OpenObserve metadata, then connect to the new database to grant rights on its public schema.

CREATE USER openobserve WITH LOGIN PASSWORD 'change_me_now';
CREATE DATABASE openobserve OWNER openobserve TEMPLATE template0 ENCODING 'UTF8';
\c openobserve
GRANT USAGE, CREATE ON SCHEMA public TO openobserve;
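
To confirm the new role can log in with the same DSN we are about to hand to OpenObserve:

psql "postgres://openobserve:change_me_now@localhost:5432/openobserve" -c '\conninfo'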

Now we configure the long-term storage in an S3 bucket, the PostgreSQL login details, the initial root user, and the NATS cluster, all as environment variables in the /etc/systemd/system/openobserve.env file.

ZO_S3_SERVER_URL="http://ceph-service:8888/openobserve"
ZO_S3_ACCESS_KEY=T2JJAAAAAAAAAAAAV7WS0
ZO_S3_SECRET_KEY=WLvObPAAAAAAAAAAAAAAAAAAAAAAAA6vB5smDMo
ZO_S3_BUCKET_NAME=openobserve
ZO_S3_FEATURE_FORCE_HOSTED_STYLE=true
ZO_META_STORE=postgres
ZO_META_POSTGRES_DSN="postgres://openobserve:change_me_now@localhost:5432/openobserve"
ZO_LOCAL_MODE=false
ZO_ROOT_USER_EMAIL="root@example.com"
ZO_ROOT_USER_PASSWORD="qwerty"
ZO_NATS_ADDR=localhost:4222
ZO_CLUSTER_COORDINATOR=nats
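
The file contains the S3 keys and the database password, so lock the permissions down. systemd reads the EnvironmentFile as root before dropping to the service user, so root-only access is enough:

sudo chown root:root /etc/systemd/system/openobserve.env
sudo chmod 600 /etc/systemd/system/openobserve.env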

Next we create a service file, /etc/systemd/system/openobserve.service, so we can run this under systemd.

[Unit]
Description=OpenObserve service
Wants=network-online.target
After=network-online.target

[Service]
WorkingDirectory=/opt/openobserve
EnvironmentFile=/etc/systemd/system/openobserve.env
Type=simple
User=openobserve
Group=openobserve
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStart=/usr/local/bin/openobserve

[Install]
WantedBy=multi-user.target

Reload systemd, then enable and start the service.

sudo systemctl daemon-reload
sudo systemctl enable --now openobserve
sudo systemctl status openobserve
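
OpenObserve listens on port 5080 by default, so a quick health check from the same host should return a small status payload; if it doesn't, journalctl has the service logs:

curl -s http://localhost:5080/healthz
journalctl -u openobserve -e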

If you haven't already, install NTP so that all the servers agree on the time.

sudo apt-get install ntp

Filebeat

We install wget, fetch the Elasticsearch signing key (apt-key is deprecated, so we store it as a keyring), set up the repository, and install Filebeat. Then we move the stock configuration aside and write a new one.

sudo apt install wget apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/9.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-9.x.list
sudo apt-get update && sudo apt-get install filebeat
sudo mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.old
sudo vi /etc/filebeat/filebeat.yml

We add this configuration to pick up all log files in /var/log and push them to the default organization's default stream, authenticating with the root user defined earlier.

setup.ilm.enabled: false
setup.template.enabled: false

filebeat.inputs:
- type: filestream
  enabled: true
  id: ceph-logs
  paths:
    - /var/log/*.log

output.elasticsearch:
  hosts: ["http://localhost:5080"]
  timeout: 10
  path: "/api/default/"
  index: "default"
  username: "root@example.com"
  password: "qwerty"
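
Filebeat can validate the configuration and the connection to the output before we start shipping anything:

sudo filebeat test config
sudo filebeat test output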

Reload systemd and restart Filebeat. All your logs should now flow into OpenObserve.

sudo systemctl daemon-reload
sudo systemctl restart filebeat
sudo systemctl status filebeat
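
To verify the pipeline end to end, push a test record into the same stream through OpenObserve's JSON ingestion endpoint, then look for it in the UI on port 5080:

curl -s -u root@example.com:qwerty -d '[{"level":"info","message":"hello from curl"}]' http://localhost:5080/api/default/default/_json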
