Instrumenting PostgreSQL with OpenTelemetry

Built with Python, Docker, OpenTelemetry, Jaeger, and Prometheus.

This project demonstrates how to instrument a PostgreSQL database using OpenTelemetry, enabling both tracing (Jaeger) and metrics (Prometheus).
It uses Docker Compose to set up a small environment with:

  • A Python app performing CRUD operations on PostgreSQL
  • OpenTelemetry Collector forwarding data
  • Jaeger for traces
  • Prometheus for metrics

(Architecture diagram: Python app → OTel Collector → Jaeger and Prometheus)


Prerequisites

Before you begin, ensure you have the following installed:

  • Docker and Docker Compose – for running containers
  • Python 3.9+ – used in the demo app

Step 1: Set Up the Project Structure

Create a new directory for your OpenTelemetry + PostgreSQL demo:

mkdir OTEL-PG-Project && cd OTEL-PG-Project
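Once you have worked through Steps 2–6, the directory will hold six files:

```
OTEL-PG-Project/
├── app.py                      # demo application (Step 2)
├── requirements.txt            # Python dependencies (Step 3)
├── Dockerfile                  # app image (Step 3)
├── otel-collector-config.yml   # Collector pipelines (Step 4)
├── prometheus.yml              # Prometheus scrape config (Step 5)
└── docker-compose.yml          # runs everything (Step 6)
```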

Step 2: Writing the Application Code

Create a file named app.py and divide it into three logical parts.


Part 1: OpenTelemetry Configuration

Sets up tracing and metrics using OpenTelemetry. It configures exporters, creates tracer and meter providers, and instruments psycopg2.

import os
import time
import psycopg2
from opentelemetry import trace, metrics
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# Resource definition for observability metadata
resource = Resource.create({"service.name": "python-postgresql-app"})

# Tracing configuration
tracer_provider = TracerProvider(resource=resource)
trace.set_tracer_provider(tracer_provider)
span_processor = BatchSpanProcessor(OTLPSpanExporter(
    # Send traces to the Collector, which forwards them to Jaeger
    endpoint=os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://otel-collector:4317"),
    insecure=True
))
tracer_provider.add_span_processor(span_processor)

# Metrics configuration
metric_reader = PeriodicExportingMetricReader(OTLPMetricExporter(
    endpoint=os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://otel-collector:4317"),
    insecure=True
))
meter_provider = MeterProvider(resource=resource, metric_readers=[metric_reader])
metrics.set_meter_provider(meter_provider)

# Get tracer and meter
tracer = trace.get_tracer(__name__)
meter = metrics.get_meter(__name__)

# Define a counter metric
db_operations_total = meter.create_counter(
    name="db.operations.total",
    description="Total number of database operations.",
    unit="{operation}"
)

# Instrument psycopg2 for auto-tracing DB queries
Psycopg2Instrumentor().instrument()

Part 2: Database Logic & Instrumentation

Defines utility functions for database operations, all instrumented with traces and metrics.

def get_db_connection():
    max_attempts = 5
    current_attempt = 0
    while current_attempt < max_attempts:
        try:
            conn = psycopg2.connect(
                host=os.getenv("DB_HOST", "postgres"),
                database=os.getenv("DB_NAME", "testdb"),
                user=os.getenv("DB_USER", "user"),
                password=os.getenv("DB_PASSWORD", "password")
            )
            print(f"Connected to PostgreSQL on attempt {current_attempt + 1}")
            return conn
        except psycopg2.OperationalError as e:
            print(f"Attempt {current_attempt + 1} failed: {e}")
            current_attempt += 1
            time.sleep(2)
    raise RuntimeError(f"Failed to connect to PostgreSQL after {max_attempts} attempts")


def create_table():
    with tracer.start_as_current_span("create_table"):
        conn = get_db_connection()
        cur = conn.cursor()
        cur.execute("DROP TABLE IF EXISTS users;")
        cur.execute("CREATE TABLE users (id SERIAL PRIMARY KEY, name VARCHAR(100), email VARCHAR(100));")
        conn.commit()
        cur.close()
        conn.close()
        db_operations_total.add(2, attributes={"operation": "create_table"})


def insert_user(name, email):
    with tracer.start_as_current_span("insert_user") as span:
        span.set_attribute("user.name", name)
        span.set_attribute("user.email", email)
        conn = get_db_connection()
        cur = conn.cursor()
        cur.execute("INSERT INTO users (name, email) VALUES (%s, %s);", (name, email))
        conn.commit()
        cur.close()
        conn.close()
        db_operations_total.add(1, attributes={"operation": "insert_user"})

Other similar functions like get_users(), update_user_email(), and delete_user() follow the same tracing + metrics pattern.
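As one concrete instance of that pattern, a `get_users()` sketch might look like the following. It relies on the `tracer`, `db_operations_total`, and `get_db_connection()` already defined in Parts 1 and 2, and the `user.count` span attribute is an illustrative addition, not part of the original code:

```python
# Sketch of get_users(), following the same tracing + metrics pattern.
# Assumes tracer, db_operations_total, and get_db_connection() from Parts 1-2.
def get_users():
    with tracer.start_as_current_span("get_users") as span:
        conn = get_db_connection()
        cur = conn.cursor()
        cur.execute("SELECT id, name, email FROM users;")
        rows = cur.fetchall()
        span.set_attribute("user.count", len(rows))  # illustrative attribute
        cur.close()
        conn.close()
        db_operations_total.add(1, attributes={"operation": "get_users"})
        for user_id, name, email in rows:
            print(f"  [{user_id}] {name} <{email}>")
        return rows
```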


Part 3: Running the Demo

if __name__ == "__main__":
    print("Starting OpenTelemetry PostgreSQL Demo...")
    create_table()
    insert_user("Alice", "[email protected]")
    insert_user("Bob", "[email protected]")
    get_users()
    update_user_email("Alice", "[email protected]")
    get_users()
    delete_user("Bob")
    get_users()
    # Flush buffered telemetry before this short-lived process exits;
    # otherwise the batch span processor and periodic metric reader may drop data.
    tracer_provider.shutdown()
    meter_provider.shutdown()
    print("Check Jaeger (http://localhost:16686) for traces and Prometheus (http://localhost:9090) for metrics.")

Step 3: Add Dependencies and Dockerize the App

requirements.txt

psycopg2-binary==2.9.9
opentelemetry-sdk==1.20.0
opentelemetry-exporter-otlp==1.20.0
opentelemetry-instrumentation-psycopg2==0.41b0

Dockerfile

FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["python", "app.py"]

Step 4: Set Up the OpenTelemetry Collector

Create a file named otel-collector-config.yml (this must match the filename mounted in docker-compose.yml below):

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
  prometheus:
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
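The prometheus exporter in this config serves everything the Collector has received on port 8889 in Prometheus exposition format. Once the full stack is running (Step 6), you can sanity-check it from the host before involving Prometheus at all; the grep pattern assumes the demo's `db.operations.total` counter, whose dots become underscores on the Prometheus side:

```shell
# Inspect the Collector's Prometheus endpoint directly (stack must be running)
curl -s http://localhost:8889/metrics | grep db_operations
```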

Step 5: Configure Prometheus

Create a prometheus.yml file:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "otel-collector"
    static_configs:
      - targets: ["otel-collector:8889"]

Step 6: Run All Services with Docker Compose

Create docker-compose.yml:

version: '3.8'

services:
  postgres:
    image: postgres:13
    container_name: otel-postgres
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  jaeger:
    image: jaegertracing/all-in-one:latest
    container_name: otel-jaeger
    environment:
      COLLECTOR_OTLP_ENABLED: "true"  # accept OTLP gRPC from the Collector on 4317
    ports:
      - "6831:6831/udp"
      - "16686:16686"

  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.99.0
    container_name: otel-collector
    command: ["--config=/etc/otel-collector-config.yml"]
    volumes:
      - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    ports:
      - "4317:4317"
      - "4318:4318"
      - "8889:8889"
    depends_on:
      - jaeger
      - prometheus

  prometheus:
    image: prom/prometheus:v2.52.0
    container_name: otel-prometheus
    command: --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  app:
    build: .
    container_name: otel-python-app
    environment:
      DB_HOST: postgres
      DB_NAME: testdb
      DB_USER: user
      DB_PASSWORD: password
      OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317
    depends_on:
      - postgres
      - otel-collector

volumes:
  pgdata:

Start everything:

docker-compose up --build -d

The app container runs the demo once and exits; follow its output with docker-compose logs -f app.

Step 7: View Traces and Metrics

View Traces in Jaeger

  • Open: http://localhost:16686
  • Select python-postgresql-app under Service
  • Click Find Traces. Each trace includes spans like insert_user, get_users, etc.

(Screenshot: Jaeger trace view)

View Metrics in Prometheus

  • Open: http://localhost:9090

  • Go to Status → Targets → confirm otel-collector is UP

  • In the Expression box, run:

    db_operations_total
    
  • Click Execute to view live metrics

(Screenshot: Prometheus expression results)
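The query above returns a cumulative counter. To break it down by the `operation` attribute (exposed as a Prometheus label) or to see throughput rather than a running total, a PromQL expression along these lines can also be used; the 5m range is an arbitrary window:

```promql
sum by (operation) (rate(db_operations_total[5m]))
```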


🧾 Summary

This demo illustrates:

  • How to instrument PostgreSQL operations using OpenTelemetry
  • How to export traces to Jaeger and metrics to Prometheus
  • How to visualize application performance and database interactions
