Streaming logs in Rust with Tokio

You get a tiny, focused log tailer you can drop into any service:

  • low latency: events are pushed as they arrive
  • bounded memory: backpressure via streaming reads
  • structured JSON ready for any log pipeline

Shipping logs line-by-line keeps latency low and memory stable. This snippet shows a minimal async tailer that emits JSON.
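The snippet assumes a few crates are on hand; a sketch of the dependencies (versions illustrative — tokio's "full" feature is the simplest way to pull in fs, io-util, and the runtime macros, though a leaner feature set also works):

```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
anyhow = "1"
```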

The streamer

use tokio::{fs::File, io::{AsyncBufReadExt, BufReader}};
use serde::Serialize;

#[derive(Serialize)]
struct LogLine<'a> {
    line: &'a str,
    source: &'a str,
}

async fn stream_file(path: &str, source: &str) -> anyhow::Result<()> {
    let file = File::open(path).await?;
    let reader = BufReader::new(file);
    let mut lines = reader.lines();

    while let Some(line) = lines.next_line().await? {
        let payload = LogLine { line: &line, source };
        let json = serde_json::to_string(&payload)?;
        // Replace with your sink: TCP, HTTP, Kafka, etc.
        println!("{}", json);
    }
    Ok(())
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Run with: cargo run -- path/to/app.log
    let path = std::env::args().nth(1).expect("missing log path");
    stream_file(&path, "app").await
}

Notes

  • BufReader reads in fixed-size chunks, so memory stays bounded; next_line() yields one line at a time instead of buffering the whole file.
  • Swap println! for your preferred sink client.
  • Use tokio::select! to combine multiple files or shutdown signals.
  • Emit structured JSON so downstream consumers can parse reliably.