Fackel: an autonomous pentest framework powered by ReAct agents
Fackel is a multi-agent pentest framework in which LLMs decide strategy rather than following hardcoded pipelines. A walkthrough of the architecture, the design decisions, and the lessons learned.
24 posts on systems, architecture, reliability, and leadership.
A practical deep dive into device code phishing combined with vishing targeting Microsoft Entra: how the OAuth device code flow gets abused, what to monitor, and how to mitigate.
A practical overview of modern AI agent systems: tool use, retrieval, memory, verification, multi-agent patterns, evaluation, and security.
Autoregressive models are just the probability chain rule plus a conditional model. Here’s the mental model, the math, and what training is really doing.
A lightweight memo format that clarifies the call, exposes trade-offs, and speeds up execution.
A rigorous analysis of how probabilistic reasoning in generative models shapes security risk, failure modes, and robustness.
Examines how Kotlin’s type system and language semantics sharpen responsibility boundaries in Spring-style architectures without replacing architectural discipline.
A reflective essay on learning as disciplined endurance of uncertainty, revision, and silence.
Argues that abstraction layers can obscure failure modes, shift risk across boundaries, and weaken assurance unless their assumptions are made explicit.
A deep technical article on Amazon Bedrock, grounded in mathematical foundations and illustrated with numerical examples.
Argues that SICP’s core lesson is the disciplined separation of meaning from mechanism, a prerequisite for reliable and scalable system design.
A post with a Spotify episode embedded at the top.
A quick, code-backed refresher on gradients, Jacobians, and the linear algebra that drives modern ML.
How to publish faster without losing quality: scope, guardrails, and a minimal checklist.
A small, production-ready retry helper using exponential backoff and logging.
Build a small async log streamer that tails a file and ships JSON lines.
A quick editing pass that makes any post shorter and clearer.
A repeatable way to expose options, trade-offs, and a clear call in small teams.
A simple structure to keep 1:1s focused on people, not project status.
Three practical lessons that made the year more sustainable and effective.
Argues that probabilistic behavior, distributional risk, and system composability invalidate core assumptions of classical threat modeling for generative AI.
Argues that postmortems often substitute proximate triggers for causal structure, obscuring system dynamics, incentives, and latent conditions that actually drive failure.