Orlando Agostinho
Enterprise Rigor Applied to Agentic Systems
I spent 25 years building enterprise software. Now I am making the leap into AI engineering — documenting my journey into agentic systems, context engineering, and practical workflows.
Start here
If you're new, start with these.
The core pattern
I am on the journey to become an AI Engineer. The deeper I go, the clearer the pattern becomes: most agentic systems fail not because the model is weak, but because the context is wrong. Static prompts. Irrelevant retrieval. Unformatted tool outputs. The model hallucinates because it is blind, not because it is dumb.
My focus is learning how to build agentic systems that are secure and stable. That means designing what the model sees, controlling how tools are invoked, and engineering the guardrails that keep agents reliable in production.
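To make "designing what the model sees" concrete, here is a minimal sketch of a context payload assembled from the pieces mentioned above: system prompt, retrieved documents, tool outputs, and memory. Every name here is my own invention for illustration; this is not the API of any real framework.

```python
from dataclasses import dataclass, field

@dataclass
class ContextPayload:
    """The parts of the context window an agent actually sees.
    Hypothetical structure for illustration, not a framework API."""
    system_prompt: str
    retrieved_docs: list[str] = field(default_factory=list)
    tool_outputs: list[str] = field(default_factory=list)
    memory: list[str] = field(default_factory=list)

    def render(self) -> str:
        # Deterministic ordering: instructions first, freshest signals last,
        # so the model never has to guess which text is authoritative.
        sections = [
            ("SYSTEM", [self.system_prompt]),
            ("MEMORY", self.memory),
            ("RETRIEVED", self.retrieved_docs),
            ("TOOL OUTPUTS", self.tool_outputs),
        ]
        lines = []
        for label, items in sections:
            for item in items:
                lines.append(f"[{label}] {item}")
        return "\n".join(lines)

payload = ContextPayload(
    system_prompt="You are a billing support agent.",
    retrieved_docs=["Refund policy v3: refunds within 30 days."],
    tool_outputs=["lookup_order(1042) -> status=shipped"],
    memory=["User prefers concise answers."],
)
print(payload.render())
```

The point of the sketch is that the payload is built by code you control, in an order you control, instead of concatenating whatever happens to be lying around.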
Context Engineering: The Skill That Replaced Prompt Engineering
Why context design matters more than prompt tweaking — and the five elements every production context payload needs.
OpenClaw Power User Guide
Sessions, heartbeats, routing, and security — a field guide to running OpenClaw in real workflows.
What I'm building now
Current focus areas on my journey to become an AI Engineer.
- 01 AI workflows — designing end-to-end AI workflows that connect models, tools, and data sources in a way that is repeatable and reliable.
- 02 Agentic systems — learning how to build secure and stable agent systems using Agentforce and open frameworks, with a focus on guardrails and observability.
- 03 OpenClaw experiments — hands-on experiments with OpenClaw sessions, heartbeats, routing, and sub-agent orchestration. Read the field guide.
- 04 Context engineering — mastering how to design what the model sees: system prompts, retrieved documents, tool outputs, and memory. Read the deep dive.
- 05 Practical notes — short, direct notes from real experiments. What worked, what broke, and what I would do differently next time.
The Lab
My personal AI assistant — built from scratch, learned in public.
Building my own AI assistant
I decided to create my own personal assistant — similar to OpenClaw, but with a different approach. The goal is to understand the full stack: from model orchestration to user interaction design across Telegram and Discord.
What I discovered is that the interface shapes the workflow. How we interact with coding agents in chat platforms is fundamentally different from how we work inside IDEs. This project is where I explore that boundary.
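One way to explore that boundary without coupling the assistant to any one platform: push the platform differences behind a single interface and keep the agent logic identical on every surface. A minimal sketch, with all names invented for illustration; the real Telegram and Discord SDKs look nothing like this.

```python
from typing import Protocol

class ChatSurface(Protocol):
    """Anything the assistant can talk through: Telegram, Discord, a CLI.
    Hypothetical interface for illustration."""
    def send(self, chat_id: str, text: str) -> None: ...

class Assistant:
    def __init__(self, surface: ChatSurface) -> None:
        self.surface = surface

    def handle(self, chat_id: str, message: str) -> None:
        # The orchestration logic stays the same no matter which
        # surface is plugged in; only the transport changes.
        reply = f"echo: {message}"  # placeholder for real model orchestration
        self.surface.send(chat_id, reply)

class ConsoleSurface:
    """A stand-in surface for local testing."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, chat_id: str, text: str) -> None:
        self.sent.append((chat_id, text))

surface = ConsoleSurface()
Assistant(surface).handle("chat-1", "hello")
```

The interesting design work then lives in what each surface makes easy or hard: threads, reactions, and message length limits all shape the workflow differently.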
The OpenClaw Ecosystem
Why I am focused on OpenClaw, and what just happened at NVIDIA GTC 2026.
OpenClaw is rapidly becoming the standard for agentic AI. At NVIDIA GTC 2026, CEO Jensen Huang made it clear: "Every company in the world today needs to have an OpenClaw strategy."
The biggest challenge with OpenClaw has been enterprise security. NVIDIA moved to close that gap by announcing NemoClaw — an enterprise-grade platform built on top of OpenClaw. It introduces OpenShell, a runtime that provides kernel-level sandboxing and a "privacy router" that monitors agent traffic and blocks sensitive data from going where it shouldn't.
This is exactly why I am focusing my learning here. The gap between a cool local agent and a secure enterprise deployment is where the real engineering happens. NemoClaw makes OpenClaw viable for the strict compliance environments I spent 25 years building for.
- NemoClaw — NVIDIA's secure, enterprise-grade wrapper for OpenClaw agents.
- OpenShell — the new runtime providing sandbox isolation and privacy routing.
- The new paradigm where AI-to-AI communication drives inference demand.
Why this site exists
Most AI content is too vague, too hype-driven, or too disconnected from production reality.
This site is where I document my transition into AI engineering. I am taking 25 years of enterprise software discipline and applying it to modern agentic systems. Not just demos. I am exploring real workflows, studying failure modes, and figuring out how to build AI systems that can survive contact with production.
AI project failures
Studying why enterprise AI projects fail at the context layer, not the model layer.
Context window design
How to structure what the model sees — system prompts, retrieved docs, tool outputs, memory — for reliable results.
Production workflows
Agent orchestration, OpenClaw experiments, and AI systems that do real work beyond the demo.
Why listen to me
I spent 25 years in the enterprise software world. Finance. Insurance. Telecom. I built systems that had to work under real constraints: compliance, scale, and zero tolerance for failure.
Now I am making the leap into AI engineering. Not because it is trendy. Because it is the next hard problem. I am learning how to build agentic systems that are secure and stable, and I am documenting every step of that journey here.
I write for people who are on the same path. Experienced developers who know how to build things, and are now figuring out how AI changes the game.
Featured
Context Engineering: The Skill That Replaced Prompt Engineering
Prompt engineering is dead. The real skill in 2026 is designing the full context window—system prompts, memory, tool outputs, and retrieved docs—that drives agent behavior.
Recent writing
OpenClaw Power User Guide
A practical field guide to OpenClaw 2026.3.2 covering the gateway, sessions, heartbeats, multi-agent routing, security, and the pi.dev bridge.
Want to see everything? Browse all posts.
All Posts