Founding Backend Engineer & Senior Context Engineer
$ echo $IMPACT_METRICS
$ cat tech-stack.json
🤖 AI & Machine Learning
⚡ Core Technologies
🧠 Supporting Stack
⚙️ Infrastructure & DevOps
$ cat README.md
AFORI Solutions is building the agentic operating system for insurance: a B2B platform where AI agents work alongside brokers and operators to digitize applications, compare policies, check contracts, summarize claims, and handle day-to-day correspondence across complex, document-heavy workflows.
I joined as founding backend engineer to own the architecture that turns a collection of promising prototypes into a system that real insurance operators can rely on. My role sits at the intersection of two disciplines: classical backend engineering (reliable APIs, event-driven ingestion, multi-tenant data isolation) and context engineering, the emerging discipline of designing how AI agents find, rank, and consume information at inference time.
The work is foundational. Patterns I establish now determine how every new agent, every new document type, and every new tenant will plug into the platform for years to come.
$ git log --oneline responsibilities/
$ grep -r "achievement" ./
$ cat CHALLENGES.md
Context Windows vs. Document-Heavy Insurance Workflows
Insurance agents routinely need to reason over accounts, contracts, claims, correspondence, regulatory references, and operational playbooks. Loading even a fraction of this content into a single prompt blows through context windows, inflates cost, and buries the signal in noise.
Designed the Agent Context Layer Architecture: seven layers with distinct entity scopes, rates of change, caching policies, and token budgets. A ContextAssembler service resolves which layers a task needs, queries each layer's document store, ranks chunks by relevance, and assembles a final payload that respects per-layer and total token budgets. Budgets are proportional, so unused allocation redistributes to higher-priority layers.
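The proportional-budget idea can be sketched in a few lines. This is a minimal illustration, not the platform's actual code: the names (`LayerBudget`, `allocateBudgets`) are hypothetical, and the real ContextAssembler also handles retrieval and ranking.

```typescript
// Hypothetical sketch of proportional per-layer token budgets with
// redistribution of unused allocation to higher-priority layers.
interface LayerBudget {
  layer: string;
  weight: number; // relative priority share of the total budget
  demand: number; // tokens the layer's ranked chunks actually need
}

function allocateBudgets(
  layers: LayerBudget[],
  totalTokens: number,
): Map<string, number> {
  const allocations = new Map<string, number>();
  // Higher-weight (higher-priority) layers absorb leftovers first.
  const ordered = [...layers].sort((a, b) => b.weight - a.weight);
  const totalWeight = ordered.reduce((sum, l) => sum + l.weight, 0);
  let leftover = 0;

  // First pass: proportional share, capped at actual demand.
  for (const l of ordered) {
    const share = Math.floor((l.weight / totalWeight) * totalTokens);
    const used = Math.min(share, l.demand);
    leftover += share - used;
    allocations.set(l.layer, used);
  }

  // Second pass: redistribute unused tokens to priority order,
  // up to each layer's remaining demand.
  for (const l of ordered) {
    if (leftover <= 0) break;
    const current = allocations.get(l.layer)!;
    const extra = Math.min(leftover, l.demand - current);
    allocations.set(l.layer, current + extra);
    leftover -= extra;
  }
  return allocations;
}
```

The key property is that a layer with little to say never starves a layer with a lot to say: its unused share flows back to the highest-priority layer with unmet demand.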
Documents Without a Home
In insurance, documents often arrive before anyone knows which case, claim, or policy they belong to: email attachments, broker uploads, third-party syncs. A pipeline that demands scope at arrival time stalls indefinitely, and a pipeline that re-runs extraction per scope wastes compute.
Built a two-stage ingestion pipeline: stage one materializes a canonical extraction artifact as soon as a document arrives, regardless of scope; stage two publishes embeddings into scoped vector indexes when business context is assigned. Extraction runs once, indexes project on demand, and scope assignment becomes a metadata operation rather than a reprocessing event.
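The split can be illustrated with a toy version. The types and function names here (`ExtractionArtifact`, `projectIntoScope`) are illustrative assumptions, not the platform's real API; the point is the shape of the contract between the two stages.

```typescript
// Stage-one output: a canonical artifact produced once, at arrival,
// with no knowledge of which case/claim/policy the document belongs to.
interface ExtractionArtifact {
  documentId: string;
  text: string;
  chunks: string[];
}

// Stage-two output: an entry in a scoped index, created only once
// business context (the scope) is assigned.
interface ScopedIndexEntry {
  scope: string; // e.g. "claim:1234"
  documentId: string;
  chunk: string;
}

// Stage one: runs exactly once per document, scope-free.
function extractOnArrival(documentId: string, rawText: string): ExtractionArtifact {
  const chunks = rawText.split(/\n{2,}/).filter((c) => c.trim().length > 0);
  return { documentId, text: rawText, chunks };
}

// Stage two: a pure projection from the canonical artifact into a scoped
// index, so assigning or re-assigning scope never re-runs extraction.
function projectIntoScope(artifact: ExtractionArtifact, scope: string): ScopedIndexEntry[] {
  return artifact.chunks.map((chunk) => ({
    scope,
    documentId: artifact.documentId,
    chunk,
  }));
}
```

Because stage two is a pure function of the artifact plus a scope string, late or corrected scope assignment is cheap: only the projection reruns, never the expensive extraction.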
Reliable Agents Across Four Repositories
The platform spans four repositories (apps, distribution-api, agents, orm) that must evolve together. A schema change in orm can silently break an agent; a new agent tool can break a backend service. Without careful coordination, the system fragments.
Established explicit dependency flow (apps → distribution-api → orm, distribution-api → agents → orm), local package linking for in-flight cross-repo changes, and release tracking via changesets. The testing strategy requires updating tests in the nearest dependent repo whenever a contract changes, not just in the source repo.
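For flavor, a changesets setup in one of these repos might look like the following `.changeset/config.json`. This is an illustrative example of the tool's standard configuration, not AFORI's actual config; the field values are assumptions.

```json
{
  "changelog": "@changesets/cli/changelog",
  "commit": false,
  "access": "restricted",
  "baseBranch": "main",
  "updateInternalDependencies": "patch",
  "ignore": []
}
```

`updateInternalDependencies: "patch"` is the piece that matters for cross-repo discipline: when a package releases, its in-repo dependents get a version bump too, so downstream consumers see the contract change.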
Evaluating Agents That Actually Do Work
Unit tests confirm a function runs; they do not confirm an agent made the right decision, used the right tool, cited the right document, or refused to hallucinate a policy clause. Traditional testing leaves the highest-risk surface of the platform untested.
Built an agent evaluation harness with LLM-tagged Jest projects that run smoke, unit, and end-to-end evaluations against real agent graphs. Evaluations assert on structured output shape, tool call sequences, grounding citations, and refusal behavior. Regressions surface before production, not after.
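The flavor of those assertions can be sketched with a couple of helpers. The trace types and function names here (`AgentTrace`, `usedToolsInOrder`, `isGrounded`) are hypothetical stand-ins; the real harness runs such checks as LLM-tagged Jest projects against live agent graphs.

```typescript
// A recorded tool invocation from an agent run.
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

// A simplified trace of one agent run.
interface AgentTrace {
  toolCalls: ToolCall[];
  citations: string[]; // document ids the answer grounds itself in
  refused: boolean;
  output: unknown;
}

// Check the agent used the expected tools in order
// (other calls may appear in between).
function usedToolsInOrder(trace: AgentTrace, expected: string[]): boolean {
  let i = 0;
  for (const call of trace.toolCalls) {
    if (i < expected.length && call.name === expected[i]) i++;
  }
  return i === expected.length;
}

// Check the answer is grounded: at least one citation,
// and every citation points at an allowed source document.
function isGrounded(trace: AgentTrace, allowedDocs: Set<string>): boolean {
  return trace.citations.length > 0 && trace.citations.every((c) => allowedDocs.has(c));
}
```

Helpers like these turn "did the agent behave correctly?" into assertions a CI run can fail on, which is what moves regressions from production incidents to red builds.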
$ cat details.md
Why AFORI
Insurance is one of the most document-heavy industries on earth. Every case, claim, policy, and interaction generates paper, PDFs, emails, and tables that humans have to read, cross-reference, and act on. It is exactly the kind of work where agentic AI should shine, and exactly the kind of work where shallow AI implementations fail publicly.
AFORI is building the platform that closes that gap. I joined because the problem space is real, the team is serious about production quality, and the role lets me do the work I care about most: turning AI capability into dependable engineering.
What a Senior Context Engineer Actually Does
The title is newer than the role it describes. “Context engineering” is the discipline of designing how an AI agent finds, ranks, filters, and consumes information at inference time. It sits between retrieval, prompting, and evaluation, and it is where most agentic systems succeed or fail.
At AFORI, my context engineering work includes:
- Defining retrieval scopes so agents query the right slice of the data graph
- Allocating token budgets per layer so no single context source starves the others
- Designing reranking and citation flows so agents can ground every claim in a source
- Building evaluation harnesses that measure context quality, not just output fluency
- Documenting reference patterns the rest of the team can build against
Every agent on the platform inherits these decisions. Good context engineering is invisible; bad context engineering shows up as hallucinations, cost blowouts, and lost user trust.
The Foundation Work
As a founding engineer, most of what I ship is foundational: pipelines, frameworks, patterns, and the documentation that makes them legible. That includes:
- The two-stage ingestion pipeline that separates extraction from indexing
- The Agent Context Layer Architecture that defines how agents see the world
- The agent evaluation harness that prevents regression in production
- The cross-repo dependency discipline that keeps four repositories shipping in lockstep
None of this is glamorous. All of it is load-bearing.
Why Barcelona
AFORI is headquartered in Barcelona, with a European customer base that demands GDPR discipline, multilingual support, and tight iteration with design partners across the region. Being on European time and close to the customer matters for a platform this operational.
Related
Technologies: LangChain, AI Agents, RAG Systems, PGVector, Node.js, TypeScript, PostgreSQL, OpenAI, Anthropic Claude
Similar Roles: AI Backend Lead at Anaqua | Founder at Sparrow Intelligence | Senior Engineer at Flowrite