founding-backend-engineer-senior-context-engineer@afori-solutions:~/career
โ† Back to CV
Current · Founding Engineer · InsurTech · Agentic AI

Founding Backend Engineer & Senior Context Engineer

AFORI Solutions 🇪🇸 Barcelona, Spain
📅 January 2026 → Present

$ echo $IMPACT_METRICS

0→1 Platform Foundation
40+ Specialized AI Agents
7 Context Layers

$ cat tech-stack.json

🤖 AI & Machine Learning

LangGraph · LangChain · OpenAI GPT-4 · Anthropic Claude · RAG Pipelines · Agentic Workflows · Multi-Layer Context Retrieval · Structured Output Validation · Agent Evaluation (LLM-as-Judge)

⚡ Core Technologies

🔧 Supporting Stack

โ˜๏ธ Infrastructure & DevOps

Turborepo · Docker · GitHub Actions · GitHub Packages · Jest · Vitest

$ cat README.md

AFORI Solutions is building the agentic operating system for insurance: a B2B platform where AI agents work alongside brokers and operators to digitize applications, compare policies, check contracts, summarize claims, and handle day-to-day correspondence across complex, document-heavy workflows.

I joined as founding backend engineer to own the architecture that turns a collection of promising prototypes into a system that real insurance operators can rely on. My role sits at the intersection of two disciplines: classical backend engineering (reliable APIs, event-driven ingestion, multi-tenant data isolation) and context engineering, the emerging discipline of designing how AI agents find, rank, and consume information at inference time.

The work is foundational. Patterns I establish now determine how every new agent, every new document type, and every new tenant will plug into the platform for years to come.

$ git log --oneline responsibilities/

→ Architected the two-stage document ingestion pipeline that decouples content extraction from retrieval publishing, enabling deferred embedding, multi-scope projection, and domain-aware indexing across the entire platform
→ Designed the Agent Context Layer Architecture, a seven-layer context retrieval system with per-layer token budgets, caching policies, and relevance patterns that scope precisely what each agent sees at inference time
→ Built and maintain the backend monorepo (Turborepo) spanning the main API, admin API, APDB API, socket service, agent-worker, agent-cloud, ingestion-api, and analytics-logger
→ Own the AI agents framework used across 40+ specialized agents: broker copilots, policy digitizers, commission report processors, claim extractors, email assistants, and more
→ Lead context engineering strategy: designing retrieval scopes, token budget allocation, citation-aware reranking, and fallback chains that keep agents grounded in real customer data
→ Set engineering standards for the team: code review patterns, testing strategy (Jest, Vitest, LLM-tagged E2E suites), cross-repo dependency management, and release tracking through changesets
→ Drive cross-repo architecture decisions across four tightly coupled repositories (apps, distribution-api, agents, orm) that must evolve in lockstep as schema and contracts change
→ Mentor engineers on agentic patterns: structured outputs, retry and fallback chains, evaluation harnesses, and the subtle differences between a demo agent and a production agent

$ grep -r "achievement" ./

✓ Shipped the production ingestion pipeline that handles documents arriving without scope, materializes canonical artifacts once, and projects embeddings into multiple vector indexes on demand, eliminating redundant extraction work
✓ Designed and documented the multi-layer context architecture that became the reference pattern for all new agents: transactional, statutory, insurance, interaction history, regulatory, operational, and third-party layers, each with explicit token budgets
✓ Reduced agent latency and cost by replacing monolithic prompt loading with scoped, layered retrieval, so agents only pay for the context they actually need
✓ Established the agent evaluation harness with LLM-tagged Jest suites that catch regressions in tool use, structured output shape, and domain reasoning before they ship
✓ Authored reference documentation (execution plans, architecture diagrams, and implementation plans) that the broader team uses to onboard and extend the platform
✓ Published public technical writing on the ingestion pipeline architecture, establishing AFORI’s engineering brand in the agentic AI space

$ cat CHALLENGES.md

Context Windows vs. Document-Heavy Insurance Workflows

🔴 Challenge:

Insurance agents routinely need to reason over accounts, contracts, claims, correspondence, regulatory references, and operational playbooks. Loading even a fraction of this content into a single prompt blows through context windows, inflates cost, and buries the signal in noise.

🟢 Solution:

Designed the Agent Context Layer Architecture: seven layers with distinct entity scopes, rates of change, caching policies, and token budgets. A ContextAssembler service resolves which layers a task needs, queries each layer's document store, ranks chunks by relevance, and assembles a final payload that respects per-layer and total token budgets. Budgets are proportional, so unused allocation redistributes to higher-priority layers.
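The proportional budgeting described above can be sketched in a few lines. This is an illustrative model, not the production ContextAssembler: the layer names, shares, and demand numbers below are hypothetical. Each layer is granted min(share × total, demand), and whatever a layer leaves unused flows to higher-priority layers that still have ranked chunks to spend it on.

```typescript
// Hypothetical sketch of per-layer token budgeting with proportional
// redistribution. All identifiers and numbers are illustrative.

interface LayerBudget {
  layer: string;
  share: number;   // fraction of the total token budget
  demand: number;  // tokens the layer's ranked chunks actually need
}

// Grant min(share * total, demand) per layer, then hand unused tokens
// to layers that still want more, in array order (array order = priority).
function allocateBudgets(total: number, layers: LayerBudget[]): Map<string, number> {
  const granted = new Map<string, number>();
  let leftover = 0;
  for (const l of layers) {
    const base = Math.round(l.share * total);
    const used = Math.min(base, l.demand);
    granted.set(l.layer, used);
    leftover += base - used;
  }
  for (const l of layers) {
    if (leftover <= 0) break;
    const want = l.demand - (granted.get(l.layer) ?? 0);
    if (want > 0) {
      const extra = Math.min(want, leftover);
      granted.set(l.layer, (granted.get(l.layer) ?? 0) + extra);
      leftover -= extra;
    }
  }
  return granted;
}

const plan = allocateBudgets(8000, [
  { layer: "transactional", share: 0.4, demand: 5000 },
  { layer: "statutory",     share: 0.3, demand: 1000 },
  { layer: "regulatory",    share: 0.3, demand: 2400 },
]);
// transactional ends at 4600: its 3200 base plus statutory's unused 1400.
```

The point of the redistribution pass is that a quiet layer (here, statutory) never strands budget that a busy layer could use for more grounding context.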

Documents Without a Home

🔴 Challenge:

In insurance, documents often arrive before anyone knows which case, claim, or policy they belong to: email attachments, broker uploads, third-party syncs. A pipeline that demands scope at arrival time stalls indefinitely, and a pipeline that re-runs extraction per scope wastes compute.

🟢 Solution:

Built a two-stage ingestion pipeline: stage one materializes a canonical extraction artifact as soon as a document arrives, regardless of scope; stage two publishes embeddings into scoped vector indexes when business context is assigned. Extraction runs once, indexes project on demand, and scope assignment becomes a metadata operation rather than a reprocessing event.

TypeScript · Express · BullMQ · PostgreSQL · PGVector · Canonical Artifact Pattern
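The two-stage shape can be sketched minimally. The types, in-memory stores, and paragraph chunking below are illustrative stand-ins, not AFORI's implementation; the real pipeline writes embeddings into scoped PGVector indexes rather than chunk references into maps.

```typescript
// Illustrative sketch of the two-stage ingestion pattern.

interface CanonicalArtifact {
  docId: string;
  chunks: string[]; // extracted once, regardless of scope
}

type Scope = { tenant: string; entity: "case" | "claim" | "policy"; id: string };

const artifacts = new Map<string, CanonicalArtifact>(); // canonical store
const indexes = new Map<string, string[]>();            // scope key -> chunk refs

// Stage 1: materialize the extraction artifact as soon as a document
// arrives, even with no scope. Re-ingesting the same doc is a no-op.
function ingest(docId: string, raw: string): CanonicalArtifact {
  const existing = artifacts.get(docId);
  if (existing) return existing; // extraction runs exactly once
  const artifact: CanonicalArtifact = { docId, chunks: raw.split(/\n\n+/) };
  artifacts.set(docId, artifact);
  return artifact;
}

// Stage 2: when business context is assigned, project the canonical
// chunks into that scope's index. A metadata operation, not a re-run.
function assignScope(docId: string, scope: Scope): void {
  const artifact = artifacts.get(docId);
  if (!artifact) throw new Error(`no canonical artifact for ${docId}`);
  const key = `${scope.tenant}/${scope.entity}/${scope.id}`;
  const refs = artifact.chunks.map((_, i) => `${docId}#${i}`);
  indexes.set(key, [...(indexes.get(key) ?? []), ...refs]);
}
```

Assigning the same document to a second scope (say, both a claim and its policy) only appends references into another index; the expensive extraction in stage one is never repeated.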

Reliable Agents Across Four Repositories

🔴 Challenge:

The platform spans four repositories (apps, distribution-api, agents, orm) that must evolve together. A schema change in orm can silently break an agent; a new agent tool can break a backend service. Without careful coordination, the system fragments.

🟢 Solution:

Established explicit dependency flow (apps → distribution-api → orm, distribution-api → agents → orm), local package linking for in-flight cross-repo changes, and release tracking via changesets. Testing strategy requires updating the nearest dependent repo test whenever a contract changes, not just the source repo.

Turborepo · TypeORM · GitHub Packages · Changesets · Jest
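One way to picture the "nearest dependent repo test" rule is a small runtime contract check that lives in the consuming repo, next to its use of the upstream package. The Policy fields below are hypothetical; the real checks run against the actual orm entities.

```typescript
// Hypothetical contract check a dependent repo (e.g. distribution-api)
// might keep for the fields it relies on from an orm-exported entity.
// If orm renames or retypes a field, this suite fails before any agent
// or backend service downstream breaks silently.

const policyContract = {
  id: "string",
  tenantId: "string",
  premiumCents: "number",
} as const;

// Check that a row exposes every contracted field with the right type.
function satisfiesContract(row: Record<string, unknown>): boolean {
  return Object.entries(policyContract).every(
    ([field, kind]) => typeof row[field] === kind,
  );
}

// A row shaped like the current entity passes the dependent repo's suite;
// a silently renamed column (tenantId -> tenant) fails it first.
```

This is deliberately a runtime check rather than only a compile-time type: it also catches drift in data coming back over package boundaries that the type system no longer sees once artifacts are published.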

Evaluating Agents That Actually Do Work

🔴 Challenge:

Unit tests confirm a function runs; they do not confirm an agent made the right decision, used the right tool, cited the right document, or refused to hallucinate a policy clause. Traditional testing leaves the highest-risk surface of the platform untested.

🟢 Solution:

Built an agent evaluation harness with LLM-tagged Jest projects that run smoke, unit, and end-to-end evaluations against real agent graphs. Evaluations assert on structured output shape, tool call sequences, grounding citations, and refusal behavior. Regressions surface before production, not after.

Jest · LangGraph · LangSmith · Structured Outputs · LLM-as-Judge
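At its core, this kind of evaluation reduces to assertions over an agent run transcript. A framework-agnostic sketch, with illustrative tool names and shapes (the real harness runs as LLM-tagged Jest projects against live agent graphs):

```typescript
// Illustrative evaluation check over a recorded agent run.
// Tool names ("retrieve_policy", "draft_reply") are hypothetical.

interface AgentRun {
  toolCalls: string[]; // ordered tool invocations from the run
  output: { answer: string; citations: string[] };
}

interface EvalResult {
  pass: boolean;
  failures: string[];
}

function evaluateRun(run: AgentRun): EvalResult {
  const failures: string[] = [];

  // Tool-use assertion: retrieval must happen before drafting a reply.
  const retrieveAt = run.toolCalls.indexOf("retrieve_policy");
  const draftAt = run.toolCalls.indexOf("draft_reply");
  if (retrieveAt === -1 || (draftAt !== -1 && draftAt < retrieveAt)) {
    failures.push("drafted before retrieving policy context");
  }

  // Grounding assertion: an answer with no citations is treated as
  // ungrounded, regardless of how fluent it reads.
  if (run.output.citations.length === 0) {
    failures.push("answer has no grounding citations");
  }

  return { pass: failures.length === 0, failures };
}
```

Deterministic checks like these cover output shape, tool sequencing, and grounding; the LLM-as-judge layer sits on top for the domain-reasoning questions that rules cannot express.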

$ cat details.md

Why AFORI

Insurance is one of the most document-heavy industries on earth. Every case, claim, policy, and interaction generates paper, PDFs, emails, and tables that humans have to read, cross-reference, and act on. It is exactly the kind of work where agentic AI should shine, and exactly the kind of work where shallow AI implementations fail publicly.

AFORI is building the platform that closes that gap. I joined because the problem space is real, the team is serious about production quality, and the role lets me do the work I care about most: turning AI capability into dependable engineering.

What a Senior Context Engineer Actually Does

The title is newer than the role it describes. “Context engineering” is the discipline of designing how an AI agent finds, ranks, filters, and consumes information at inference time. It sits between retrieval, prompting, and evaluation, and it is where most agentic systems succeed or fail.

At AFORI, my context engineering work includes:

  • Defining retrieval scopes so agents query the right slice of the data graph
  • Allocating token budgets per layer so no single context source starves the others
  • Designing reranking and citation flows so agents can ground every claim in a source
  • Building evaluation harnesses that measure context quality, not just output fluency
  • Documenting reference patterns the rest of the team can build against

Every agent on the platform inherits these decisions. Good context engineering is invisible; bad context engineering shows up as hallucinations, cost blowouts, and lost user trust.

The Foundation Work

As a founding engineer, most of what I ship is foundational: pipelines, frameworks, patterns, and the documentation that makes them legible. That includes:

  • The two-stage ingestion pipeline that separates extraction from indexing
  • The Agent Context Layer Architecture that defines how agents see the world
  • The agent evaluation harness that prevents regression in production
  • The cross-repo dependency discipline that keeps four repositories shipping in lockstep

None of this is glamorous. All of it is load-bearing.

Why Barcelona

AFORI is headquartered in Barcelona, with a European customer base that demands GDPR discipline, multilingual support, and tight iteration with design partners across the region. Being on European time and close to the customer matters for a platform this operational.


Technologies: LangChain, AI Agents, RAG Systems, PGVector, Node.js, TypeScript, PostgreSQL, OpenAI, Anthropic Claude

Similar Roles: AI Backend Lead at Anaqua | Founder at Sparrow Intelligence | Senior Engineer at Flowrite

Writing: Two-Stage Document Ingestion Pipeline

$ ls -la case-studies/