senior-software-engineer@flowrite:~/career
Acquired · Startup · AI-First

Senior Software Engineer

Flowrite 🇫🇮 Helsinki, Finland
📅 June 2022 → August 2023 (1 year 3 months)

$ echo $IMPACT_METRICS

10K→100K User Growth
40-50% Infrastructure Savings
99.9% System Uptime

$ cat tech-stack.json

🤖 AI & Machine Learning

OpenAI GPT-3/3.5 · Cohere · Gantry (AI Observability) · LLM Feature Engineering · Prompt Engineering

☁️ Infrastructure & DevOps

$ cat README.md

Flowrite was a pioneering AI startup in the LLM space — one of the first companies in Europe to build a production-grade generative AI product. Our email assistant helped professionals write emails 10x faster using AI.

I joined during a critical growth phase and led the AI backend architecture that powered our rapid scaling from 10,000 to 100,000 users. The product’s success led to acquisition by MailMerge in 2024.

This was one of my most challenging and rewarding roles: shipping AI features at startup speed while maintaining the reliability that paying customers demanded.

$ git log --oneline responsibilities/

Led AI backend architecture from initial design through production scaling, building systems that supported rapid feature deployment and 10x user growth
Integrated multiple LLM providers (OpenAI, Cohere) with intelligent fallbacks and cost optimization, achieving 40-50% infrastructure savings
Implemented AI observability with Gantry to monitor model performance, detect drift, and ensure consistent output quality
Built event-driven architectures using RabbitMQ and Celery for asynchronous AI processing, handling spiky traffic patterns gracefully
Developed GraphQL and gRPC APIs for efficient frontend-backend communication, minimizing latency for the Chrome extension
Managed complex data pipelines from Hasura through PostgreSQL and MongoDB to BigQuery for analytics
Mastered specialized tools rapidly — Nomad, NixOS, Gantry — to enhance deployment consistency and developer productivity
Mentored junior engineers and established best practices for AI feature development

$ grep -r "achievement" ./

Scaled AI backend 10x from 10,000 to 100,000 active users without proportional infrastructure cost increase
Reduced infrastructure costs by 40-50% through infrastructure and database optimization (AWS, Hetzner, MongoDB tuning) combined with multi-provider LLM routing
Maintained 99.9% uptime during rapid growth phase with proactive monitoring and graceful degradation
Resolved live deployment bottlenecks under pressure, working closely with product and engineering teams
Integrated Stripe payments with multi-currency support and the Cello referral program to drive growth
Built Chrome Extension backend that seamlessly integrated with Gmail and other email clients

$ cat CHALLENGES.md

LLM Latency for Real-Time Email Suggestions

🔴 Challenge:

Users expected instant email suggestions, but LLM inference times of 2-5 seconds created a poor experience, especially for short emails.

🟢 Solution:

Implemented streaming responses using Server-Sent Events, allowing users to see text generating in real-time. Built a speculative caching layer that pre-generated common response patterns. Optimized prompts to reduce token count without sacrificing quality.

TypeScript · SSE · Redis · OpenAI Streaming API
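The streaming idea can be sketched in a few lines. This is a simplified illustration, not our production code: `fakeCompletion` stands in for the provider's streaming API, and the frame format is the standard SSE `data:` convention.

```typescript
// Minimal sketch: stream LLM tokens to the browser as Server-Sent Events
// so the user sees text appear as it is generated.

// Stand-in for the provider's streaming completion API (hypothetical).
async function* fakeCompletion(prompt: string): AsyncGenerator<string> {
  for (const token of ["Hi ", "team, ", "quick ", "update."]) {
    yield token; // a real integration would yield chunks from the provider
  }
}

// Wrap each token in an SSE frame the Chrome extension can consume.
function toSseFrame(token: string): string {
  return `data: ${JSON.stringify({ token })}\n\n`;
}

async function streamSuggestion(prompt: string): Promise<string[]> {
  const frames: string[] = [];
  for await (const token of fakeCompletion(prompt)) {
    frames.push(toSseFrame(token));
  }
  frames.push("data: [DONE]\n\n"); // sentinel so the client knows to close
  return frames;
}
```

In production the frames would be written to the HTTP response as they arrive rather than collected into an array; the first token reaching the user in a few hundred milliseconds is what makes a 2-5 second total generation feel instant.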

Cost Control with Unpredictable Traffic

🔴 Challenge:

User traffic was highly spiky (Monday mornings, end-of-quarter), and LLM costs could spiral quickly without careful management.

🟢 Solution:

Designed a multi-provider architecture with intelligent routing between OpenAI and Cohere based on task complexity and cost. Implemented per-tier token budgets with graceful degradation as users approached their limits. This saved 40-50% on infrastructure costs.

Python · Redis · Custom Rate Limiting · Multi-Provider SDK
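The per-tier budgeting looks roughly like this. The tier names, limits, and the 90% degradation threshold are illustrative assumptions, not Flowrite's real numbers:

```typescript
// Sketch: per-tier daily token budgets with graceful degradation.
type Tier = "free" | "pro" | "team";

// Illustrative limits only.
const DAILY_TOKEN_BUDGET: Record<Tier, number> = {
  free: 20_000,
  pro: 200_000,
  team: 1_000_000,
};

interface BudgetDecision {
  allowed: boolean;
  degraded: boolean; // true → route to the cheaper provider / trim the prompt
}

function checkBudget(
  tier: Tier,
  tokensUsedToday: number,
  requested: number
): BudgetDecision {
  const budget = DAILY_TOKEN_BUDGET[tier];
  const projected = tokensUsedToday + requested;
  if (projected > budget) return { allowed: false, degraded: false };
  // Within the last 10% of budget: still serve, but take the cheaper path.
  return { allowed: true, degraded: projected > budget * 0.9 };
}
```

The counters themselves lived in Redis so every backend instance saw the same usage totals; degrading before hard-blocking is what kept Monday-morning spikes from turning into either runaway bills or hard failures.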

Observability for AI Output Quality

🔴 Challenge:

Traditional monitoring told us if services were up, but not if the AI was generating helpful emails vs. garbage.

🟢 Solution:

Integrated Gantry for AI-specific observability — tracking output quality metrics, detecting prompt injection attempts, and monitoring for model drift. Built dashboards that product could use to understand AI performance.

Gantry · BigQuery · Mixpanel · Custom Metrics
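To make "helpful vs. garbage" measurable, outputs were scored with cheap heuristics before the results were shipped to the metrics pipeline. The specific checks below are illustrative examples of the kind of signal involved, not the actual production scorers:

```typescript
// Sketch: heuristic quality signals computed per generated email,
// emitted as custom metrics alongside Gantry's own tracking.
interface QualitySignals {
  empty: boolean;       // model returned nothing usable
  tooShort: boolean;    // suspiciously brief for an email body
  repetitive: boolean;  // crude repetition check: one word dominates
}

function scoreOutput(text: string): QualitySignals {
  const words = text.trim().split(/\s+/).filter(Boolean);
  const counts = new Map<string, number>();
  for (const w of words) counts.set(w, (counts.get(w) ?? 0) + 1);
  const maxRepeat = Math.max(0, ...counts.values());
  return {
    empty: words.length === 0,
    tooShort: words.length > 0 && words.length < 5,
    repetitive: words.length >= 5 && maxRepeat / words.length > 0.3,
  };
}
```

Aggregated over time, even crude signals like these surface drift: if the "repetitive" rate doubles after a prompt change or a provider-side model update, the dashboard shows it before support tickets do.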

$ cat details.md

The Early LLM Days

Flowrite was building with LLMs before ChatGPT made them mainstream. When I joined in mid-2022, we were among a handful of companies globally shipping production LLM products to real users.

This meant no playbook — we had to figure out best practices for:

  • Prompt engineering at scale
  • LLM cost management
  • AI observability and monitoring
  • User experience for generative AI

Architecture Deep Dive

The Request Flow

User Types Email Context → Chrome Extension
    GraphQL API (Hasura) → TypeScript Backend
    AI Service Layer → LLM Provider Selection
    OpenAI/Cohere → Streaming Response
    Chrome Extension → Real-time Display

Multi-Provider LLM Strategy

We couldn’t afford to be locked into one provider:

  1. OpenAI for complex, nuanced emails requiring high quality
  2. Cohere for simpler, high-volume suggestions with lower latency
  3. Intelligent Router that classified email complexity and routed accordingly

This saved us 40-50% on LLM costs while maintaining quality.
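The router's core decision can be sketched as a small classifier. The heuristic here (length plus tone keywords) is an assumed simplification for illustration; the production classifier was more involved:

```typescript
// Sketch: route each request to the provider that fits its complexity.
type Provider = "openai" | "cohere";

// Illustrative signals that an email needs careful, nuanced wording.
const NUANCE_HINTS = ["negotiate", "apolog", "sensitive", "legal", "decline"];

function routeProvider(emailContext: string): Provider {
  const text = emailContext.toLowerCase();
  const nuanced = NUANCE_HINTS.some((hint) => text.includes(hint));
  const long = text.split(/\s+/).filter(Boolean).length > 60;
  // Complex or delicate emails → higher-quality model;
  // routine short replies → the cheaper, lower-latency path.
  return nuanced || long ? "openai" : "cohere";
}
```

The key design property is that the expensive model is the exception, not the default: the bulk of traffic (short confirmations, scheduling replies) rides the cheap path, which is where the 40-50% savings came from.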

The Observability Stack

AI systems fail differently than traditional software. We built comprehensive monitoring:

  • Gantry for LLM-specific metrics (output quality, prompt effectiveness)
  • Mixpanel/Segment for user behavior and feature adoption
  • BigQuery for deep analytics on AI performance
  • Tableau dashboards for business stakeholders

Startup Lessons Learned

Speed vs. Reliability Trade-offs

At a startup, you can’t over-engineer. I learned to ship fast while maintaining just enough reliability to not lose customer trust.

AI Observability is Non-Negotiable

You can’t improve what you can’t measure. Investing early in AI-specific monitoring paid dividends as we scaled.

Cost Management is a Feature

For an AI startup, controlling LLM costs is as important as shipping features. This became a core competency.

The Acquisition

Flowrite’s success attracted MailMerge in 2024. The technical foundation we built — scalable AI backend, cost-efficient LLM orchestration, robust Chrome extension — made the product attractive for acquisition.

This validated my belief that solid engineering fundamentals matter even in fast-moving AI startups.


Technologies Used: TypeScript, Node.js, FastAPI, GraphQL, gRPC, OpenAI, Redis, RabbitMQ, Prompt Engineering

Similar Roles: AI Backend Lead at Anaqua | Founder at Sparrow Intelligence

$ ls -la case-studies/