ai / ml
i use ai / ml when model behavior meaningfully changes the workflow, not when a chatbot veneer is the whole pitch.
the kind of work i do here
- wiring retrieval and generation flows into concrete operator or analyst tasks
- building agent-style systems where tool use, context passing, and guardrails matter more than demo flash
- adding ml-assisted features that need review loops, fallbacks, and practical debugging paths
scope: this covers prompt orchestration, retrieval pipelines, and model-driven product features — the work where the ai layer is doing something real, not decorative.
flagship highlights
collection curator api
an Express, Apollo, and FastAPI system for exploring ai-assisted analytics curation with graphql queries, python endpoints, and guarded service-to-service calls inside one real api surface.
problem: the team needed to test whether an ai-assisted curation workflow could live inside a product-grade api stack instead of as a disconnected prototype.
role: i helped shape the service boundaries and workflow so model-backed experimentation could sit next to auth, validation, database, and data-source concerns without becoming a toy app.
constraints:
- the system had to support both conventional api work and ai-specific behavior in the same service boundary.
- node and python concerns needed to cooperate without exposing the python surface directly.
- the experiment still needed real auth, validation, and data plumbing because the goal was learning what would hold up in a product setting.
decisions:
- paired Express and Apollo for the main api surface while keeping FastAPI as a reverse-proxied secondary service for python-heavy work.
- used Prisma, Redis pub/sub, and Cognito-backed middleware so the ai exploration lived inside normal backend discipline.
- treated the project as an investigation of the next product analytics api iteration rather than a throwaway chat demo.
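the first decision above — one node-facing surface with the python service reverse-proxied behind it — can be sketched with node's stdlib alone. this is a minimal stand-in, not the real service: the actual system used Express middleware, and the ports, prefix, and loopback-only exposure here are assumptions for illustration.

```typescript
import * as http from "http";

const PY_PORT = 8792; // where the "python" service listens, loopback only

// stand-in for the FastAPI secondary service; never exposed to clients directly
const pyService = http.createServer((req, res) => {
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ service: "py", path: req.url }));
});

// main api surface: requests under /py/ are reverse-proxied to the secondary
// service with the prefix stripped, so clients only see the node boundary
const PY_PREFIX = "/py";
const mainApi = http.createServer((req, res) => {
  if (req.url && req.url.startsWith(PY_PREFIX + "/")) {
    const upstreamReq = http.request(
      {
        host: "127.0.0.1",
        port: PY_PORT,
        path: req.url.slice(PY_PREFIX.length),
        method: req.method,
        headers: req.headers,
      },
      (upstreamRes) => {
        res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
        upstreamRes.pipe(res);
      }
    );
    req.pipe(upstreamReq);
    return;
  }
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ service: "node", path: req.url }));
});
```

the payoff of this shape is that auth, validation, and logging middleware only need to exist once, on the node boundary, and the python surface stays private.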
outcomes:
- proved a more capable architecture for ai-assisted curation than the existing AppSync path alone.
- made it easier to test retrieval, subscriptions, and mixed-language service behavior in one place.
- gave the team a concrete sandbox for seeing where model-backed product workflows were promising and where they added complexity.
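the subscription testing mentioned above leaned on Redis pub/sub. as a sketch of the fan-out shape only — swapping redis for an in-process emitter so the snippet runs standalone, with the channel name and payload invented for illustration:

```typescript
import { EventEmitter } from "events";

// in-process stand-in for redis pub/sub: named channels fan out to every
// subscriber, which is the shape graphql subscriptions consume downstream
class ChannelPubSub {
  private bus = new EventEmitter();

  publish(channel: string, payload: unknown): void {
    this.bus.emit(channel, payload);
  }

  // returns an unsubscribe function, like closing a subscription
  subscribe(channel: string, handler: (payload: unknown) => void): () => void {
    this.bus.on(channel, handler);
    return () => this.bus.off(channel, handler);
  }
}

// two consumers see the same curation event without knowing about each other
const pubsub = new ChannelPubSub();
const seen: string[] = [];
const stopA = pubsub.subscribe("curation.updated", (p) => seen.push(`a:${(p as any).id}`));
pubsub.subscribe("curation.updated", (p) => seen.push(`b:${(p as any).id}`));
pubsub.publish("curation.updated", { id: "c1" });
stopA();
pubsub.publish("curation.updated", { id: "c2" });
// seen is now ["a:c1", "b:c1", "b:c2"]
```

moving the bus out of process (to redis) is what lets the node and python services publish into the same channels.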
stack:
- Express
- Apollo GraphQL
- Prisma
- PostgreSQL
- Snowflake
- Redis pub/sub
- FastAPI
- AWS Cognito
proof: repo
mcp demo
a small demo that made MCP-style user actions and agent actions tangible by showing both modes in a working interface instead of in abstract slides.
problem: people could talk about tool-using agents all day, but it was hard to judge the value until there was a concrete example of how user-driven and agent-driven actions actually felt.
role: i turned the concept into a demo surface that made the interaction model easy to inspect, explain, and share outside a meeting.
constraints:
- the demo needed to stay simple enough to understand quickly while still proving something real about MCP-style actions.
- it had to show both user-driven and agent-driven behavior instead of collapsing them into the same vague flow.
- because it was a demo, the explanation layer mattered almost as much as the underlying mechanics.
decisions:
- kept the scope tight around one concrete demo instead of padding it with unrelated agent features.
- made the two interaction modes visible so people could compare direct user action with delegated agent action.
- used a live demo link and repo as proof so the idea stayed inspectable outside a slide deck.
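the second decision — keeping the two interaction modes visible and comparable — comes down to routing both through one tool registry. this is a hypothetical sketch of that idea, not the demo's code: the tool name, registry, and function names are invented, and the real demo used the MCP SDK rather than a hand-rolled map.

```typescript
// hypothetical tool registry: the same tool definition backs both modes,
// which is the MCP-style point the demo made visible
type Tool = { name: string; run: (args: Record<string, unknown>) => string };

const tools = new Map<string, Tool>([
  ["search_docs", { name: "search_docs", run: (a) => `results for ${a.query}` }],
]);

// user-driven: the ui invokes the tool directly on a click
function userAction(toolName: string, args: Record<string, unknown>): string {
  const tool = tools.get(toolName);
  if (!tool) throw new Error(`unknown tool: ${toolName}`);
  return tool.run(args);
}

// agent-driven: the model emits a tool call; the host resolves it against the
// same registry, so delegation and direct use stay directly comparable
function agentAction(toolCall: { name: string; arguments: Record<string, unknown> }): string {
  return userAction(toolCall.name, toolCall.arguments);
}
```

because both paths end in the same `run`, comparing the modes becomes a question of who decided to call the tool, not what the tool does — which is the tradeoff the demo was built to surface.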
outcomes:
- gave stakeholders a working example of MCP-style interaction instead of a conceptual pitch.
- made the tradeoffs between user-driven and agent-driven actions much easier to discuss.
- served as a small but credible proof point that agent tooling could be grounded in product behavior.
stack:
- React
- Mantine
- Express
- LangChain
- LangGraph
- MCP SDK
- Socket.io
- AWS Bedrock
- Sentry
supporting work
bedrock utilities in datalabs api
Bedrock-backed retrieve, converse, and knowledge-base helpers wired into a larger production API surface.
proof: repo
overlap: the consuming data workflows live in analytics.
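for the converse helper, the request-shaping half can be sketched without credentials. only the message shape here (role plus content blocks, `inferenceConfig`) follows bedrock's Converse API; `buildConverseInput`, its defaults, and the model id are hypothetical, and the actual aws sdk call the real helpers wrap is omitted.

```typescript
// message shape per bedrock's Converse API: a role plus a list of content
// blocks. the helper itself is a hypothetical stand-in for the real utilities.
type ContentBlock = { text: string };
type Message = { role: "user" | "assistant"; content: ContentBlock[] };

function buildConverseInput(modelId: string, history: Message[], userText: string) {
  return {
    modelId,
    // append the new user turn without mutating the caller's history
    messages: [...history, { role: "user" as const, content: [{ text: userText }] }],
    inferenceConfig: { maxTokens: 512, temperature: 0.2 }, // illustrative defaults
  };
}

const input = buildConverseInput(
  "anthropic.claude-3-haiku-20240307-v1:0",
  [],
  "summarize this table"
);
```

keeping the payload builder pure like this is what makes bedrock-backed helpers testable inside a larger api surface without hitting the model.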
nearby domains
when a project crosses boundaries, it usually lands closest to analytics or developer experience.