back home

analytics

i build analytics systems when the real bottleneck is understanding the business, not collecting one more table.

the kind of work i do here

  • building dashboards and reports that ops teams actually use every morning
  • designing measurement layers that make trends, exceptions, and drill-downs obvious before anyone ends up flying blind
  • data-heavy interfaces where query depth, filters, and clarity matter as much as frontend polish

scope: this covers the reporting, measurement, and data-trust side of products — the work where the main challenge is making the numbers useful, not building the platform underneath.

flagship highlights

web portal

the main operator-facing reporting surface for drilling into product data, narrowing with filters, and taking follow-up action without bouncing between separate tools.

problem: teams needed one place to inspect product data and answer day-to-day questions without turning every investigation into an ad hoc query or spreadsheet detour.

role: i shaped the reporting workflow and translated the core business questions into filters, views, and drill-down paths people could use in real work.

constraints:

  • the interface had to support quick checks and deeper investigation without collapsing into a generic dashboard shell.
  • data trust mattered because the page was meant to drive follow-up actions, not just passive reading.
  • operators needed the reporting context and the next step close together instead of split across multiple tools.

decisions:

  • centered the portal on operator workflows instead of a chart-first dashboard layout.
  • paired summary views with filter and drill-down paths so people could move from signal to record-level context without losing their place.
  • kept follow-up actions near the reporting surface so the portal could help resolve questions instead of only describing them.
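the "move from signal to record-level context without losing your place" behavior can be sketched as a small reducer over filter and drill-down state. this is a minimal sketch with hypothetical names (`ViewState`, `drillPath`), not the portal's actual Redux code:

```typescript
// sketch only: models summary filters plus a drill-down breadcrumb
// so changing a filter never throws away the operator's position.

type Filters = Record<string, string>;

interface ViewState {
  filters: Filters;    // active filters on the summary view
  drillPath: string[]; // breadcrumb of record-level drill-downs
}

type Action =
  | { type: "setFilter"; key: string; value: string }
  | { type: "drillInto"; recordId: string }
  | { type: "drillUp" };

function reducer(state: ViewState, action: Action): ViewState {
  switch (action.type) {
    case "setFilter":
      // adjusting a filter keeps the drill path intact, so the
      // investigation context survives the change
      return {
        ...state,
        filters: { ...state.filters, [action.key]: action.value },
      };
    case "drillInto":
      return { ...state, drillPath: [...state.drillPath, action.recordId] };
    case "drillUp":
      return { ...state, drillPath: state.drillPath.slice(0, -1) };
  }
}
```

the point of the shape is that filters and drill position are independent slices of one state object, so summary views and record views can read from the same place.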

outcomes:

  • gave the team one reporting home for exploring product data, filters, and exceptions.
  • cut down the back-and-forth between raw data pulls and separate operational tools during investigations.
  • became the clearest analytics proof point in the portfolio because the reporting depth was tied to real day-to-day use.

stack:

  • React
  • Redux
  • Apollo GraphQL
  • D3
  • visx
  • AWS Amplify
  • Vite
  • Sentry
  • Umami

superset on stargazer

a repeatable path for running Apache Superset on the shared EKS platform so teams could publish dashboards without kicking off a separate platform project first.

problem: teams needed dashboarding, but standing up a one-off analytics environment each time would have turned a normal reporting ask into avoidable platform work.

role: i mapped the existing cluster and release rails into a usable analytics deployment path so dashboarding could fit the same platform standards that carry the rest of the stack.

constraints:

  • it had to fit a shared EKS environment instead of assuming a single-purpose analytics cluster.
  • analysts needed a publishable dashboard path without becoming Kubernetes specialists.
  • the setup had to be repeatable enough that each new dashboard request did not restart the same platform conversation from zero.

decisions:

  • used the existing stargazer application rail instead of carving out a special-case hosting path.
  • optimized for a documented repeatable deployment shape rather than a fragile manual install.
  • kept dashboard authoring separate from lower-level cluster concerns so the analytics workflow stayed approachable.
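a "documented repeatable deployment shape" in practice usually reduces to a pinned values file for a chart-based install. a minimal sketch, assuming the public apache/superset Helm chart rather than the actual stargazer rail config (the keys and comments here are illustrative assumptions):

```yaml
# illustrative values fragment for the apache/superset Helm chart --
# not the real platform config
postgresql:
  enabled: false   # point metadata at a managed database instead of in-cluster
redis:
  enabled: true    # cache and async-query backend
ingress:
  enabled: true    # publish through the platform's existing ingress path
configOverrides:
  secrets: |
    # SECRET_KEY and database URI injected from the platform secret store
```

keeping the values file in version control is what turns each new dashboard request into a redeploy instead of a fresh platform conversation.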

outcomes:

  • made Apache Superset a realistic option on the shared platform instead of a theoretical someday tool.
  • shortened the path from 'we need a dashboard' to a working deployment shape.
  • proved that analytics tooling could ride the same release and platform standards as the rest of the stack.

stack:

  • Apache Superset
  • AWS
  • EKS

supporting work

umami

a self-hosted analytics deployment in AWS so baseline measurement stayed inside our own stack and remained easy to inspect.

overlap: the hosting path leans on infrastructure.

when a project crosses boundaries, it usually lands closest to product or ai / ml.