"The PM who holds the system view when organizational pressure says to look away. Five programs. One pattern: the gap between what each team owns and what the system does is where the risk lives — and where I work."
Five programs. Five companies. Five domains. One through-line: the gap between what each team owned and what the system actually did — that's where the risk lived. In every case, I closed it before production.
| What I Bring | What Proves It |
|---|---|
| ✦ System-level ownership | Sole integrated view across three interdependent programs on a fixed regulatory deadline: the view no individual program team held |
| ✦ High-stakes judgment under pressure | Three holds in Flagship I, each made against named organizational opposition, each correct in production. Zero post-launch remediation |
| ✦ ML and AI product governance | Owned precision/recall targets, signal contracts, drift thresholds, and retraining triggers as first-class product requirements before Sprint 1 of build |
| ✦ The metric that matters | Replaced approval rate with completion rate, making eight figures of invisible revenue visible without shipping a single new feature |
| ✦ Cross-functional authority without authority | Aligned engineering, risk, operations, compliance, and ML teams across all five programs with no direct reporting authority over any of them |
| ✦ Regulated environment delivery | SOX, PCI-DSS, and a regulatory-committed deadline governing $1.9T in assets, met in full with zero remediation |
| ✦ Compounding architecture over feature lists | Reframed AI personalization from "ship features" to "build a learning system"; the +65% gain in progression kept improving after launch because the architecture was designed to improve |
The largest brokerage integration in industry history. One team. One fixed regulatory deadline. Zero disruption on day one.
| Stage | System Dependency | Failure Consequence | My Ownership |
|---|---|---|---|
| Authentication | Identity layer + session continuity | Clients locked out of all downstream systems | Defined auth success criteria across both platforms |
| Account Mapping | Position, permission, preference sync | Incorrect balances, missing positions, wrong access | Mapping validation gates before any wave proceeded |
| Fund Transfer | ACH, Wire, Zelle continuity | Funds inaccessible or in indeterminate state | Failure behavior requirements for every transfer state |
| Trading Access | Thinkorswim platform parity | Active traders unable to execute | Trading readiness criteria and go/no-go gates |
| Client Comms | CRM + journey state alignment | Incorrect messaging → support volume spike | Comms readiness as launch gate — not parallel track |
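The gates themselves were deliberately simple; what mattered was that no wave moved until every stage passed, operations included. A minimal Python sketch of what a wave-level go/no-go check might look like (the names and structure are illustrative, not the program's actual tooling):

```python
from dataclasses import dataclass

@dataclass
class GateResult:
    stage: str    # e.g. "Authentication", "Client Comms"
    passed: bool  # did the stage meet its validation criteria?
    detail: str   # operator-readable reason, logged either way

def wave_go_no_go(results: list[GateResult]) -> bool:
    """A wave proceeds only if every stage gate passes.

    One failed gate holds the whole wave: there is no path through
    that is engineering-complete but not operationally ready.
    """
    failures = [r for r in results if not r.passed]
    for f in failures:
        print(f"HOLD: {f.stage}: {f.detail}")
    return not failures

# Illustrative check for one wave. Client comms is a launch gate,
# not a parallel track, so it can hold the wave on its own.
wave = [
    GateResult("Authentication", True, "auth criteria met on both platforms"),
    GateResult("Account Mapping", True, "mapping validation complete"),
    GateResult("Fund Transfer", True, "failure behavior verified per transfer state"),
    GateResult("Trading Access", True, "thinkorswim parity confirmed"),
    GateResult("Client Comms", False, "journey states not yet aligned with CRM"),
]
assert wave_go_no_go(wave) is False  # wave holds until comms is ready
```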
| Metric | Result | What Drove It |
|---|---|---|
| Client disruption on day one | Zero | Go/no-go readiness gates held two waves until operationally ready — not just engineering complete |
| Assets migrated | $1.9T | Phased wave execution with explicit validation criteria at each stage |
| Accounts transitioned | 17M | Full account mapping validation — 340,000 edge cases caught and resolved pre-launch |
| Platform ranking | #1 & #2 J.D. Power 2024 | Zero disruption maintained client trust through transition |
| Post-launch remediation | Zero | Three holds — each against pressure — each correct |
Designing the ML system that decides — in under one second, at $200B+ payments volume — whether a device can be trusted to move money that cannot be recalled.
| Layer | Function | Latency Budget | What I Owned |
|---|---|---|---|
| L1 — Device Intelligence | Fingerprinting, behavior tracking, anomaly detection | <50ms | Defined the 30+ signals that indicate device trust vs. noise |
| L2 — Signal Processing | Normalization, weighting, quality scoring | <50ms | Signal quality thresholds and fallback logic |
| L3 — ML Engine | Fraud scoring, composite trust score 0–100 | <200ms p99 | Precision/recall targets + biweekly retraining governance |
| L4 — Decision Layer | Allow / Step-Up / Block logic | <10ms | Threshold definition by transaction type, amount, device history |
| L5 — Payment Execution | Zelle transaction authorization | — | Payment-layer acceptance criteria + failure recovery |
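To make concrete what L4 ownership means, here is a minimal sketch of context-aware threshold logic. The values are hypothetical, not production numbers; the real thresholds varied by transaction type, amount, and device history, but the shape is the same:

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    STEP_UP = "step_up"  # additional authentication challenge
    BLOCK = "block"

def decide(trust_score: float, amount: float, known_device: bool) -> Decision:
    """Map a composite 0-100 trust score to an action.

    Threshold values here are hypothetical. The point is that they
    are context-aware: a new device moving a large amount faces a
    higher bar than a known device moving a small one.
    """
    step_up_floor, allow_floor = 40.0, 70.0
    if amount > 2_500:   # large transfer: raise both bars
        step_up_floor += 15
        allow_floor += 15
    if known_device:     # trusted device history: relax the allow bar
        allow_floor -= 10

    if trust_score >= allow_floor:
        return Decision.ALLOW
    if trust_score >= step_up_floor:
        return Decision.STEP_UP
    return Decision.BLOCK

# Same score, different contexts, different outcomes:
assert decide(72, amount=50, known_device=True) is Decision.ALLOW
assert decide(72, amount=9_000, known_device=False) is Decision.STEP_UP
```

Because the thresholds live in the decision layer rather than inside the model, tuning them is a product governance action rather than a retrain; that separation is what the results below rest on.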
| Metric | Result | What Drove It |
|---|---|---|
| Fraud rate | -15% | Context-aware scoring — not global threshold tightening |
| Auth success rate | 95% | Dynamic thresholds stopped blocking legitimate transactions from trusted devices |
| Friction for trusted users | Zero added | Allow / Step-Up / Block applied proportionally to risk — not uniformly |
| False positive rate | 2.4% → 0.8% | Biweekly PM-governed retraining — not automated |
| Rollback capability | <60 seconds | PM-controlled governance — no code deploy required |
| Cross-functional alignment | 55% → 90% | Structured decision forums replaced status meetings |
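The sub-60-second rollback follows from the same separation. If thresholds are versioned runtime configuration that the decision layer reads on each evaluation, reverting is a pointer change, not a deploy. A simplified in-process sketch of the idea (a real system would use a config service with an audit trail; every name here is hypothetical):

```python
# Versioned threshold sets, read by the decision layer on each evaluation.
THRESHOLD_VERSIONS = {
    "v41": {"allow_floor": 70.0, "step_up_floor": 40.0},
    "v42": {"allow_floor": 75.0, "step_up_floor": 45.0},  # latest tuning
}

active_version = "v42"

def rollback(to_version: str) -> None:
    """Revert the decision layer to a prior threshold set.

    Nothing ships: because the decision layer reads the active
    version at evaluation time, the revert is effective immediately.
    """
    global active_version
    if to_version not in THRESHOLD_VERSIONS:
        raise ValueError(f"unknown threshold version: {to_version}")
    active_version = to_version

rollback("v41")  # effective in seconds; no deploy pipeline involved
```

The property that matters is that reverting is a governance action, not an engineering release.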
$200B+ in payments governed. 500K learners served. 1M+ users across 25 countries unified. Zero post-launch rollbacks across all three programs.
I owned the product definition, signal contracts, and model KPIs for every layer — from behavioral data (L1) through signal processing (L2), ML engine governance (L3), decision layer thresholds (L4), and product experience acceptance criteria (L5). Every decision traced back to this architecture.
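A signal contract, in this sense, pins down ownership, freshness, quality, and fallback behavior for each L1 signal before the model ever consumes it. A minimal sketch of what one such contract could capture (fields and values are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignalContract:
    name: str                 # e.g. "device_fingerprint_confidence"
    producer: str             # team accountable for the signal
    max_staleness_ms: int     # freshness bound before the signal is distrusted
    min_quality_score: float  # below this, exclude the signal from scoring
    fallback: str             # defined behavior when the signal is degraded

# Illustrative contract: when fingerprint confidence degrades, the
# system treats the device as unknown instead of silently trusting it.
fingerprint = SignalContract(
    name="device_fingerprint_confidence",
    producer="device-intelligence",
    max_staleness_ms=50,
    min_quality_score=0.8,
    fallback="treat device as unknown; route to step-up",
)
```

Making these contracts explicit before Sprint 1 is what let model KPIs be owned as product requirements rather than discovered in production.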
Launched in controlled cohorts instead of all 25+ markets simultaneously. The cohort approach exposed three critical regional dependencies before global rollout: billing-cycle handling, CRM field mapping, and CS escalation logic all differed by region. Any one of them would have forced an emergency rollback if discovered post-launch. None did.
View Full Case Study →

Five governing principles built from 18 years in environments where the cost of failure is measured in billions, not NPS points.
Every tradeoff, every decision, every sprint backlog, every outcome — documented in full. Not a highlight reel.