Senior Product Manager · FinTech · AI/ML · Platform Systems

Andrés
Garcia.

18+ Years · Regulated Financial Platforms · Real-Time Payments · ML Governance

"The PM who holds the system view when organizational pressure says to look away. Five programs. One pattern: the gap between what each team owns and what the system does is where the risk lives — and where I work."

$1.9T
Assets Migrated · Zero Disruption
$200B+
Annual Payments Volume
33M+
Users Served
Zero
Post-Launch Rollbacks
18 yrs
Product Leadership
01 Portfolio Synthesis
What These Five
Case Studies Prove.

Five programs. Five companies. Five domains. One through-line: the gap between what each team owned and what the system actually did — that's where the risk lived. In every case, I closed it before production.

"The decisions that changed these outcomes were not heroic. They were correct. Heroic decisions get made in crisis. Correct decisions get made before the crisis arrives — which is why there was no crisis."
What I Bring · What Proves It
System-level ownership · Sole integrated view across three interdependent programs on a fixed regulatory deadline — the view no individual program team held
High-stakes judgment under pressure · Three holds in Flagship I, each made against named organizational opposition, each correct in production. Zero post-launch remediation
ML and AI product governance · Owned precision/recall targets, signal contracts, drift thresholds, and retraining triggers as first-class product requirements before Sprint 1 of build
The metric that matters · Replaced approval rate with completion rate — made eight figures of invisible revenue visible without shipping a single new feature
Cross-functional authority without authority · Aligned engineering, risk, operations, compliance, and ML teams across all five programs with no direct reporting authority over any of them
Regulated environment delivery · SOX, PCI-DSS, and a regulatory-committed deadline governing $1.9T in assets — met in full, zero remediation
Compounding architecture over feature lists · Reframed AI personalization from "ship features" to "build a learning system" — +65% progression continued improving after launch because the architecture was designed to improve
Core Banking · Real-Time Payments · ACH · Wire · Zelle · ML Governance · Platform Migrations · SOX · PCI-DSS · AI Systems · Fraud Prevention · Regulatory Delivery · System Architecture
02 Flagship Case Study I · Charles Schwab
★ Flagship · Charles Schwab · 2021–2023

Schwab Mobile + Thinkorswim Integration
+ $1.9T Client Migration

The largest brokerage integration in industry history. One team. One fixed regulatory deadline. Zero disruption on day one.

$1.9T
Assets Migrated
17M
Accounts · Zero Errors
#1 & #2
J.D. Power 2024
Zero
Post-Launch Remediation
3
Simultaneous Programs
⚡ 60-Second Read
Stakes
$1.9T in assets, 17M accounts, fixed regulatory deadline — no extension, no partial success, no fallback. Failure in front of a regulatory body and 17 million clients simultaneously.
Decision
Held three migration waves against engineering sign-off and C-suite velocity pressure. Maintained the integrated system view across three interdependent programs when organizational pressure pushed toward siloed tracking.
Outcome
Zero disruption to trading or fund movement on day one. Zero post-launch remediation across all three programs. Schwab ranked #1 and #2 J.D. Power 2024.
"A migration at this scale is not a data move. It is a trust transfer — executed once, on a fixed date, in front of 17 million people who depend on uninterrupted access to their money."
Program Outcomes — Measured Results
Critical Path Architecture — What I Owned
Stage · System Dependency · Failure Consequence · My Ownership
Authentication · Identity layer + session continuity · Clients locked out of all downstream systems · Defined auth success criteria across both platforms
Account Mapping · Position, permission, preference sync · Incorrect balances, missing positions, wrong access · Mapping validation gates before any wave proceeded
Fund Transfer · ACH, Wire, Zelle continuity · Funds inaccessible or in indeterminate state · Failure behavior requirements for every transfer state
Trading Access · Thinkorswim platform parity · Active traders unable to execute · Trading readiness criteria and go/no-go gates
Client Comms · CRM + journey state alignment · Incorrect messaging → support volume spike · Comms readiness as launch gate — not parallel track
The Three Decisions That Changed the Outcome
Decision 01 — Against Engineering Sign-Off
Held Wave 2 on Account Mapping Validation
Pressure: Engineering had marked account mapping complete. Engineering VP was ready to proceed.
I identified an unresolved edge case in non-standard position structures affecting ~2% of accounts — 340,000 clients at 17M account scale. I held the wave, defined explicit validation criteria, and required full coverage before proceeding. The engineering VP disagreed. Decision went to C-suite. I presented the math: 340,000 clients unable to see correct balances on day one is not an edge case — it is the headline.
→ Wave 2 held. Zero mapping failures in production.
Decision 02 — Against Program Schedule
Required Communications Readiness as a Launch Gate
Pressure: Program management treated client communications as a parallel workstream. Program director wanted to launch on technical readiness.
A technically successful migration that sends incorrect status messaging at scale creates a support crisis that, from the client's perspective, is indistinguishable from a failed migration. This delayed one wave by six days. I held the gate.
→ Prevented a support volume spike during the most critical post-launch window.
Decision 03 — Against C-Suite Velocity Pressure
Maintained Integrated System View When Pressure Pushed Toward Siloed Tracking
Pressure: C-suite wanted visible progress on each program independently at Week 8.
Accelerating thinkorswim milestones without the account mapping dependency doesn't reduce risk — it creates the illusion of progress while accumulating hidden dependency risk that surfaces on launch day at 17M account scale. Holding the integrated view created friction at the executive level. It is also the reason the program launched without a single day-one disruption.
→ Zero day-one disruption to trading or fund movement for 17M accounts.
Outcome
Metric · Result · What Drove It
Client disruption on day one · Zero · Go/no-go readiness gates held two waves until operationally ready — not just engineering complete
Assets migrated · $1.9T · Phased wave execution with explicit validation criteria at each stage
Accounts transitioned · 17M · Full account mapping validation — 340,000 edge cases caught and resolved pre-launch
Platform ranking · #1 & #2 J.D. Power 2024 · Zero disruption maintained client trust through transition
Post-launch remediation · Zero · Three holds — each against pressure — each correct
Full Case Study + 40-Slide Execution Record
03 Flagship Case Study II · USAA
★ Flagship II · USAA · 2024–2025

Trusted Device Verification (TDV):
Real-Time AI Trust Architecture

Designing the ML system that decides — in under one second, at $200B+ payments volume — whether a device can be trusted to move money that cannot be recalled.

-15%
Fraud Reduction
95%
Auth Success Rate
Zero
Added Friction
<1s
Decision Latency
0.8%
False Positive Rate
"Authentication tells you who the user is. TDV answers the harder question: can this device be trusted, right now, to move money that cannot be reversed?"
⚠ Before TDV
False Positive Rate · 2.4%
Auth Success Rate · ~82%
Fraud Rate · Baseline
Decision Model · Global thresholds
Trusted User Friction · Present
✓ After TDV (12 Sprints)
False Positive Rate · 0.8%
Auth Success Rate · 95%
Fraud Rate · -15%
Decision Model · Context-aware ML
Trusted User Friction · Zero added
TDV Performance — 12 Sprints
System Architecture — Five Layers, One Decision in <1 Second
Layer · Function · Latency Budget · What I Owned
L1 — Device Intelligence · Fingerprinting, behavior tracking, anomaly detection · <50ms · Defined 30+ trust signals vs. noise
L2 — Signal Processing · Normalization, weighting, quality scoring · <50ms · Signal quality thresholds and fallback logic
L3 — ML Engine · Fraud scoring, composite trust score 0–100 · <200ms p99 · Precision/recall targets + biweekly retraining governance
L4 — Decision Layer · Allow / Step-Up / Block logic · <10ms · Threshold definition by transaction type, amount, device history
L5 — Payment Execution · Zelle transaction authorization · Payment-layer acceptance criteria + failure recovery
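The layered flow above can be pictured as a chain of budget-checked stages. Here is a minimal sketch of that shape; every signal name, weight, threshold, and function body is an invented illustration, not the production TDV system:

```python
import time

# Per-layer latency budgets in ms, from the table above (L5 has no published budget).
BUDGETS_MS = {"device_intel": 50, "signal_proc": 50, "ml_engine": 200, "decision": 10}

def timed(layer, fn, *args):
    """Run one layer and flag a latency-budget breach."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > BUDGETS_MS[layer]:
        print(f"budget breach: {layer} took {elapsed_ms:.1f}ms")
    return result

def device_intel(raw):     # L1: collect device signals (two toy signals here)
    return {"device_age_days": raw["device_age_days"], "geo_jump": raw["geo_jump"]}

def signal_proc(signals):  # L2: normalize signals into model features
    return [min(signals["device_age_days"] / 365, 1.0),
            1.0 if signals["geo_jump"] else 0.0]

def ml_engine(features):   # L3: composite trust score on a 0-100 scale
    return max(0.0, min(100.0, 80 * features[0] - 40 * features[1] + 20))

def decision(score):       # L4: Allow / Step-Up / Block by threshold
    if score >= 70:
        return "ALLOW"
    if score >= 40:
        return "STEP_UP"
    return "BLOCK"

def evaluate(raw_event):
    sig = timed("device_intel", device_intel, raw_event)
    feats = timed("signal_proc", signal_proc, sig)
    score = timed("ml_engine", ml_engine, feats)
    return timed("decision", decision, score)
```

A two-year-old device with no location anomaly scores high and is allowed; a brand-new device with a geographic jump scores low and is blocked, without any step in the chain exceeding its budget.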
ML Governance as Product Requirement — Not Engineering Parameter
Governance Principle 01
Thresholds are PM-Owned, Not Hardcoded
Allow / Step-Up / Block thresholds are configurable via admin interface — no code deploy required. Every change writes an immutable audit log. Rollback to any prior threshold in under 60 seconds. This is not a convenience feature — it is a regulatory requirement. A model requiring a code deploy to adjust fraud thresholds cannot respond to a fraud pattern shift within a business day.
→ Full rollback capability in <60 seconds. Zero emergency code deploys required post-launch.
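The pattern described, configurable thresholds backed by an append-only audit log with versioned rollback, could look roughly like this. Class and actor names are invented for illustration; this is not USAA's admin interface:

```python
import copy
import time

class ThresholdStore:
    """PM-configurable Allow/Step-Up/Block thresholds with an append-only
    audit log and rollback to any prior version. Illustrative sketch only."""

    def __init__(self, initial):
        self._log = [{"version": 0, "ts": time.time(), "actor": "init",
                      "thresholds": copy.deepcopy(initial)}]

    @property
    def current(self):
        return self._log[-1]["thresholds"]

    def update(self, actor, **changes):
        """Apply a threshold change; no code deploy, every change logged."""
        new = {**self.current, **changes}
        self._log.append({"version": len(self._log), "ts": time.time(),
                          "actor": actor, "thresholds": new})

    def rollback(self, version, actor):
        """Rollback is itself a logged change, so the log stays append-only."""
        prior = copy.deepcopy(self._log[version]["thresholds"])
        self._log.append({"version": len(self._log), "ts": time.time(),
                          "actor": actor, "thresholds": prior})

store = ThresholdStore({"allow": 70, "step_up": 40})
store.update("pm.garcia", allow=75)   # tighten during a fraud pattern shift
store.rollback(0, "pm.garcia")        # revert in seconds, no deploy
```

Because rollback appends rather than deletes, the audit trail records both the emergency change and its reversal, which is the property a regulator actually asks for.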
Governance Principle 02
Retraining is Triggered, Not Automatic
Drift computed per signal daily. PM notified within 4 hours of a breach. PM investigates before any retraining executes. Automatic retraining without PM review introduces feedback loop degradation invisible in dashboards until the system quietly stops working.
→ False positive rate reduced from 2.4% to 0.8% across 12 sprints — through governance, not threshold tightening.
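A per-signal daily drift check of the kind described can be sketched with the Population Stability Index, one common drift measure; the 0.2 alert threshold and bucket values below are illustrative assumptions, not the system's actual configuration:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two pre-bucketed distributions.
    Higher values mean the signal has drifted further from its baseline."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

def daily_drift_check(baselines, today, alert_at=0.2):
    """Return the signals breaching the drift threshold. In the system
    described above, a breach notifies the PM; it never retrains directly."""
    return [name for name, base in baselines.items()
            if psi(base, today[name]) > alert_at]
```

A signal whose daily distribution matches its baseline scores near zero; one that has shifted materially crosses the alert threshold and gets surfaced for PM investigation before any retraining runs.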
Outcome
Metric · Result · What Drove It
Fraud rate · -15% · Context-aware scoring — not global threshold tightening
Auth success rate · 95% · Dynamic thresholds stopped blocking legitimate transactions from trusted devices
Friction for trusted users · Zero added · Allow / Step-Up / Block applied proportionally to risk — not uniformly
False positive rate · 2.4% → 0.8% · Biweekly PM-governed retraining — not automated
Rollback capability · <60 seconds · PM-controlled governance — no code deploy required
Cross-functional alignment · 55% → 90% · Structured decision forums replaced status meetings
Full Case Study + 48-Slide Execution Record
04 Supporting Case Studies
Same Operating Model.
Different Domains.

$200B+ in payments governed. 500K learners served. 1M+ users across 25 countries unified. Zero post-launch rollbacks across all three programs.

💳
Charles Schwab Payments Platform
Full-Lifecycle Product Ownership
Governing a $200B+ annual payments system where every product decision has an immediate financial consequence and no failure state is graceful
+17%
Revenue
-12%
Fraud Rate
+4%
Approval Rate
99.99%
Uptime
Zero
Duplicate Transactions
"The single most important product decision was a metric change: replacing approval rate with completion rate. At $200B+ volume, the gap between those two numbers was worth eight figures in invisible revenue — for months."
Payments Platform — Outcomes
Three Critical Decisions
Context-aware risk scoring over global thresholds
Security team pushed to tighten thresholds globally. I introduced context-aware scoring differentiated by device trust, transaction history, and amount tier. Same threshold logic applied differently based on signal context.
→ -12% fraud AND +4% approval rate simultaneously — moving in the same direction
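The distinction, same threshold logic, context-dependent application, can be sketched in a few lines. The specific adjustments and tier names are invented for illustration:

```python
def risk_threshold(base, *, device_trusted, amount_tier, history_clean):
    """Same base threshold, adjusted by signal context rather than
    tightened globally. All adjustment values are illustrative."""
    t = base
    if not device_trusted:
        t += 15   # unknown device: require more confidence to approve
    if amount_tier == "high":
        t += 10   # larger amounts carry larger loss if wrong
    if history_clean:
        t -= 10   # established good history earns headroom
    return t
```

With a base of 70, a trusted device on a low-tier transfer with clean history clears at 60, while an unknown device on a high-tier transfer must clear 95. Global tightening would have pushed both cases to 95, which is how fraud and approval rate end up moving in opposite directions.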
Completion quality before feature expansion
Roadmap pressure to add new payment types. I focused all investment on completion rate, error recovery, and retry logic before expanding. No new payment types shipped until existing flows had measurable completion quality gates.
→ +17% revenue — not from new features. From existing flows completing reliably at $200B+ volume.
Defined failure behavior before scaling
Engineering priority was scaling infrastructure. I blocked scaling until failure behavior was explicitly defined — timeout handling, retry idempotency, partial execution recovery.
→ Zero duplicate transaction incidents post-implementation. 99.99% uptime.
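Retry idempotency is the mechanism behind "zero duplicate transactions": a request retried after a timeout must return the recorded result of the first execution rather than execute again. A toy sketch of the pattern, not Schwab's system:

```python
import uuid

class PaymentGateway:
    """Toy gateway that deduplicates by idempotency key, so a client
    retrying after a timeout cannot cause a second execution."""

    def __init__(self):
        self._completed = {}   # idempotency_key -> recorded result

    def submit(self, idempotency_key, amount):
        if idempotency_key in self._completed:
            # Retry of an already-executed request: replay the result.
            return self._completed[idempotency_key]
        result = {"status": "executed", "amount": amount}
        self._completed[idempotency_key] = result
        return result

gw = PaymentGateway()
key = str(uuid.uuid4())
first = gw.submit(key, 250.00)
retry = gw.submit(key, 250.00)   # client timed out and retried
assert retry is first             # one execution, not a duplicate
```

The key is generated by the client before the first attempt, so even a request whose response was lost in transit resolves to exactly one execution.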
View Full Case Study
🤖
LHR Media · AI Learning & Monetization
Building the System That Learns
Reframing a 500K-learner platform from "ship AI features" to "build a system that learns" — and governing the ML architecture that made progression, engagement, and revenue compound simultaneously
+65%
Learner Progression
+25%
Engagement
+35%
Revenue Growth
500K+
Active Users
"Building the learning system before shipping AI features was the most important product decision. Features are copied in months. A system that learns from 500K users takes years to replicate."
AI Learning System Outcomes
The 5-Layer Learning System Architecture

I owned the product definition, signal contracts, and model KPIs for every layer — from behavioral data (L1) through signal processing (L2), ML engine governance (L3), decision layer thresholds (L4), and product experience acceptance criteria (L5). Every decision traced back to this architecture.

Four Operating Decisions
Experimentation infrastructure before AI expansion
No model shipped without a defined experiment to validate it. Test system came before AI system.
→ Every future product decision validated with real behavioral data instead of assumption
Governed ML performance as product requirements
When engagement improved but progression declined, I flagged it as a product failure — not a model success.
→ +65% learner progression from architecture designed to improve — not individual features
Cold-start design before general personalization
40–60% of churn happens during onboarding before the model has any signal. Designed explicitly for new users.
→ Progressive trust model: onboarding flows first, personalization activated incrementally
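A progressive trust model of this kind amounts to gating personalization on accumulated signal. The stage names and cutoffs below are invented to show the shape, not the platform's actual values:

```python
def personalization_level(sessions_completed, signals_collected):
    """Activate personalization incrementally as the model accumulates
    signal about a user. Stage names and cutoffs are illustrative."""
    if sessions_completed < 3 or signals_collected < 10:
        return "curated_onboarding"      # cold start: fixed flows, no model
    if signals_collected < 50:
        return "coarse_personalization"  # cohort-level recommendations
    return "full_personalization"        # individual model-driven paths
```

A brand-new user sees only curated onboarding flows; personalization switches on in stages as the model earns enough signal to be trusted with the decision.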
View Full Case Study
🌍
Global CRM & Lifecycle Unification
Fixing the Handoffs No Dashboard Could See
1M+ users · 25+ countries · Zero post-launch rollbacks — +35% revenue from invisible infrastructure work that never appeared in a demo
+35%
Revenue Growth
+30%
Retention
+25%
Conversion
Zero
Post-Launch Rollbacks
25+
Countries Unified
"The most expensive customer experience failures are the ones no product dashboard can see — because someone downstream is compensating for them. Making those failures visible was the first product decision."
CX Systems — Business Impact
The Four Handoffs Silently Destroying Revenue
Onboarding → CRM: "You're all set" before they could access the product
Lifecycle messaging triggered on form submission, not provisioning confirmation. Every message sent in this window was a trust withdrawal before the relationship began.
→ Fixed: confirmation messaging triggered on provisioning readiness, not form completion
Billing → Access: Ghost customers paying for inaccessible product
Entitlement state and product access out of sync. Operations had built manual workarounds absorbing the failure — invisible in every metric.
→ +25% conversion when access provisioning delays were eliminated
Cohort Validation Before Global Rollout

Launched in controlled cohorts instead of all 25+ markets simultaneously. Cohort approach exposed 3 critical regional dependencies before global rollout — different billing cycle handling, CRM field mapping, and CS escalation logic. Each would have required emergency rollback if discovered post-launch. None did.

View Full Case Study
05 Product Philosophy
Products Are Systems.
Not Features.

Five governing principles built from 18 years in environments where the cost of failure is measured in billions — not NPS points.

01
Define the Failure Mode Before the Feature
Most teams write acceptance criteria that describe what should happen when everything goes right. I require two types: happy path AND failure path. At $200B payment volume, 0.01% wrong behavior means thousands of irreversible financial decisions.
02
Risk is the Real Backlog Filter
Not rank. Not pressure. Not recency. The question preceding every prioritization: what is the financial, regulatory, or trust consequence if this is wrong? Features that cannot answer that question don't make the backlog.
03
Measurement Before Models
You cannot validate whether AI works without a measurement system. Building experimentation infrastructure before AI feature expansion is the sequencing decision most teams get backwards — they ship AI and discover they cannot measure whether it worked.
04
Clarity Drives Velocity — Not Effort
Teams don't slow down from caution — they slow down from ambiguity. Every program I've run had explicit ownership, defined decision rights, and documented go/no-go criteria before Sprint 1. Clarity is not a planning tax. It is the velocity enabler.
05
Capabilities Compound — Features Don't
A feature ships and stops. A capability — experimentation infrastructure, ML governance, signal architecture — compounds. Every future decision becomes faster, more accurate, and more defensible. I prioritize architectural investment that makes the next ten decisions better, not just the next feature. That's why +65% learner progression kept improving after launch. The architecture was designed to improve.
Full Product Philosophy — 7 Operating Principles
06 Complete Execution Evidence
Download the
Full Work.

Every tradeoff, every decision, every sprint backlog, every outcome — documented in full. Not a highlight reel.