Contacts Upload

Constant Contact · PLG Growth

Fixing the upload flow that was quietly killing conversions.

The contact file upload flow was losing users at every step. A false completion signal, noisy mapping UI, and a blocking consent step were all contributing. I redesigned the flow to eliminate those friction points without touching the underlying architecture. Our team A/B tested the redesign across 4,462 users. The results showed the clearest causal chain of all three experiments: better uploads led to more sends, which led to more conversions, for an estimated $215K in annual revenue impact.

Trial-to-paid conversion

+3.0%

Annual revenue impact

+$215K

Email send rate

+2.4%

My Role

  • Product Design
  • UX Strategy
  • Interaction Design

Team

  • PLG Growth
  • Experimentation

Tools

  • Figma
  • Notion
  • Statsig
  • Cursor
  • Snowflake MCP

The Problem

Analytics showed drop-off at every stage, but the most damaging issue was a false signal. After selecting a file, a green checkmark appeared on screen. To users, that looked like confirmation the upload was complete. Many closed the browser and moved on, never knowing they still had mapping and consent steps ahead. The flow had an invisible exit built into it.

Beyond the false completion signal, the mapping interface was visually noisy and hard to parse. Help content was scattered rather than surfaced where it was needed. The consent step appeared as a blocking interruption that broke forward momentum. Error states were silent. Each issue was individually minor. Together, they made completing the flow feel like work.

Redesigning the Upload Flow

The redesign stayed within the existing architecture. I removed the false completion signal and replaced it with progressive disclosure that made the next step explicit. Primary actions after file selection became more visually prominent. The field mapping interface was simplified for clearer column relationships. Help documentation moved from scattered placements to contextual inline guidance. Consent became an inline modal instead of a blocking step. Error states were rewritten with clear explanations and recovery paths.

None of these were large surface changes. The bet was that the cumulative effect of removing small friction at every step would meaningfully improve completion, and that the false completion signal alone was doing enough damage to move the primary metric.

Original upload screen showing the false completion signal after file selection
Redesigned upload screen with clear next step and prominent primary action

Before and after: the upload initiation step. The original green checkmark signaled completion; the redesign makes the next step explicit.

Original field mapping interface with visual noise and scattered help content
Redesigned field mapping interface with simplified layout and inline contextual guidance

Before and after: the field mapping step. Reduced visual noise and inline guidance replace the original scattered help content.

Original post-upload confirmation screen
Redesigned post-upload confirmation screen with clear success state and next steps

Before and after: post-upload confirmation. The redesign gives users a clear success state and a forward path.

Results

The experiment ran as an A/B test through Statsig from December 16, 2025 to January 27, 2026. Both the control group (2,201 users) and the test group (2,261 users) consisted of users who had already reached the file upload step, so baseline rates are higher than in the general trial population.
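A note on reading the numbers below: the reported lifts are relative (test rate divided by control rate), not percentage-point differences. A minimal sketch of that calculation using the trial-to-paid rates from this experiment (`relative_lift` is an illustrative helper, not part of the actual Statsig pipeline):

```python
def relative_lift(control_rate: float, test_rate: float) -> float:
    """Relative lift of the test group over control, as a fraction."""
    return test_rate / control_rate - 1

# Trial-to-paid conversion: 48.8% control vs. 50.2% test
print(f"{relative_lift(0.488, 0.502):+.1%}")
```

From the rounded display rates this prints +2.9%; the reported +3.0% comes from the unrounded underlying data.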

The redesigned flow shifted users toward deeper contact uploads. Uploads of 3+ contacts improved 3.0%, and downstream sending among power users (3+ emails) improved 2.2%. Those upstream gains cascaded into a 3.0% lift in trial-to-paid conversion: better uploads led to more sends, which led to more conversions.

Contact Upload Depth

The redesigned flow successfully pushed users toward uploading more contacts. The 3+ bucket (MEQ-3) improved 3.0% while the shallower buckets shrank, indicating that users who previously stopped at 1-2 contacts now continued through the full flow.

+3.0%

Contacts 3+ (82.6% → 85.1%)

-11.9%

Contacts exactly 1 (4.8% → 4.2%)

-43.3%

Contacts exactly 2 (1.4% → 0.8%)

[Bar chart: contact upload depth (3+, exactly 1, exactly 2), control vs. test]

Downstream Behavior

Email sending improved at every depth level, suggesting the deeper contact lists gave users more to work with and more reason to send. Overall send rate rose 2.4%, with improvement in every send-depth bucket.

+2.4%

Email sent any (56.6% → 58.0%)

+2.2%

Email sent 3+ (17.4% → 17.7%)

+0.0%

Email created (77.6% → 77.6%)

+0.6%

Second login (82.8% → 83.2%)

[Bar chart: downstream behavior (second login, email created, sent any, sent 3+), control vs. test]

Conversion

Trial-to-paid conversion rose 3.0%, the strongest balanced result of the three experiments. The causal chain is clear: better contact uploads led to more sends, which led to more conversions. ELTV per converter was slightly lower in the test group ($759 vs. $785), but the conversion lift more than compensated.

+3.0%

T2P conversion (48.8% → 50.2%)

+$215K

Est. annual revenue impact

[Chart: T2P conversion and 12-month ELTV per converter, control vs. test]

Reflection

False completion signals are worth naming as a specific category. They don't generate support tickets. Users don't report them as bugs. They just leave, convinced they finished something they didn't. You can't see the exit in the data because it looks like a successful session.

This experiment produced the most balanced improvement of all three. The causal chain was visible at every step: deeper uploads led to more sends, which led to more conversions. Each step improved because the one before it did. That kind of cascading effect is uncommon in growth experiments and validates the principle that reducing friction at a high-intent moment unlocks downstream behavior.

The approach proved sound. The team continues to apply this friction-reduction framework across other complex workflows in the product.