Artur Kolasa

Salesforce Data Migration Strategy: 4 Steps That Prevent Day-1 Disasters [Guide]

Salesforce data migration fails when treated as lift-and-shift. A 4-step strategy from a CTA covering data ownership, quality, and volume testing.

Salesforce data migration fails when treated as lift-and-shift. Success requires four things: know exactly what you’re migrating and why, assign data owners with real sign-off authority, refuse to migrate garbage, and test at full production volume. Skip any of these and you hit “data panic” during validation.

The lift-and-shift trap

“Where is all my data?”

That question, asked during business validation by an end user, is the moment a project goes from green to red. I’ve watched it happen. The room gets quiet. The project manager checks their notes. The technical lead starts explaining. But there is no good explanation. The data simply was not migrated because no one thought it was needed, or it was there but duplicated, or it came across malformed.

This is the lift-and-shift trap: treating data migration as a technical exercise (move data from A to B) rather than a business decision. It is the pattern behind most Salesforce implementation failures I’ve seen.

The problem is not technical capability. Modern tools can move massive volumes of data. The problem is that no one asked the right questions before migration started. No one owned the decisions. No one enforced standards. And by the time you reach UAT, it is too late to fix without burning budget and timeline.

Why migration gets treated as an afterthought

Migration often sits at the end of project plans, squeezed into whatever time remains before go-live. There are reasons for this, and none of them are good.

It is not exciting. Data migration does not demo well. No stakeholder gets energized watching records move between systems. The features that get attention are the ones that look good in steering committee presentations. Migration? That is “just IT work.”

Assumptions go unchallenged. Everyone assumes someone else has thought through the data strategy. The business assumes the technical team knows what to migrate. The technical team assumes the business has validated the source data. Neither checks. Both are wrong.

Scope looks simple on paper. “Migrate customer records” seems straightforward until you discover five source systems with conflicting data models, a decade of duplicates, and field mappings that require business decisions no one has authority to make.

The wrong people own it. Data migration gets assigned to developers or data analysts. They can execute the technical work, but they cannot make the business decisions about what constitutes source of truth, which transformations are correct, or what data quality means for each domain.

The result is predictable. You arrive at business validation and discover the foundation was never solid. Everything built on top of it is now suspect.

4 steps to strategic data migration

These steps come from experience across enterprise Salesforce implementations, including Consumer Goods Cloud, B2B Commerce, and legacy system integrations where migration was the make-or-break factor.

Step 1: Define “What,” “Why,” and “From Where”

Every critical field needs three answers documented before any data moves.

What data? Not just “migrate customers.” Which specific fields? Which record types? Which statuses? Be precise. The difference between “migrate all accounts” and “migrate active accounts with revenue > $10K from the last 3 years” is the difference between a clean system and importing a decade of dead records.
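
The precision this step demands can be captured as an executable scope predicate. A minimal sketch, assuming hypothetical field names (`status`, `annual_revenue`, `last_activity_date`) on the source records:

```python
from datetime import date, timedelta

def in_migration_scope(account: dict, today: date = date(2024, 1, 1)) -> bool:
    """Encode the scope rule 'active accounts with revenue > $10K from
    the last 3 years' as an explicit, reviewable predicate."""
    cutoff = today - timedelta(days=3 * 365)
    return (
        account["status"] == "Active"
        and account["annual_revenue"] > 10_000
        and account["last_activity_date"] >= cutoff
    )

accounts = [
    {"status": "Active", "annual_revenue": 50_000, "last_activity_date": date(2023, 6, 1)},
    {"status": "Inactive", "annual_revenue": 80_000, "last_activity_date": date(2023, 6, 1)},
    {"status": "Active", "annual_revenue": 5_000, "last_activity_date": date(2023, 6, 1)},
]
in_scope = [a for a in accounts if in_migration_scope(a)]
# Only the first account qualifies.
```

Writing the rule down as code forces the vague "migrate customers" into decisions someone can review and sign off.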

Why is it needed? If data does not support a Day-1 process, it should not be migrated. This is the question that forces prioritization. When someone insists on migrating historical records, ask: which Day-1 process requires them? If the answer is “nice to have” or “we might need it someday,” that is not migration scope. That is archival, and it belongs in a different conversation.

From where? Identify the source system for every field. This sounds obvious, but on projects with multiple legacy systems, the same data often exists in multiple places with different values. CRM says the customer address is X. ERP says it is Y. Finance says it is Z. Which is the source of truth? This question must be answered before migration, not during.

This is also your only opportunity to plan for de-duplication. Legacy systems accumulate garbage over years. Duplicate records, test data that became production data, records for entities that no longer exist. Migration is the one moment when you can enforce cleanliness. Import the junk, and you pay to maintain it forever.
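
De-duplication only works if the match rule and the survivorship rule are both explicit. A sketch of each, with a hypothetical match key (normalized name plus email) that a real project's data owner would need to sign off on:

```python
def dedup_key(record: dict) -> tuple:
    # Hypothetical matching rule: normalized name + normalized email.
    name = " ".join(record["name"].lower().split())
    email = record["email"].strip().lower()
    return (name, email)

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the most recently modified record per key. This 'survivorship'
    rule is a business decision, not a technical one."""
    best: dict[tuple, dict] = {}
    for r in records:
        key = dedup_key(r)
        if key not in best or r["modified"] > best[key]["modified"]:
            best[key] = r
    return list(best.values())

legacy = [
    {"name": "Acme  Corp", "email": "OPS@ACME.COM", "modified": 2},
    {"name": "acme corp", "email": "ops@acme.com", "modified": 5},
    {"name": "Globex", "email": "info@globex.com", "modified": 1},
]
clean = deduplicate(legacy)
# Two survivors: the newer Acme record and Globex.
```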

| Question | Example Answer | Impact |
|---|---|---|
| What data? | Active customer accounts from SAP, created after 2020, with at least one order | Excludes 60% of legacy records that serve no Day-1 purpose |
| Why needed? | Required for order history display in customer portal | Clear traceability to business process |
| From where? | SAP as master for account data, legacy CRM for contact preferences | Resolves source-of-truth conflicts before migration |

Step 2: Appoint Data Owners, Not Just SMEs

Subject matter experts know the data. That is not enough. You need someone with authority to make decisions and accountability for the outcomes.

A data owner for each key domain must have the power to sign off on three things:

  1. “This is the source of truth.” When sources conflict, the data owner decides which system wins. No escalation. No committee deliberation. One person, one decision.

  2. “These transformations are correct.” Legacy systems use different picklist values, different status codes, different categorizations. Someone must validate that “Status = A” in the old system correctly maps to “Status = Active” in Salesforce. This is a business decision, not a technical one.

  3. “This data is clean.” Before migration, not after. The data owner accepts responsibility for the quality of their domain. If bad data makes it into Salesforce, they own the remediation.
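
The transformation sign-off in point 2 works best when the approved mapping lives in one reviewable place and anything unmapped fails loudly. A minimal sketch, with hypothetical status codes:

```python
# Hypothetical status mapping, signed off by the data owner.
STATUS_MAP = {
    "A": "Active",
    "I": "Inactive",
    "P": "Prospect",
}

def transform_status(legacy_value: str) -> str:
    """Fail loudly on unmapped values instead of silently guessing:
    an unknown code is a decision for the data owner, not the pipeline."""
    if legacy_value not in STATUS_MAP:
        raise ValueError(f"Unmapped legacy status {legacy_value!r}: escalate to data owner")
    return STATUS_MAP[legacy_value]
```

A raised error during a test run is cheap; a silently mis-mapped status discovered at UAT is not.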

What does this look like in practice? During a large Consumer Goods Cloud implementation, we established data owners for distinct domains: customer master data, product hierarchy, pricing, and order history. Each owner had explicit sign-off authority. When the product owner discovered the legacy system had three different category taxonomies across regions, they made the mapping decision in one meeting. Without that authority, the same decision would have required weeks of cross-functional alignment and probably a steering committee escalation.

The pattern is consistent: projects with clear data owners complete migration with fewer issues. When problems do occur, remediation is faster because someone owns the fix. Projects with diffuse ownership discover data problems late and spend unplanned budget fixing them under time pressure.

Step 3: Do Not Migrate Garbage

Migration is the one time you can enforce data quality at scale. Once bad data is in Salesforce, cleaning it becomes an ongoing operational burden. Someone will build reports on it. Someone will integrate systems with it. Someone will make decisions based on it. Then fixing it breaks everything downstream.

Archive, do not migrate. Historical records that serve no Day-1 process? Archive them. Make them accessible if truly needed, but do not pollute your new system with them. The cost of archival is a fraction of the cost of maintaining bad data indefinitely.

Fix formatting before migration. Phone numbers in inconsistent formats? Addresses with data quality issues? Names with encoding problems? Fix them in the migration pipeline. Do not import problems that become someone else’s job to clean up later.
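
A sketch of formatting cleanup in the pipeline. The rule here is deliberately naive (assumed `+1` default country code); a production pipeline should use a vetted phone-parsing library with per-country rules:

```python
import re

def normalize_phone(raw: str, default_country: str = "+1") -> str:
    """Naive normalization sketch for illustration only."""
    digits = re.sub(r"\D", "", raw)  # strip spaces, dashes, parens, dots
    if raw.strip().startswith("+"):
        return "+" + digits
    return default_country + digits

samples = ["(415) 555-0100", "+48 22 123 45 67", "415.555.0100"]
normalized = [normalize_phone(s) for s in samples]
# ['+14155550100', '+48221234567', '+14155550100']
```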

Validate against business rules. If Salesforce enforces required fields or validation rules, your migration data must comply. Discovering data quality issues when bulk loads fail is expensive. Discovering them when you try to run Day-1 processes is worse.
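
Pre-load validation can mirror the target org's rules so violations surface in the pipeline rather than as bulk-load failures. A sketch, assuming a simplified, hypothetical subset of required fields:

```python
REQUIRED_FIELDS = ["Name", "BillingCountry", "OwnerId"]  # hypothetical org rules

def validate_record(record: dict) -> list[str]:
    """Return a list of rule violations; an empty list means loadable."""
    errors = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            errors.append(f"missing required field: {field}")
    if record.get("AnnualRevenue", 0) < 0:
        errors.append("AnnualRevenue must be non-negative")
    return errors

record = {"Name": "Acme", "BillingCountry": "", "OwnerId": "some-owner-id"}
problems = validate_record(record)
# ['missing required field: BillingCountry']
```

Running this over the full extract produces an error report the data owner can act on weeks before cutover.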

The pushback you will hear: “We need all the historical data for reporting.” Challenge this. What specific reports require that data? What decisions depend on those reports? In most cases, the answer is vague. The data is wanted, not needed. Want is not a migration requirement.

I learned this lesson working on a project with decade-old order data. The business initially requested full historical migration. When we pushed back and asked which Day-1 processes required orders from 2015, the answer was “none, but leadership might want to see trends.” We proposed a summary data approach instead: aggregate metrics for historical analysis, detailed records only for recent years. Leadership approved. We avoided migrating millions of records that would have complicated the data model and slowed every query touching the order object.

Step 4: Test Volume, Not Just Samples

A 100-record test migration is nearly useless. It proves your mapping logic works on happy-path data. It tells you nothing about what happens at scale.

Full-volume testing reveals the problems that actually matter:

Governor limits and record locking. Salesforce has limits. Large data volumes hit them. A 10-million-record migration will encounter issues that never surface with small datasets. Record-locking during parallel loads, CPU timeout on triggers, SOQL query limits on complex objects. You discover these in testing or you discover them blocking your go-live.
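
One common mitigation for record locking, sketched below: sort child records by parent Id before batching, so each parallel batch touches as few distinct parents as possible. Field names and the tiny batch size are illustrative; real bulk batches are far larger:

```python
from itertools import islice

def batches_by_parent(contacts: list[dict], batch_size: int = 3):
    """Order child records by parent Id before chunking, so batches
    loaded in parallel contend for as few parent-record locks as possible."""
    ordered = sorted(contacts, key=lambda c: c["AccountId"])
    it = iter(ordered)
    while chunk := list(islice(it, batch_size)):
        yield chunk

contacts = [{"AccountId": f"A{i % 4}", "LastName": f"c{i}"} for i in range(12)]
chunks = list(batches_by_parent(contacts))
# 4 batches; here each batch touches exactly one parent account.
```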

Migration window reality. You have a cutover window. Is it 8 hours? 24 hours? 48 hours? Your migration must complete within that window, including time for validation and potential rollback. A test that processes 1,000 records per hour extrapolates to more than a year for 10 million records. Only full-volume testing tells you whether your actual approach fits your actual window.
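
The extrapolation is simple arithmetic worth doing before the test run, not after:

```python
def migration_hours(total_records: int, records_per_hour: int) -> float:
    return total_records / records_per_hour

test_rate = 1_000          # records/hour observed in a sample run
production = 10_000_000    # records to migrate

hours = migration_hours(production, test_rate)
# 10,000 hours, roughly 417 days: nowhere near a 48-hour cutover window.

required_rate = production / 48
# ~208,333 records/hour needed to fit a 48-hour window: ~208x the tested rate.
```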

Performance at scale. The system behaves differently with production data volumes. Queries that run instantly on 1,000 records time out on 10 million. Reports that render quickly become unusable. Full-volume testing is not optional.

| Test Type | What It Proves | What It Misses |
|---|---|---|
| Sample data (100 records) | Mapping logic is correct | Governor limits, performance, timing |
| Subset migration (10,000 records) | Basic load process works | Scale-related failures, realistic timing |
| Full volume migration | Actual go-live readiness | Nothing (if representative of production) |

Consumer Goods Cloud and B2B Commerce add complexity: product catalogs, pricing matrices, inventory snapshots, order history. Each object has different volumes and performance characteristics. A successful customer migration means nothing if your product catalog load takes three times longer than expected.


What happens when migration strategy succeeds

Business validation becomes what it should be: validating the new process. Users test workflows, not hunt for missing fields. They evaluate whether the system makes their job easier, not whether the data is trustworthy.

This builds trust immediately. The platform works. The data is there, and it is correct. Users adopt because adoption makes sense, not because they are forced to use something they do not trust.

The go-live conversation shifts from “can we go live?” to “we are ready.” No data panic. The cutover window was validated. The owners signed off. You built on rock, not sand.

What happens when migration strategy fails

You hit “data panic” at business validation. This is the moment the project goes from green to red. Someone asks where their data is, and no one has a good answer.

Go-live date is now at risk. The team scrambles to hold emergency data workshops, the same workshops that should have happened during discovery. This burns unplanned budget and erodes business confidence. Stakeholders who were supportive become skeptical. They are now questioning decisions they previously approved.

The project team loses credibility. Instead of a smooth launch, you face a difficult path to adoption. Users’ first impression of the platform is that it is “broken” or “missing data.” That impression is hard to reverse, even after remediation.

You may still go live. But you launch with the same data quality problems you had in the old system. The business doubts the solution’s value before they have even used it. Adoption becomes a struggle rather than a natural transition.


Key takeaway

Migration is not the final 10% of your project. It is the foundation for 100% of your business adoption.

Your platform’s ROI depends on trusted, usable data. Features do not matter if users cannot trust what they see on screen. If you treat migration as a technical lift-and-shift, you guarantee a broken process on Day-1. If you treat it as a strategic business transition, you set yourself up for adoption.

The four steps are straightforward: define what, why, and from where for every field. Appoint data owners with real authority. Refuse to migrate garbage. Test at full production volume. None of this is complicated. But it does require making hard decisions early, when they’re cheap, rather than discovering problems late, when they’re expensive.

Data migration happens on every Salesforce project to some degree. The question is whether you approach it strategically or treat it as an afterthought. The answer determines whether your users spend go-live week adopting the platform or questioning whether it works.

