Salesforce Release Planning: 3 Platform Constraints Your DevOps Team Must Know [Checklist]
Salesforce release planning fails when teams apply standard DevOps patterns. Master 3 platform constraints: runbooks, sharing/LDV, and feature flags.
Salesforce is not a standard containerized app. Your DevOps team’s generic delivery principles will fail without understanding three platform constraints: runbooks for mandatory manual steps, deferred sharing calculations for LDV orgs, and dependency-aware feature flagging. Skip these and you trade predictable velocity for deployment anxiety.
Why standard DevOps patterns fail on Salesforce
I recently watched a highly capable technical team struggle because they applied generic software delivery principles to Salesforce, missing the “hidden” platform constraints.
This was a global FMCG company implementation. Multiple teams working in parallel: integration team, custom frontend team, dedicated DevOps team, ERP team. The DevOps team had strong generic DevOps skills. They understood CI/CD, they knew branching strategies, they could build pipelines. What they did not have was dedicated Salesforce experience.
The assumption was reasonable on the surface: DevOps is DevOps. If you can deploy containerized microservices reliably, you can deploy Salesforce metadata. The tooling looks similar. You have source control, you have pipelines, you have deployment targets.
But Salesforce is not a container. It is a multi-tenant platform with governor limits, shared database resources, and metadata dependencies that do not exist in a standard software stack. The constraints are real, and they are not optional.
Here are three Salesforce-specific realities that your release plan must account for.
1. Pre and post steps are mandatory (Runbook)
You cannot write your cutover plan the night before. Many configurations do not deploy cleanly via Metadata API.
CPQ configuration, Commerce Cloud setup, Consumer Goods Cloud data, AppExchange package installations: these have manual steps that no deployment tool fully automates. The Metadata API covers most declarative configuration, but “most” is not “all.” And the gaps are exactly where your go-live fails.
What requires runbooks:
- AppExchange package installations and upgrades (version dependencies, post-install scripts)
- CPQ quote templates and advanced configurations
- Commerce Cloud storefront setup and buyer group assignments
- Consumer Goods Cloud visit templates and route planning data
- Permission set assignments for managed package licenses
- Data loader operations for setup objects that are not metadata
- Connected App configurations with OAuth credentials
The rule is simple: if a manual step was not rehearsed in a sandbox, it does not go to production. Document every step. Run it against a full sandbox copy. Time it. Identify dependencies. Know who executes each step and what happens if it fails.
Automate what you can, but accept that “manual” is sometimes the only path. The goal is not zero manual steps. The goal is zero unrehearsed manual steps.
The real problem with runbooks is not writing them. It is maintaining them. During implementation, developers and consultants focus on building functionality. The pre and post deployment steps get forgotten, skipped, or simply not updated as the solution evolves.
I have seen this pattern repeatedly: deployment to the next environment fails, and no one knows if it is a missing manual step or an implementation bug. The team wastes hours investigating what should have been documented.
On projects I led, I required a conscious decision in every pull request. The PR must explicitly state either “post/pre deployment step is NOT needed for this change” or “step is required” with a link to the Jira ticket documenting it. Silence is not acceptable: it means the question was never asked. This simple gate catches missing steps before they become deployment failures.
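A gate like this is easy to automate in CI. Here is a minimal sketch in Python; the exact phrasings and the Jira key format are assumptions, not the wording any particular team must use:

```python
import re

# Hypothetical CI gate: every pull request description must state an explicit
# runbook decision. The phrasings and the Jira key pattern are assumptions.
REQUIRED_PATTERNS = [
    # "post/pre deployment step is NOT needed for this change"
    re.compile(r"deployment step is NOT needed", re.IGNORECASE),
    # "step is required", followed somewhere by a Jira-style ticket key
    re.compile(r"step is required[\s\S]*\b[A-Z][A-Z0-9]+-\d+\b"),
]

def pr_gate_passes(pr_description: str) -> bool:
    """Return True only if the PR makes an explicit pre/post-step decision."""
    return any(p.search(pr_description) for p in REQUIRED_PATTERNS)
```

Wired into a pipeline, a failing check blocks the merge until the author answers the question, which is the whole point: the gate forces the decision, not the paperwork.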
2. Sharing is a database event, not just metadata
Deploying changes to the sharing model triggers massive background calculations. This is not a metadata refresh. This is Salesforce recalculating record access for every user against every record that the sharing rule affects.
In Large Data Volume (LDV) orgs, this locks tables and degrades performance. I have seen deployments where a sharing rule change brought an org to a crawl for hours. Users could not access records. Reports timed out. Integrations failed because API calls took too long.
The platform does this because sharing in Salesforce is not a simple permission check. Record access is pre-calculated and stored for performance. When you change a sharing rule, the platform must recalculate that access. For an org with millions of records and thousands of users, that calculation is substantial.
What triggers recalculation:
- Adding or modifying sharing rules (owner-based or criteria-based)
- Changing role hierarchy structure
- Modifying public group membership
- Territory hierarchy changes
- Account team or opportunity team template changes
The solution: Defer Sharing Calculation.
Before deploying sharing model changes:
- Enable “Defer Sharing Calculations” in Setup (Sharing Settings); note that Salesforce Support may first need to activate this feature for your org
- Deploy your sharing model changes
- Schedule the recalculation for a quiet window (weekend, off-hours)
- Monitor the async job until completion
- Validate that access is correct before resuming normal operations
This converts an uncontrolled performance hit during deployment into a scheduled maintenance window you can plan around. The recalculation still happens, but you control when.
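The defer-and-monitor workflow amounts to a scheduling plus polling loop. A minimal sketch, as illustrative Python rather than a real API client: `get_recalculation_status` is a placeholder for however you check the async job (in a real org, the sharing recalculation job is visible under Setup):

```python
import time

def get_recalculation_status() -> str:
    """Placeholder: in a real org you would check the async sharing
    recalculation job in Setup or via an API of your choosing."""
    return "Completed"

def wait_for_recalculation(poll_seconds: int = 60, max_polls: int = 120) -> bool:
    """Poll the deferred sharing recalculation until it finishes.

    Returns True on completion, False if still running after max_polls.
    Raises if the job ends in a terminal failure state.
    """
    for _ in range(max_polls):
        status = get_recalculation_status()
        if status == "Completed":
            return True
        if status in ("Aborted", "Failed"):
            raise RuntimeError(f"Sharing recalculation ended with status {status}")
        time.sleep(poll_seconds)
    return False
```

The design point is the explicit terminal-state check: a recalculation that aborts silently leaves users with wrong record access, so “monitor until completion” must distinguish “done” from “died.”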
| Sharing Change Type | Impact Level | Recalculation Scope |
|---|---|---|
| Criteria-based sharing rule (add) | High | All records matching criteria |
| Owner-based sharing rule (modify) | High | All records owned by affected users |
| Role hierarchy change | Very High | All records visible through hierarchy |
| Public group membership | Medium | Records shared with that group |
| Territory assignment | High | All accounts in territory model |
Large Data Volume issues are easy to miss until they become critical. Lists load slowly. Reports time out. Users complain, but the symptoms look like “the system is just slow” rather than a specific architectural problem.
On a Healthcare and Life Sciences Cloud implementation, I had to explain why performance was degrading as the org grew. The root cause was data skew on lookup relationships and unclear data ownership. Records concentrated on specific parent accounts, creating hot spots that affected every query touching those objects. The sharing recalculation amplified the problem because every ownership change triggered reprocessing across skewed relationships.
These issues compound. What starts as “slightly slow” becomes “unusable” as data volume increases. By the time the symptoms are obvious, the fix requires architectural changes that should have been planned from the start.
3. Feature flags have “hard” dependencies
In other stacks, feature flags are straightforward. Deploy code behind a flag, enable it for a subset of users, validate, roll out gradually. If something breaks, flip the flag off. The code is isolated.
In Salesforce, hard references create a nightmare. You cannot simply “flag off” a field if a deployed Flow references it. You cannot disable an Apex class if a trigger calls it. The metadata dependencies are enforced at the platform level, and they do not respect your feature flagging strategy.
This is where generic DevOps experience fails. Teams accustomed to microservices assume they can isolate features behind flags the way they would in a Node.js or Java application. Salesforce does not work that way.
Where feature flags are possible in Salesforce:
- Custom Settings or Custom Metadata to control Apex behavior (if the Apex is written to check them)
- Permission Sets to control access to features (users see or do not see functionality)
- Dynamic Forms with visibility rules (show/hide fields based on criteria)
- Lightning App Builder page visibility (show/hide components)
- Flow entry conditions (skip flow execution based on criteria)
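The caveat in the first bullet — “if the Apex is written to check them” — is the crux. The guard pattern looks roughly like this, sketched in Python for brevity; in real Apex the flag store would be a Custom Metadata type, and the flag and function names below are invented for illustration:

```python
# Sketch of the Custom-Metadata-style flag check, in Python for brevity.
# In Apex the flag store would be a Custom Metadata type queried by name;
# the flag names here are illustrative assumptions.
FLAGS = {
    "New_Discount_Engine": True,
    "Async_Invoice_Sync": False,
}

def is_enabled(flag_name: str) -> bool:
    # Default to disabled: an unknown flag must never silently enable a feature.
    return FLAGS.get(flag_name, False)

def apply_discounts(order_total: float) -> float:
    if not is_enabled("New_Discount_Engine"):
        return order_total                       # legacy path: no change
    return round(order_total * 0.9, 2)           # flagged path: 10% discount
```

Note what the pattern requires: the code must have been written with the check in place. You cannot add `is_enabled` around logic that was already deployed without redeploying that logic, which is exactly why retrofitting flags is expensive.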
Where feature flags are impossible or extremely difficult:
- Fields referenced in validation rules, flows, or Apex (hard dependency)
- Objects with required master-detail relationships (cannot make optional)
- Apex triggers with direct DML operations (no flag check possible after deployment)
- Page layouts referenced in record types (must deploy together)
- Flows that reference specific fields or objects (cannot “flag off” the field)
- Report types with required fields (dependency baked in)
The fundamental problem: Salesforce metadata dependencies are compile-time, not runtime. When you deploy a Flow that references a field, that reference is validated at deployment. The field must exist. You cannot deploy the Flow “flagged off” and then deploy the field later. The platform will not let you.
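A crude but effective way to surface these compile-time references before planning a flag is to scan retrieved metadata sources for the component’s API name. A minimal sketch, operating on in-memory sample snippets; the file paths, field name, and metadata fragments are all hypothetical:

```python
# Crude dependency scan: find every metadata source that mentions a field's
# API name. The sample snippets and names below are hypothetical.
SOURCES = {
    "flows/Quote_Approval.flow-meta.xml":
        "<field>Discount_Tier__c</field>",
    "objects/Opportunity/validationRules/Tier_Required.validationRule-meta.xml":
        "ISBLANK(Discount_Tier__c)",
    "classes/QuoteService.cls":
        "opp.Discount_Tier__c = computeTier(opp);",
    "classes/AccountService.cls":
        "acc.Region__c = 'EMEA';",
}

def references(api_name: str) -> list[str]:
    """Return every source file that mentions the given API name."""
    return sorted(path for path, body in SOURCES.items() if api_name in body)
```

In practice you would point the same search at a full retrieved source tree. Text search is not a true dependency graph, but it is enough to answer the planning question: how many components would have to change before this field could be flagged off?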
Deep dive: Introducing feature flags mid-project
On the global FMCG implementation I mentioned earlier, the DevOps team wanted to introduce feature flagging mid-project. The implementation was already mature. Multiple workstreams had delivered functionality. The deadline pressure was intense.
This is the wrong moment and the wrong approach.
Feature flagging in Salesforce requires deep platform knowledge precisely because of those compile-time dependencies. You cannot bolt it onto an existing codebase without understanding every metadata dependency in that codebase. The DevOps team had the skills to implement a flagging system. What they did not have was the ability to assess which features could be flagged and which could not.
Why mid-project introduction fails:
- Existing code was not written for flags. Apex classes, Flows, and validation rules were built assuming they would always execute. Retrofitting flag checks requires touching every component, which means regression risk.
- Dependency chains are already deep. A field created in month one is now referenced by three Flows, two validation rules, and custom Apex. You cannot flag off that field without removing all those references first.
- Testing scope explodes. Every flag combination creates a test scenario. If you have ten features with flags, you have potentially 1,024 combinations to test. Mid-project introduction means discovering which combinations break.
- Timeline pressure prevents proper implementation. Flagging done right requires architecture changes. Flagging done fast requires compromises that become technical debt.
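The testing-scope point is plain arithmetic: n independent boolean flags yield 2^n possible states, which is why ten flags already mean 1,024 combinations:

```python
def flag_combinations(n_flags: int) -> int:
    """Number of distinct on/off states for n independent boolean flags."""
    return 2 ** n_flags
```

At twenty flags you are past a million states, which is why nobody tests every combination and why the test-state selection discussed below has to be deliberate rather than exhaustive.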
What proper feature flagging requires:
- Upfront architecture: Design for flagging from the start. Apex should check Custom Metadata before executing feature logic. Flows should have entry criteria that respect flags. Fields that might be flagged should not be required by other metadata.
- Dependency mapping: Before flagging any feature, map every metadata component that references it. If a flag would require removing ten dependencies to work, that feature cannot be cleanly flagged.
- Testing strategy: Decide which flag states you will test. Not every combination, but at least the states you plan to use: all flags on (production), all flags off (baseline), each feature individually off (rollback scenarios).
- Documentation: Every flag needs documentation of what it controls, what depends on it, and what the rollback procedure is. Undocumented flags become mysteries that no one touches.
The architectural consequence:
You cannot isolate code as cleanly as in a microservices architecture. Accept this. Design around it. Use packages for major feature boundaries. Use permission sets for user-level feature access. Use Custom Metadata for behavior toggles in code you control. But do not expect the flexibility of a flag-anything system.
There are multiple approaches to feature flagging in Salesforce, and the right choice depends on the project, its complexity, and what the business actually needs the flags to solve.
One pattern I have implemented successfully in ISV contexts uses Custom Permissions for feature access control combined with Custom Metadata for configuration. Custom Permissions integrate cleanly with Permission Sets, so you can enable features per user or per profile. Custom Metadata stores the flag states and any associated configuration, deployable across environments without data migration.
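The two layers of that pattern can be sketched as follows, in Python for brevity. In real Apex the per-user check would go through Custom Permissions (for example via `FeatureManagement.checkPermission`), and the configuration would live in Custom Metadata records; the user names, feature names, and configuration keys below are invented for illustration:

```python
# Sketch of the two-layer pattern: per-user access via permissions,
# org-wide configuration via a metadata-like store. Names are illustrative.
USER_PERMISSIONS = {
    "ada@example.com": {"Advanced_Pricing"},
    "bob@example.com": set(),
}

FEATURE_CONFIG = {
    "Advanced_Pricing": {"enabled": True, "max_discount_pct": 15},
}

def feature_active(user: str, feature: str) -> bool:
    """A feature runs only if the org-wide flag is on AND the user
    holds the matching permission."""
    config = FEATURE_CONFIG.get(feature, {})
    return bool(config.get("enabled")) and feature in USER_PERMISSIONS.get(user, set())
```

The separation matters operationally: the org-wide toggle deploys as metadata across environments with no data migration, while user-level access is administered through permission sets without touching the deployment pipeline.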
But this is one solution, not the solution. A global implementation with multiple integration teams has different needs than a managed package. The key is designing the flagging strategy before you need it, understanding which features can be isolated and which cannot, and accepting that Salesforce will never offer the runtime flexibility of a microservices architecture.
What happens when teams ignore Salesforce platform constraints
Ignoring these constraints leads to architectural degradation. Each workaround creates debt. Each unrehearsed deployment step becomes a production risk. Each “we’ll figure out the sharing later” becomes a performance incident.
The pattern is consistent: teams without Salesforce platform experience underestimate these constraints. They are not visible in documentation or demos. They emerge during implementation when timelines are tight and options are limited.
Signs you are ignoring constraints:
- Runbooks created during the cutover window, not before
- Sharing rule changes deployed during business hours
- Feature flags attempted without dependency analysis
- DevOps team working independently of Salesforce architects
- “It works in Dev” as the primary validation
Signs you are respecting constraints:
- Runbooks rehearsed at least twice before production
- Sharing changes scheduled for maintenance windows with deferred recalculation
- Feature isolation designed into architecture from the start
- DevOps team partnered with platform specialists
- Full sandbox deployment as the minimum validation standard
When you respect these constraints (runbooks, deferred sharing, dependency-aware flagging), you trade “deployment anxiety” for predictable velocity. Releases become routine because the surprises have been eliminated through preparation.
Key takeaways
- Runbooks are not optional. Document every manual step, rehearse it in sandbox, time it. If a step was not rehearsed, it does not go to production.
- Sharing changes are database events. Defer calculation before deployment, schedule recalculation for quiet windows, monitor until complete.
- Feature flags have platform limits. Salesforce metadata dependencies are compile-time. Design for flags from the start or accept that some features cannot be cleanly isolated.
- Platform knowledge is required. Generic DevOps skills are necessary but not sufficient. Partner your DevOps team with Salesforce architects who understand these constraints.
- Plan for constraints, do not discover them. Every constraint you discover during cutover is a constraint you should have discovered during planning.
Related
This post is part of a series on enterprise Salesforce implementation strategy:
- Before you build: Salesforce Discovery Phase: 4 Steps to Avoid Watermelon Projects covers how to run discovery that actually prevents project failure.
- Before you buy: 5 Questions That Expose OOTB Feature Gaps Before You Buy helps you evaluate whether “out-of-the-box” will actually work for your requirements.
- Before you go live: Salesforce Data Migration Strategy: 4 Steps That Prevent Day-1 Disasters covers how to approach data migration as a strategic business transition.
Original post
This article expands on a LinkedIn post from my feed: