I recently attended a Salesforce Data 360 enablement session in Copenhagen, where the focus wasn’t just on features but on architecture: how Salesforce Data 360 fits into a modern data landscape, how it supports Customer 360, and why it’s increasingly relevant in a world moving toward AI-assisted and agentic workflows.

What stood out most is that many conversations about Salesforce Data 360 start in the wrong place. People often try to categorize it as something familiar: a CDP, a data lake, a warehouse, or “Salesforce’s answer to Snowflake.” That mental shortcut is understandable, but it misses the point. Salesforce Data 360 is designed to operationalize customer data inside Salesforce, not replace your existing data platform.

The core problem Salesforce Data 360 is built to solve

Over the past decade, companies have invested massively in cloud data platforms like Snowflake, Databricks, Redshift, and BigQuery. These platforms have transformed how organizations ingest, store, and analyze information. Yet despite this progress, a persistent gap remains: while we’ve gotten very good at building data products, we’re still surprisingly bad at putting those data products to work inside the daily processes where customer decisions are made.

That gap is exactly where Salesforce Data 360 comes in. It complements your existing data ecosystem by bringing the right customer context into your Salesforce workflows, so data becomes actionable where sales, service, marketing, and increasingly AI agents do their work.

“Connecting data is not enough. The data needs context, otherwise you cannot act on it.”

That’s the heart of the Salesforce Data 360 story: connected data is table stakes; contextualized data enables action.


From integration to operationalization (why “activation” is the new bottleneck)

Modern organizations have gotten used to the idea that accumulating data is the path to value. But more data doesn’t automatically translate into better decisions. The real bottleneck today isn’t integration or storage; it’s activation. Or, in the language used in the session, operationalization: putting data into operations and into the flow of work.

It’s the difference between asking “Do we have the data?” and asking:

  • How do service agents make smarter decisions using ERP contract data?
  • How do sales reps see relevant financial signals without drowning Salesforce in replicated tables and custom objects?
  • How does an AI agent answer something as simple as “Am I entitled to free service?” with 100% confidence?

Those are not analytics questions. They’re operational questions. They live inside workflows, routing, entitlements, case handling, and real-time customer interactions, not inside dashboards or notebooks.
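To make the entitlement example concrete, here is a minimal sketch of what such a check could look like as a query against a harmonized layer, rather than against replicated CRM objects. Everything in it is illustrative: the endpoint URL, the Entitlement__dlm object, the field names, and the response shape are assumptions standing in for whatever your own Data 360 data model and query API actually expose.

```python
import requests

# All names below are hypothetical placeholders; substitute your org's
# actual Data 360 query endpoint, data model objects, and fields.
QUERY_URL = "https://<your-instance>.example.salesforce.com/api/v2/query"

def is_entitled_to_free_service(token: str, unified_individual_id: str) -> bool:
    """Answer 'Am I entitled to free service?' from harmonized data."""
    # Query a (hypothetical) harmonized entitlement object in place,
    # instead of replicating ERP contract rows into CRM custom objects.
    sql = (
        "SELECT free_service_eligible__c "
        "FROM Entitlement__dlm "
        f"WHERE unified_individual_id__c = '{unified_individual_id}' "
        "AND valid_to__c >= CURRENT_DATE"
    )
    resp = requests.post(
        QUERY_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"sql": sql},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: {"data": [[true], ...]}.
    rows = resp.json().get("data", [])
    return any(row[0] for row in rows)
```

The point of the sketch is the pattern, not the syntax: the answer is computed where the harmonized data lives, and only the decision enters the workflow.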

This is precisely where traditional approaches often fail. Reverse ETL, custom objects, point-to-point integrations, and stitched APIs can work, but they tend to turn into a fragile web of dependencies over time. And point-to-point integration does not scale well when your ambition is autonomous processes and high-trust automation.

And crucially: you don’t need 750 data sources to hit this wall. Even a simple landscape (Salesforce + ERP + a subscription platform) can create identity conflicts, latency issues, and inconsistent definitions of “customer” that break operational trust.


Why Salesforce Data 360 is different from Snowflake, Databricks, and “classic” CDPs

The simplest way to understand Salesforce Data 360 is this:

Snowflake and Databricks are built for analytics. Salesforce Data 360 is built for operations.

Cloud data platforms excel at building a unified layer of enterprise truth. But they are not designed to feed operational systems in real time with the semantic context, governance alignment, and platform-native integration required for transactional workflows, especially inside Salesforce.

That’s the native advantage of Salesforce Data 360: it acts as an operational customer data layer that turns analytical assets into operational signals inside Salesforce.

“Let data platforms do data integration, let Salesforce Data 360 operationalize the data inside Salesforce.”

This does not replace your lake or warehouse. It adds a missing middle layer that makes curated data decision-ready. It maps data to a canonical model, stitches identities, and exposes it to Salesforce apps without copying half the enterprise into CRM objects. And it does so through Salesforce’s governance and metadata model, which is why the data can behave as if it were Salesforce-native even when it comes from elsewhere.
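As a toy illustration of what “stitches identities” means (the actual identity resolution in Salesforce Data 360 is configured declaratively with layered match rules, not hand-coded), here is a single deterministic match rule collapsing records from three systems into unified profiles:

```python
from dataclasses import dataclass

@dataclass
class SourceRecord:
    source: str      # e.g. "CRM", "ERP", "subscriptions"
    source_id: str
    email: str
    phone: str

def match_key(rec: SourceRecord) -> str:
    """A deterministic match rule: normalized email, falling back to phone.

    Real identity resolution layers exact and fuzzy rules; this toy version
    only shows the idea of collapsing source records into one profile.
    """
    if rec.email:
        return "email:" + rec.email.strip().lower()
    return "phone:" + "".join(ch for ch in rec.phone if ch.isdigit())

def unify(records: list[SourceRecord]) -> dict[str, list[SourceRecord]]:
    """Group source records into unified profiles keyed by match key."""
    profiles: dict[str, list[SourceRecord]] = {}
    for rec in records:
        profiles.setdefault(match_key(rec), []).append(rec)
    return profiles

# Three systems; the first two records are really the same customer.
records = [
    SourceRecord("CRM", "003A1", "Ana@Example.com", "+45 11 22 33 44"),
    SourceRecord("ERP", "CUST-77", "ana@example.com", ""),
    SourceRecord("subscriptions", "sub-9", "", "+45 55 66 77 88"),
]
print({k: [r.source for r in v] for k, v in unify(records).items()})
# {'email:ana@example.com': ['CRM', 'ERP'], 'phone:4555667788': ['subscriptions']}
```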

That context becomes even more critical in an AI world. Without context, agents hallucinate or make brittle assumptions. With context and identity, you can build automation you can actually trust.


Built for activation, not analysis (why selectivity is a feature, not a limitation)

One of the most important architectural points is that Salesforce Data 360 is deliberately selective. It focuses on the data relevant to customer operations: the information that actually drives action in sales, service, marketing, commerce, and agent workflows.

This mindset matters because it is the opposite of a data lake philosophy. A data lake wants everything. A Customer 360 operational layer wants the right things: the signals that enable a decision.

That’s why Salesforce Data 360 is not the place to build enterprise-wide models or run heavy SQL transformations. It’s where curated and governed data, ideally shaped in your data platform, becomes available to:

  • Salesforce automations
  • Salesforce flows
  • AI agents and agentic workflows (a grounding sketch follows this list)
  • customer-facing apps and service consoles
  • segmentation and real-time decisioning
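For the AI-agent bullet above, the point of the operational layer is that an agent can be grounded in resolved, governed facts instead of guessing. A minimal sketch of that grounding step, with a hand-built profile dict standing in for the harmonized record a real Data 360 query would return (all keys are illustrative):

```python
def build_agent_context(profile: dict) -> str:
    """Turn a unified profile into grounded context for an AI agent.

    With this context injected into the prompt, the agent answers from
    resolved, sourced facts rather than inventing them.
    """
    return (
        f"Customer: {profile['name']} (unified id {profile['unified_id']})\n"
        f"Contract status: {profile['contract_status']} "
        f"(source: ERP, as of {profile['as_of']})\n"
        f"Free service entitlement: {'yes' if profile['free_service'] else 'no'}"
    )

# Hypothetical harmonized record, shaped the way an operational query
# might return it.
profile = {
    "unified_id": "ui-001",
    "name": "Ana Example",
    "contract_status": "active",
    "free_service": True,
    "as_of": "2025-01-31",
}
print(build_agent_context(profile))
```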

It’s also why the platform remains relevant for smaller and mid-sized companies. You don’t need petabytes. You need relevant, contextual data that drives smarter operational decisions.


Reducing technical debt while increasing agility in Salesforce

A common mistake organizations make is extending Salesforce with custom objects to accommodate large volumes of external data. It works in the short term but creates long-term issues: storage cost, performance degradation, schema drift, security complexity, and a growing web of integrations to maintain.

Reverse ETL can unintentionally accelerate this problem, because it pushes more and more modeled data into Salesforce, often as a patchwork of use-case-specific pipelines. Over time, that builds technical debt on both sides: the data platform and the CRM.

Salesforce Data 360 flips the pattern. Instead of constantly pushing data into Salesforce, it helps Salesforce work with harmonized customer data without bloating the CRM model. The result is fewer fragile integration touchpoints, less schema sprawl, better performance, and a more scalable architecture.

There’s an operating model angle too: by making activation more native, Data 360 can reduce the dependency on scarce data engineering capacity for every single business change. The business can move faster, but with guardrails.


Why Salesforce Data 360 matters even without “enterprise scale”

The misconception that Salesforce Data 360 is only for huge enterprises is understandable, but it doesn’t hold up in practice. The number of data sources isn’t the real driver of complexity. The driver is the number of operational decisions you want Salesforce to make reliably across systems.

Even with three core systems, you will face inconsistent data definitions, disconnected processes, fragmented customer context, and growing pressure for AI-powered experiences.

The most pragmatic advice from the session was use-case-driven: don’t start with 200 use cases. Start with one, prove value, and expand. A single entitlement check, an enriched service flow, a better segmentation model, or the unification of two identifiers can demonstrate value quickly and then scale out.


The bigger picture: Salesforce Data 360 complements, it doesn’t replace

Stepping back, the direction of the market is clear: a specialized operational data layer is becoming essential. ServiceNow, SAP, Adobe, and Microsoft are all building their own versions. Salesforce’s approach is Salesforce Data 360.

And the why is simple: modern, AI-enabled customer experiences require a layer where context, identity, and operational triggers come together.

That’s why I don’t see this as “Salesforce vs Snowflake” or “Data 360 vs Databricks.” It’s a layered architecture:

  • Snowflake/Databricks remain your analytical brain.
  • Salesforce Data 360 becomes the operational nervous system inside Salesforce.

This isn’t about choosing one or the other. It’s about connecting them in a way that makes data useful where it matters most: in front-office workflows, customer touchpoints, and the AI agents that increasingly support (or even run) those interactions.

Finally, beyond architecture, the session also covered practical topics like licensing, cost optimization, and implementation best practices, all of which are crucial if you want Salesforce Data 360 to deliver value without surprises. All in all, a strong toolbox, and one I’m looking forward to applying together with my NoA Ignite colleagues and our Salesforce customers.