Question 1 of 76

Northern Trail Outfitters (NTO) creates a calculated insight to compute recency, frequency, monetary (RFM) scores on its unified individuals. NTO then creates a segment based on these scores and activates it to a Marketing Cloud activation target. Which two actions are required when configuring the activation? Choose 2 answers To configure an activation to a Marketing Cloud activation target, you need to choose a segment and select contact points. Choosing a segment specifies which unified individuals you want to activate. Selecting contact points maps the attributes from the segment to the fields in the Marketing Cloud data extension. You do not need to add additional attributes or include the calculated insight in the activation, as these are already part of the segment definition. Create a Marketing Cloud Activation Target; Types of Data Targets in Data Cloud

A customer is concerned that the consolidation rate displayed in the identity resolution is quite low compared to their initial estimations. Which configuration change should a consultant consider in order to increase the consolidation rate? The consolidation rate is the amount by which source profiles are combined to produce unified profiles, calculated as 1 - (number of unified individuals / number of source individuals). For example, if you ingest 100 source records and create 80 unified profiles, your consolidation rate is 20%. To increase the consolidation rate, you need to increase the number of matches between source profiles, which can be done by adding more match rules. Match rules define the criteria for matching source profiles based on their attributes. By increasing the number of match rules, you can increase the chances of finding matches between source profiles and thus increase the consolidation rate. On the other hand, changing reconciliation rules, including additional attributes, or reducing the number of match rules can decrease the consolidation rate, as they can either reduce the number of matches or increase the number of unified profiles.
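As a quick illustration of the formula above, the consolidation rate calculation can be sketched in Python (the function and variable names are ours, for illustration only):

```python
def consolidation_rate(source_profiles: int, unified_profiles: int) -> float:
    """Consolidation rate = 1 - (number of unified individuals / number of source individuals)."""
    return 1 - (unified_profiles / source_profiles)

# 100 ingested source records consolidated into 80 unified profiles -> 20%
print(f"{consolidation_rate(100, 80):.0%}")  # prints "20%"
```

Adding match rules raises the number of source profiles that merge into each unified profile, which lowers the unified count and therefore raises this rate.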

A customer is trying to activate data from Data Cloud to an Amazon S3 Cloud File Storage Bucket. Which authentication type should the consultant recommend to connect to the S3 bucket from Data Cloud? To use the Amazon S3 Storage Connector in Data Cloud, the consultant needs to provide the S3 bucket name, region, and access key and secret key for authentication. The access key and secret key are generated by AWS and can be managed in the IAM console. The other options are not supported by the S3 Storage Connector or by Data Cloud.

A consultant has an activation that is set to publish every 12 hours, but has discovered that updates to the data prior to activation are delayed by up to 24 hours. Which two areas should a consultant review to troubleshoot this issue? Choose 2 answers The correct answers are B and C because calculated insights and segments are both dependent on the data ingestion process. Calculated insights are derived from the data model objects, and segments are subsets of data model objects that meet certain criteria. Therefore, both of them need to be updated after the data is ingested to reflect the latest changes. Data transformations are optional steps that can be applied to the data streams before they are mapped to the data model objects, so they are not relevant to the issue. Reviewing calculated insights to make sure they're run after the segments are refreshed (option D) is also incorrect because calculated insights are independent of segments and do not need to be refreshed after them.

Northern Trail Outfitters wants to use some of its Marketing Cloud data in Data Cloud. Which engagement channel data will require custom integration? CloudPage is a web page that can be personalized and hosted by Marketing Cloud. It is not one of the standard engagement channels that Data Cloud supports out of the box. To use CloudPage data in Data Cloud, a custom integration is required. The other engagement channels (SMS, email, and mobile push) are supported by Data Cloud and can be integrated using the Marketing Cloud Connector or the Marketing Cloud API. Data Cloud Overview, Marketing Cloud Connector, Marketing Cloud API

Which permission setting should a consultant check if the custom Salesforce CRM object is not available in New Data Stream configuration? To create a new data stream from a custom Salesforce CRM object, the consultant needs to confirm that the View All object permission is enabled in the source Salesforce CRM org. This permission allows the user to view all records associated with the object, regardless of sharing settings. Without this permission, the custom object will not be available in the New Data Stream configuration. Manage Access with Data Cloud Permission Sets Object Permissions

Which two common use cases can be addressed with Data Cloud? Choose 2 answers Data Cloud is a data platform that can help customers connect, prepare, harmonize, unify, query, analyze, and act on their data across various Salesforce and external sources. Some of the common use cases that can be addressed with Data Cloud are: Understand and act upon customer data to drive more relevant experiences. Data Cloud can help customers gain a 360-degree view of their customers by unifying data from different sources and resolving identities across channels. Data Cloud can also help customers segment their audiences, create personalized experiences, and activate data in any channel using insights and AI. Harmonize data from multiple sources with a standardized and extendable data model. Data Cloud can help customers transform and cleanse their data before using it, and map it to a common data model that can be extended and customized. Data Cloud can also help customers create calculated insights and related attributes to enrich their data and optimize identity resolution. The other two options are not common use cases for Data Cloud. Data Cloud does not provide data governance or backup and disaster recovery features, as these are typically handled by other Salesforce or external solutions. Learn How Data Cloud Works About Salesforce Data Cloud Discover Use Cases for the Platform Understand Common Data Analysis Use Cases

Where is value suggestion for attributes in segmentation enabled when creating the DMO? Value suggestion for attributes in segmentation is a feature that allows you to see and select the possible values for a text field when creating segment filters. You can enable or disable this feature for each data model object (DMO) field in the DMO record home. Value suggestion can be enabled for up to 500 attributes for your entire org. It can take up to 24 hours for suggested values to appear. To use value suggestion when creating segment filters, you need to drag the attribute onto the canvas and start typing in the Value field for an attribute. You can also select multiple values for some operators. Value suggestion is not available for attributes with more than 255 characters or for relationships that are one-to-many (1:N). Use Value Suggestions in Segmentation, Considerations for Selecting Related Attributes

A Data Cloud customer wants to adjust their identity resolution rules to increase their accuracy of matches. Rather than matching on email address, they want to review a rule that joins their CRM Contacts with their Marketing Contacts, where both use the CRM ID as their primary key. Which two steps should the consultant take to address this new use case? Choose 2 answers To address this new use case, the consultant should map the primary key from the two systems to Party Identification, using CRM ID as the identification name for both, and create a matching rule based on party identification that matches on CRM ID as the party identification name. This way, the consultant can ensure that the CRM Contacts and Marketing Contacts are matched based on their CRM ID, which is a unique identifier for each individual. By using Party Identification, the consultant can also leverage the benefits of this attribute, such as being able to match across different entities and sources, and being able to handle multiple values for the same individual. The other options are incorrect because they either do not use the CRM ID as the primary key, or they do not use Party Identification as the attribute type. Configure Identity Resolution Rulesets, Identity Resolution Match Rules, Data Cloud Identity Resolution Ruleset, Data Cloud Identity Resolution Config Input

Which consideration related to the way Data Cloud ingests CRM data is true? The correct answer is D. The CRM Connector allows standard fields to stream into Data Cloud in real time. This means that any changes to the standard fields in the CRM data source are reflected in Data Cloud almost instantly, without waiting for the next scheduled synchronization. This feature enables Data Cloud to have the most up-to-date and accurate CRM data for segmentation and activation. The other options are incorrect for the following reasons: A. CRM data can be manually refreshed at any time by clicking the Refresh button on the data stream detail page. This option is false. B. The CRM Connector's synchronization times can be customized to up to 60-minute intervals, not 15-minute intervals. This option is false. C. Formula fields are not refreshed at regular sync intervals, but only at the next full refresh. A full refresh is a complete data ingestion process that occurs once every 24 hours or when manually triggered. This option is false. 1: Connect and Ingest Data in Data Cloud article on Salesforce Help 2: Data Sources in Data Cloud unit on Trailhead 3: Data Cloud for Admins module on Trailhead 4: [Formula Fields in Data Cloud] unit on Trailhead 5: [Data Streams in Data Cloud] unit on Trailhead

What does the Source Sequence reconciliation rule do in identity resolution? The Source Sequence reconciliation rule sets the priority of specific data sources when building attributes in a unified profile, such as a first or last name. This rule allows you to define which data source should be used as the primary source of truth for each attribute, and which data sources should be used as fallbacks in case the primary source is missing or invalid. For example, you can set the Source Sequence rule to use data from Salesforce CRM as the first priority, data from Marketing Cloud as the second priority, and data from Google Analytics as the third priority for the first name attribute. This way, the unified profile will use the first name value from Salesforce CRM if it exists, otherwise it will use the value from Marketing Cloud, and so on. This rule helps you to ensure the accuracy and consistency of the unified profile attributes across different data sources. Reference: Salesforce Data Cloud Consultant Exam Guide, Identity Resolution, Reconciliation Rules

Which two dependencies prevent a data stream from being deleted? Choose 2 answers To delete a data stream in Data Cloud, the underlying data lake object (DLO) must not have any dependencies or references to other objects or processes. The following two dependencies prevent a data stream from being deleted: Data transform: This is a process that transforms the ingested data into a standardized format and structure for the data model. A data transform can use one or more DLOs as input or output. If a DLO is used in a data transform, it cannot be deleted until the data transform is removed or modified. Data model object: This is an object that represents a type of entity or relationship in the data model. A data model object can be mapped to one or more DLOs to define its attributes and values. If a DLO is mapped to a data model object, it cannot be deleted until the mapping is removed or changed. 1: Delete a Data Stream article on Salesforce Help 2: [Data Transforms in Data Cloud] unit on Trailhead 3: [Data Model in Data Cloud] unit on Trailhead

What should a user do to pause a segment activation with the intent of using that segment again? The correct answer is A. Deactivate the segment. If a segment is no longer needed, it can be deactivated through Data Cloud and applies to all chosen targets. A deactivated segment no longer publishes, but it can be reactivated at any time. This option allows the user to pause a segment activation with the intent of using that segment again. The other options are incorrect for the following reasons: B. Delete the segment. This option permanently removes the segment from Data Cloud and cannot be undone. This option does not allow the user to use the segment again. C. Skip the activation. This option skips the current activation cycle for the segment, but does not affect the future activation cycles. This option does not pause the segment activation indefinitely. D. Stop the publish schedule. This option stops the segment from publishing to the chosen targets, but does not deactivate the segment. This option does not pause the segment activation completely. 1: Deactivated Segment article on Salesforce Help 2: Delete a Segment article on Salesforce Help 3: Skip an Activation article on Salesforce Help 4: Stop a Publish Schedule article on Salesforce Help

When creating a segment on an individual, what is the result of using two separate containers linked by an AND as shown below? GoodsProduct | Count | At Least | 1 | Color | Is Equal To | red AND GoodsProduct | Count | At Least | 1 | PrimaryProductCategory | Is Equal To | shoes. When creating a segment on an individual, using two separate containers linked by an AND means that the individual must satisfy both of the conditions in the containers. In this case, the individual must have purchased at least one product with the color attribute equal to 'red' and at least one product with the primary product category attribute equal to 'shoes'. The products do not have to be the same or purchased in the same transaction. Therefore, the correct answer is A. The other options are incorrect because they imply different logical operators or conditions. Option B implies that the individual must have purchased a single product that has both the color attribute equal to 'red' and the primary product category attribute equal to 'shoes'. Option C implies that the individual must have purchased only one product that has both the color attribute equal to 'red' and the primary product category attribute equal to 'shoes' and no other products. Option D implies that the individual must have purchased either one product with the color attribute equal to 'red' or one product with the primary product category attribute equal to 'shoes' or both, which is equivalent to using an OR operator instead of an AND operator. Create a Container for Segmentation Create a Segment in Data Cloud Navigate Data Cloud Segmentation
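The container logic described above can be sketched in Python with set intersection; the purchase records and individual IDs below are hypothetical, invented for illustration:

```python
# Hypothetical purchase records for three individuals (not from the source material).
purchases = [
    {"individual": "A", "color": "red", "category": "shoes"},   # one product satisfies both
    {"individual": "B", "color": "red", "category": "hats"},
    {"individual": "B", "color": "blue", "category": "shoes"},  # two separate products
    {"individual": "C", "color": "red", "category": "hats"},    # red only -> excluded
]

bought_red = {p["individual"] for p in purchases if p["color"] == "red"}
bought_shoes = {p["individual"] for p in purchases if p["category"] == "shoes"}

# Two containers joined by AND: each condition may be satisfied by a different product.
segment = bought_red & bought_shoes
print(sorted(segment))  # prints "['A', 'B']"
```

Note that individual B qualifies even though no single product is both red and shoes, which is exactly why option B's single-product reading is incorrect.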

What should an organization use to stream inventory levels from an inventory management system into Data Cloud in a fast and scalable, near-real-time way? The Ingestion API is a RESTful API that allows you to stream data from any source into Data Cloud in a fast and scalable way. You can use the Ingestion API to send data from your inventory management system into Data Cloud as JSON objects, and then use Data Cloud to create data models, segments, and insights based on your inventory data. The Ingestion API supports both batch and streaming modes, and can handle up to 100,000 records per second. The Ingestion API also provides features such as data validation, encryption, compression, and retry mechanisms to ensure data quality and security. Ingestion API Developer Guide, Ingest Data into Data Cloud
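As a rough sketch of what a streaming ingestion call might look like, the snippet below builds a JSON payload for hypothetical inventory records. The endpoint path, connector name, and object name are assumptions for illustration, not taken from the Ingestion API documentation:

```python
import json

# Illustrative endpoint; the tenant endpoint, connector name, and object name are assumptions.
INGEST_URL = "https://<tenant-endpoint>/api/v1/ingest/sources/Inventory_Connector/InventoryLevel"

def build_payload(records: list[dict]) -> str:
    # The streaming endpoint accepts a JSON body whose "data" array holds the records.
    return json.dumps({"data": records})

payload = build_payload([
    {"sku": "TENT-001", "warehouse": "WEST", "quantity": 42},
    {"sku": "STOVE-09", "warehouse": "EAST", "quantity": 7},
])
# POST `payload` to INGEST_URL with a Bearer token to stream the records into Data Cloud.
```

In practice the records must match the schema registered for the Ingestion API connector, and authentication is handled through a connected app and OAuth token.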

Northern Trail Outfitters (NTO), an outdoor lifestyle clothing brand, recently started a new line of business. The new business specializes in gourmet camping food. For business reasons as well as security reasons, it's important to NTO to keep all Data Cloud data separated by brand. Which capability best supports NTO's desire to separate its data by brand? Data spaces are logical containers that allow you to separate and organize your data by different criteria, such as brand, region, product, or business unit. Data spaces can help you manage data access, security, and governance, as well as enable cross-cloud data integration and activation. For NTO, data spaces can support their desire to separate their data by brand, so that they can have different data models, rules, and insights for their outdoor lifestyle clothing and gourmet camping food businesses. Data spaces can also help NTO comply with any data privacy and security regulations that may apply to their different brands. The other options are incorrect because they do not provide the same level of data separation and organization as data spaces. Data streams are used to ingest data from different sources into Data Cloud, but they do not separate the data by brand. Data model objects are used to define the structure and attributes of the data, but they do not isolate the data by brand. Data sources are used to identify the origin and type of the data, but they do not partition the data by brand. Data Spaces Overview, Create Data Spaces, Data Privacy and Security in Data Cloud, Data Streams Overview, Data Model Objects Overview, [Data Sources Overview]

Cumulus Financial created a segment called High Investment Balance Customers. This is a foundational segment that includes several segmentation criteria the marketing team should consistently use. Which feature should the consultant suggest the marketing team use to ensure this consistency when creating future, more refined segments? Nested segments are segments that include or exclude one or more existing segments. They allow the marketing team to reuse filters and maintain consistency in their data by using an existing segment to build a new one. For example, the marketing team can create a nested segment that includes High Investment Balance Customers and excludes customers who have opted out of email marketing. This way, they can leverage the foundational segment and apply additional criteria without duplicating the rules. The other options are not the best features to ensure consistency because: B. A calculated insight is a data object that performs calculations on data lake objects or CRM data and returns a result. It is not a segment and cannot be used for activation or personalization. C. A data kit is a bundle of packageable metadata that can be exported and imported across Data Cloud orgs. It is not a feature for creating segments, but rather for sharing components. D. Cloning a segment creates a copy of the segment with the same rules and filters. It does not allow the marketing team to add or remove criteria from the original segment, and it may create confusion and redundancy. Create a Nested Segment - Salesforce, Save Time with Nested Segments (Generally Available) - Salesforce, Calculated Insights - Salesforce, Create and Publish a Data Kit Unit | Salesforce Trailhead, Create a Segment in Data Cloud - Salesforce

Cumulus Financial uses Service Cloud as its CRM and stores mobile phone, home phone, and work phone as three separate fields for its customers on the Contact record. The company plans to use Data Cloud and ingest the Contact object via the CRM Connector. What is the most efficient approach that a consultant should take when ingesting this data to ensure all the different phone numbers are properly mapped and available for use in activation? The most efficient approach that a consultant should take when ingesting this data to ensure all the different phone numbers are properly mapped and available for use in activation is B. Ingest the Contact object and use streaming transforms to normalize the phone numbers from the Contact data stream into a separate Phone data lake object (DLO) that contains three rows, and then map this new DLO to the Contact Point Phone data map object. This approach allows the consultant to use the streaming transforms feature of Data Cloud, which enables data manipulation and transformation at the time of ingestion, without requiring any additional processing or storage. Streaming transforms can be used to normalize the phone numbers from the Contact data stream, such as removing spaces, dashes, or parentheses, and adding country codes if needed. The normalized phone numbers can then be stored in a separate Phone DLO, which can have one row for each phone number type (work, home, mobile). The Phone DLO can then be mapped to the Contact Point Phone data map object, which is a standard object that represents a phone number associated with a contact point. This way, the consultant can ensure that all the phone numbers are available for activation, such as sending SMS messages or making calls to the customers. The other options are not as efficient as option B. Option A is incorrect because it does not normalize the phone numbers, which may cause issues with activation or identity resolution. 
Option C is incorrect because it requires creating a calculated insight, which is an additional step that consumes more resources and time than streaming transforms. Option D is incorrect because it requires creating formula fields in the Contact data stream, which may not be supported by the CRM Connector or may cause conflicts with the existing fields in the Contact object. Salesforce Data Cloud Consultant Exam Guide, Data Ingestion and Modeling, Streaming Transforms, Contact Point Phone
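The normalization and pivot step described above can be sketched in Python. The actual streaming transform would be defined in Data Cloud rather than Python; the field-to-type mapping below mirrors the standard Contact phone fields, and the record ID and sample values are invented:

```python
import re

def normalize_phone(raw: str) -> str:
    """Strip spaces, dashes, and parentheses; keep a leading + and the digits."""
    return re.sub(r"[^\d+]", "", raw)

def to_phone_rows(contact: dict) -> list[dict]:
    """Pivot the three Contact phone fields into one row per phone type,
    mirroring what the streaming transform would write to the Phone DLO."""
    rows = []
    for field, phone_type in [("MobilePhone", "Mobile"), ("HomePhone", "Home"), ("Phone", "Work")]:
        value = contact.get(field)
        if value:
            rows.append({"ContactId": contact["Id"], "Type": phone_type,
                         "Number": normalize_phone(value)})
    return rows

# Hypothetical Contact record; one source row becomes three Phone DLO rows.
rows = to_phone_rows({"Id": "003XX01", "MobilePhone": "(415) 555-0100",
                      "HomePhone": "415-555-0101", "Phone": "415 555 0102"})
```

Each resulting row can then map cleanly to the Contact Point Phone data model object, with the phone type preserved as an attribute.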

A customer has a Master Customer table from their CRM to ingest into Data Cloud. The table contains a name and primary email address, along with other personally identifiable information (PII). How should the fields be mapped to support identity resolution? To support identity resolution in Data Cloud, the fields from the Master Customer table should be mapped to the standard data model objects that are designed for this purpose. The Individual object is used to store the name and other personally identifiable information (PII) of a customer, while the Contact Phone Email object is used to store the primary email address and other contact information of a customer. These objects are linked by a relationship field that indicates the contact information belongs to the individual. By mapping the fields to these objects, Data Cloud can use the identity resolution rules to match and reconcile the profiles from different sources based on the name and email address fields. The other options are not recommended because they either create a new custom object that is not part of the standard data model, or map all fields to the Customer object that is not intended for identity resolution, or map all fields to the Individual object that does not have a standard email address field. Data Modeling Requirements for Identity Resolution, Create Unified Individual Profiles

Cloud Kicks received a Request to be Forgotten by a customer. In which two ways should a consultant use Data Cloud to honor this request? Choose 2 answers To honor a Request to be Forgotten by a customer, a consultant should use Data Cloud in two ways: Add the Individual ID to a headerless file and use the delete from file functionality. This option allows the consultant to delete multiple Individuals from Data Cloud by uploading a CSV file with their IDs. The deletion process is asynchronous and can take up to 24 hours to complete. Use the Consent API to suppress processing and delete the Individual and related records from source data streams. This option allows the consultant to submit a Data Deletion request for an Individual profile in Data Cloud using the Consent API. A Data Deletion request deletes the specified Individual entity and any entities where a relationship has been defined between that entity's identifying attribute and the Individual ID attribute. The deletion process is reprocessed at 30, 60, and 90 days to ensure a full deletion. The other options are not correct because: Deleting the data from the incoming data stream and performing a full refresh will not delete the existing data in Data Cloud, only the new data from the source system. Using Data Explorer to locate and manually remove the Individual will not delete the related records from the source data streams, only the Individual entity in Data Cloud. Delete Individuals from Data Cloud Requesting Data Deletion or Right to Be Forgotten Data Refresh for Data Cloud [Data Explorer]
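The delete-from-file step can be sketched in Python: the upload is a headerless CSV with one Individual ID per row (the IDs below are invented placeholders, not real Individual IDs):

```python
import csv
import io

def build_delete_file(individual_ids: list[str]) -> str:
    """Write Individual IDs to a headerless CSV: one ID per row, no header line."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for individual_id in individual_ids:
        writer.writerow([individual_id])
    return buf.getvalue()

# The IDs below are hypothetical examples for illustration.
content = build_delete_file(["0PKxx0000004C92", "0PKxx0000004C93"])
```

The resulting file is then uploaded through the delete-from-file functionality, and the asynchronous deletion can take up to 24 hours to complete.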

Cumulus Financial uses Data Cloud to segment banking customers and activate them for direct mail via a Cloud File Storage activation. The company also wants to analyze individuals who have been in the segment within the last 2 years. Which Data Cloud component allows for this? Data Cloud allows customers to analyze the segment membership history of individuals using the Segment Membership data model object. This object stores information about when an individual joined or left a segment, and can be used to create reports and dashboards to track segment performance over time. Cumulus Financial can use this object to filter individuals who have been in the segment within the last 2 years and compare them with other metrics. The other options are not Data Cloud components that allow for this analysis. Segment exclusion is a feature that allows customers to remove individuals from a segment based on another segment. Nested segments are segments that are created from other segments using logical operators. Calculated insights are derived attributes that are created from existing data using formulas. Segment Membership Data Model Object Data Cloud Reports and Dashboards Create a Segment in Data Cloud
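The "in the segment within the last 2 years" filter over Segment Membership data can be sketched in Python; the membership rows and timestamps below are hypothetical, invented for illustration:

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical Segment Membership rows: when each individual joined the segment.
memberships = [
    {"individual": "A", "joined": now - timedelta(days=100)},   # within the last 2 years
    {"individual": "B", "joined": now - timedelta(days=900)},   # joined too long ago
]

cutoff = now - timedelta(days=365 * 2)
in_last_two_years = sorted({m["individual"] for m in memberships if m["joined"] >= cutoff})
```

In Data Cloud itself this filter would be expressed against the Segment Membership data model object's timestamp fields rather than in code.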

What is Data Cloud's primary value to customers? Data Cloud is a platform that enables you to activate all your customer data across Salesforce applications and other systems. Data Cloud allows you to create a unified profile of each customer by ingesting, transforming, and linking data from various sources, such as CRM, marketing, commerce, service, and external data providers. Data Cloud also provides insights and analytics on customer behavior, preferences, and needs, as well as tools to segment, target, and personalize customer interactions. Data Cloud's primary value to customers is to provide a unified view of a customer and their related data, which can help you deliver better customer experiences, increase loyalty, and drive growth. Salesforce Data Cloud, When Data Creates Competitive Advantage

During an implementation project, a consultant completed ingestion of all data streams for their customer. Prior to segmenting and acting on that data, which additional configuration is required? After ingesting data from different sources into Data Cloud, the additional configuration that is required before segmenting and acting on that data is Identity Resolution. Identity Resolution is the process of matching and reconciling source profiles from different data sources and creating unified profiles that represent a single individual or entity. Identity Resolution enables you to create a 360-degree view of your customers and prospects, and to segment and activate them based on their attributes and behaviors. To configure Identity Resolution, you need to create and deploy a ruleset that defines the match rules and reconciliation rules for your data. The other options are incorrect because they are not required before segmenting and acting on the data. Data Activation is the process of sending data from Data Cloud to other Salesforce clouds or external destinations for marketing, sales, or service purposes. Calculated Insights are derived attributes that are computed based on the source or unified data, such as lifetime value, churn risk, or product affinity. Data Mapping is the process of mapping source attributes to unified attributes in the data model. These configurations can be done after segmenting and acting on the data, or in parallel with Identity Resolution, but they are not prerequisites for it. Identity Resolution Overview, Segment and Activate Data in Data Cloud, Configure Identity Resolution Rulesets, Data Activation Overview, Calculated Insights Overview, [Data Mapping Overview]

Northern Trail Outfitters (NTO) wants to connect their B2C Commerce data with Data Cloud and bring two years of transactional history into Data Cloud. What should NTO use to achieve this? The B2C Commerce Starter Bundles are predefined data streams that ingest order and product data from B2C Commerce into Data Cloud. However, the starter bundles only bring in the last 90 days of data by default. To bring in two years of transactional history, NTO needs to use a custom extract from B2C Commerce that includes the historical data and configure the data stream to use the custom extract as the source. The other options are not sufficient to achieve this because: A. B2C Commerce Starter Bundles only ingest the last 90 days of data by default. B. Direct Sales Order entity ingestion is not a supported method for connecting B2C Commerce data with Data Cloud. Data Cloud does not provide a direct-access connection for B2C Commerce data, only data ingestion. C. Direct Sales Product entity ingestion is not a supported method for connecting B2C Commerce data with Data Cloud. Data Cloud does not provide a direct-access connection for B2C Commerce data, only data ingestion. Create a B2C Commerce Data Bundle - Salesforce, B2C Commerce Connector - Salesforce, Salesforce B2C Commerce Pricing Plans & Costs

A customer has a requirement to receive a notification whenever an activation fails for a particular segment. Which feature should the consultant use to solution for this use case? The feature that the consultant should use to solution for this use case is C. Activation alert. Activation alerts are notifications that are sent to users when an activation fails or succeeds for a segment. Activation alerts can be configured in the Activation Settings page, where the consultant can specify the recipients, the frequency, and the conditions for sending the alerts. Activation alerts can help the customer to monitor the status of their activations and troubleshoot any issues that may arise. Salesforce Data Cloud Consultant Exam Guide, Activation Alerts

Which two steps should a consultant take if a successfully configured Amazon S3 data stream fails to refresh with a "NO FILE FOUND" error message? Choose 2 answers A "NO FILE FOUND" error message indicates that Data Cloud cannot access or locate the file from the Amazon S3 source. There are two possible reasons for this error and two corresponding steps that a consultant should take to troubleshoot it: The Data Cloud user does not have the correct permissions to read the file from the Amazon S3 bucket. This could happen if the user's permission set or profile does not include the Data Cloud Data Stream Read permission, or if the user's Amazon S3 credentials are invalid or expired. To fix this issue, the consultant should check and update the user's permissions and credentials in Data Cloud and Amazon S3, respectively. The file does not exist in the specified bucket location. This could happen if the file name or path has changed, or if the file has been deleted or moved from the Amazon S3 bucket. To fix this issue, the consultant should check and verify the file name and path in the Amazon S3 bucket, and update the data stream configuration in Data Cloud accordingly. Create Amazon S3 Data Stream in Data Cloud, How to Use the Amazon S3 Storage Connector in Data Cloud, Amazon S3 Connection
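For the second check, a small helper can verify that the bucket name and file path configured on the data stream match the actual S3 location (a minimal sketch; the example bucket and path are invented):

```python
def split_s3_path(s3_url: str) -> tuple[str, str]:
    """Split an s3://bucket/prefix/file path into (bucket, key) so the bucket
    name and file path configured on the data stream can be compared against
    what actually exists in the S3 bucket."""
    assert s3_url.startswith("s3://"), "expected an s3:// URL"
    bucket, _, key = s3_url[len("s3://"):].partition("/")
    return bucket, key

# Confirm the data stream's configured path points at the intended object.
bucket, key = split_s3_path("s3://nto-exports/daily/contacts.csv")
```

With the bucket and key in hand, the consultant can check in the AWS console (or with a HEAD request using valid credentials) whether the object exists at that exact path, which distinguishes a permissions problem from a missing or moved file.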

A consultant is discussing the benefits of Data Cloud with a customer that has multiple disjointed data sources. Which two functional areas should the consultant highlight in relation to managing customer data? Choose 2 answers Data Cloud is an open and extensible data platform that enables smarter, more efficient AI with secure access to first-party and industry data. Two functional areas that the consultant should highlight in relation to managing customer data are: Data Harmonization: Data Cloud harmonizes data from multiple sources and formats into a common schema, enabling a single source of truth for customer data. Data Cloud also applies data quality rules and transformations to ensure data accuracy and consistency. Unified Profiles: Data Cloud creates unified profiles of customers and prospects by linking data across different identifiers, such as email, phone, cookie, and device ID. Unified profiles provide a holistic view of customer behavior, preferences, and interactions across channels and touchpoints. The other options are not correct because: Master Data Management: Master Data Management (MDM) is a process of creating and maintaining a single, consistent, and trusted source of master data, such as product, customer, supplier, or location data. Data Cloud does not provide MDM functionality, but it can integrate with MDM solutions to enrich customer data. Data Marketplace: Data Marketplace is a feature of Data Cloud that allows users to discover, access, and activate data from third-party providers, such as demographic, behavioral, and intent data. Data Marketplace is not a functional area related to managing customer data, but rather a source of external data that can enhance customer data. Salesforce Data Cloud [Data Harmonization for Data Cloud] [Unified Profiles for Data Cloud] [What is Master Data Management?] [Integrate Data Cloud with Master Data Management] [Data Marketplace for Data Cloud]

A retailer wants to unify profiles using Loyalty ID which is different than the unique ID of their customers. Which object should the consultant use in identity resolution to perform exact match rules on the Loyalty ID? The Party Identification object is the correct object to use in identity resolution to perform exact match rules on the Loyalty ID. The Party Identification object is a child object of the Individual object that stores different types of identifiers for an individual, such as email, phone, loyalty ID, social media handle, etc. Each identifier has a type, a value, and a source. The consultant can use the Party Identification object to create a match rule that compares the Loyalty ID type and value across different sources and links the corresponding individuals. The other options are not correct objects to use in identity resolution to perform exact match rules on the Loyalty ID. The Loyalty Identification object does not exist in Data Cloud. The Individual object is the parent object that represents a unified profile of an individual, but it does not store the Loyalty ID directly. The Contact Identification object is a child object of the Contact object that stores identifiers for a contact, such as email, phone, etc., but it does not store the Loyalty ID. Reference: Data Modeling Requirements for Identity Resolution Identity Resolution in a Data Space Configure Identity Resolution Rulesets Map Required Objects Data and Identity in Data Cloud
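The exact-match idea can be sketched in plain Python: group source profiles by the value of a given identification type, which loosely mirrors how an exact match rule on Party Identification links profiles that share a Loyalty ID. The record shape and field names below are illustrative assumptions, not actual DMO fields.

```python
from collections import defaultdict

# Hypothetical rows shaped like Party Identification records: each carries an
# identification type, a value, and the source profile it belongs to.
party_ids = [
    {"type": "Loyalty ID", "value": "L-1001", "profile": "crm:001"},
    {"type": "Loyalty ID", "value": "L-1001", "profile": "pos:884"},
    {"type": "Loyalty ID", "value": "L-2002", "profile": "crm:002"},
    {"type": "Email", "value": "a@example.com", "profile": "crm:001"},
]

def exact_match_groups(rows, id_type):
    """Group source profiles whose identifier of the given type matches exactly."""
    groups = defaultdict(set)
    for row in rows:
        if row["type"] == id_type:
            groups[row["value"]].add(row["profile"])
    return dict(groups)

groups = exact_match_groups(party_ids, "Loyalty ID")
# "L-1001" links the CRM and POS source profiles into one match group.
```

Because the grouping key is the identifier value for the chosen type only, identifiers of other types (the email row above) play no part in the match.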

Which data model subject area defines the revenue or quantity for an opportunity by product family? The Sales Order subject area defines the details of an order placed by a customer for one or more products or services. It includes information such as the order date, status, amount, quantity, currency, payment method, and delivery method. The Sales Order subject area also allows you to track the revenue or quantity for an opportunity by product family, which is a grouping of products that share common characteristics or features. For example, you can use the Sales Order Line Item DMO to associate each product in an order with its product family, and then use the Sales Order Revenue DMO to calculate the total revenue or quantity for each product family in an opportunity. Sales Order Subject Area, Sales Order Revenue DMO Reference
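As a rough illustration of the rollup described above, the following sketch sums revenue and quantity per product family across order line items. The field names are invented for the example and are not the Sales Order DMO's actual field API names.

```python
from collections import defaultdict

# Hypothetical Sales Order Line Item rows for one opportunity.
line_items = [
    {"product_family": "Tents", "revenue": 500.0, "quantity": 2},
    {"product_family": "Tents", "revenue": 250.0, "quantity": 1},
    {"product_family": "Footwear", "revenue": 120.0, "quantity": 3},
]

def rollup_by_family(items):
    """Total revenue and quantity per product family."""
    totals = defaultdict(lambda: {"revenue": 0.0, "quantity": 0})
    for item in items:
        bucket = totals[item["product_family"]]
        bucket["revenue"] += item["revenue"]
        bucket["quantity"] += item["quantity"]
    return dict(totals)

totals = rollup_by_family(line_items)
# Tents roll up to 750.0 revenue across 3 units; Footwear to 120.0 across 3.
```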

Which configuration supports separate Amazon S3 buckets for data ingestion and activation? To support separate Amazon S3 buckets for data ingestion and activation, you need to configure dedicated S3 data sources in Data Cloud setup. Data sources are used to identify the origin and type of the data that you ingest into Data Cloud. You can create different data sources for each S3 bucket that you want to use for ingestion or activation, and specify the bucket name, region, and access credentials. This way, you can separate and organize your data by different criteria, such as brand, region, product, or business unit. The other options are incorrect because they do not support separate S3 buckets for data ingestion and activation. Multiple S3 connectors are not a valid configuration in Data Cloud setup, as there is only one S3 connector available. Dedicated S3 data sources in activation setup are not a valid configuration either, as activation setup does not require data sources, but activation targets. Separate user credentials for data stream and activation target are not sufficient to support separate S3 buckets, as you also need to specify the bucket name and region for each data source. Data Sources Overview, Amazon S3 Storage Connector, Data Spaces Overview, Data Streams Overview, Data Activation Overview

A customer wants to use the transactional data from their data warehouse in Data Cloud. They are only able to export the data via an SFTP site. How should the file be brought into Data Cloud? The SFTP Connector is a data source connector that allows Data Cloud to ingest data from an SFTP server. The customer can use the SFTP Connector to create a data stream from their exported file and bring it into Data Cloud as a data lake object. The other options are not the best ways to bring the file into Data Cloud because: B. The Cloud Storage Connector is a data source connector that allows Data Cloud to ingest data from cloud storage services such as Amazon S3, Azure Storage, or Google Cloud Storage. The customer does not have their data in any of these services, but only on an SFTP site. C. The Data Import Wizard is a tool that allows users to import data for many standard Salesforce objects, such as accounts, contacts, leads, solutions, and campaign members. It is not designed to import data from an SFTP site or for custom objects in Data Cloud. D. The Data Loader is an application that allows users to insert, update, delete, or export Salesforce records. It is not designed to ingest data from an SFTP site or into Data Cloud. SFTP Connector - Salesforce, Create Data Streams with the SFTP Connector in Data Cloud - Salesforce, Data Import Wizard - Salesforce, Salesforce Data Loader

When performing segmentation or activation, which time zone is used to publish and refresh data? The time zone that is used to publish and refresh data when performing segmentation or activation is D. Time zone set by the Salesforce Data Cloud org. This time zone is the one that is configured in the org settings when Data Cloud is provisioned, and it applies to all users and activities in Data Cloud. This time zone determines when the segments are scheduled to refresh and when the activations are scheduled to publish. Therefore, it is important to consider the time zone difference between the Data Cloud org and the destination systems or channels when planning the segmentation and activation strategies. Salesforce Data Cloud Consultant Exam Guide, Segmentation, Activation
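The practical impact of the org time zone can be seen with a small `zoneinfo` sketch: a refresh scheduled at 02:00 in the Data Cloud org's time zone lands at a different local hour in a destination system. The two zones chosen here are arbitrary examples.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical org and destination time zones.
org_tz = ZoneInfo("America/Los_Angeles")
destination_tz = ZoneInfo("Europe/Paris")

# A segment refresh scheduled for 02:00 org time on 15 January.
refresh_at = datetime(2024, 1, 15, 2, 0, tzinfo=org_tz)
local_at_destination = refresh_at.astimezone(destination_tz)
# 02:00 PST (UTC-8) is 11:00 the same day in Paris (UTC+1 in January),
# so a "nightly" refresh in the org arrives mid-morning downstream.
```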

Cumulus Financial is currently using Data Cloud and ingesting transactional data from its backend system via an S3 Connector in upsert mode. During the initial setup six months ago, the company created a formula field in Data Cloud to create a custom classification. It now needs to update this formula to account for more classifications. What should the consultant keep in mind with regard to formula field updates when using the S3 Connector? A formula field is a field that calculates a value based on other fields or constants. When using the S3 Connector to ingest data from an Amazon S3 bucket, Data Cloud supports creating and updating formula fields on the data lake objects (DLOs) that store the data from the S3 source. However, the formula field updates are not applied immediately, but rather at the next incremental upsert refresh of the data stream. An incremental upsert refresh is a process that adds new records and updates existing records from the S3 source to the DLO based on the primary key field. Therefore, the consultant should keep in mind that the formula field updates will affect both new and existing records, but only after the next incremental upsert refresh of the data stream. The other options are incorrect because Data Cloud does not initiate a full refresh of data from S3, does not update the formula only for new records, and does support formula field updates for data streams of type upsert. Create a Formula Field, Amazon S3 Connection, Data Lake Object

Luxury Retailers created a segment targeting high value customers that it activates through Marketing Cloud for email communication. The company notices that the activated count is smaller than the segment count. What is a reason for this? Data Cloud requires a Contact Point for Marketing Cloud activations, which is a record that links an individual to an email address. This ensures that the individual has given consent to receive email communications and that the email address is valid. If the individual does not have a related Contact Point, they will not be activated in Marketing Cloud. This may result in a lower activated count than the segment count. Data Cloud Activation, Contact Point for Marketing Cloud

Northern Trail Outfitters wants to implement Data Cloud and has several use cases in mind. Which two use cases are considered a good fit for Data Cloud? Choose 2 answers Data Cloud is a data platform that can help customers connect, prepare, harmonize, unify, query, analyze, and act on their data across various Salesforce and external sources. Some of the use cases that are considered a good fit for Data Cloud are: To ingest and unify data from various sources to reconcile customer identity. Data Cloud can help customers bring all their data, whether streaming or batch, into Salesforce and map it to a common data model. Data Cloud can also help customers resolve identities across different channels and sources and create unified profiles of their customers. To use harmonized data to more accurately understand the customer and business impact. Data Cloud can help customers transform and cleanse their data before using it, and enrich it with calculated insights and related attributes. Data Cloud can also help customers create segments and audiences based on their data and activate them in any channel. Data Cloud can also help customers use AI to predict customer behavior and outcomes. The other two options are not use cases that are considered a good fit for Data Cloud. Data Cloud does not provide features to create and orchestrate cross-channel marketing messages, as this is typically handled by other Salesforce solutions such as Marketing Cloud. Data Cloud also does not eliminate the need for separate business intelligence and IT data management tools, as it is designed to work with them and complement their capabilities. Learn How Data Cloud Works About Salesforce Data Cloud Discover Use Cases for the Platform Understand Common Data Analysis Use Cases

What does it mean to build a trust-based, first-party data asset? Building a trust-based, first-party data asset means collecting, managing, and activating data from your own customers and prospects in a way that respects their privacy and preferences. It also means providing them with clear and honest information about how you use their data, what benefits they can expect from sharing their data, and how they can control their data. By doing so, you can create a mutually beneficial relationship with your customers, where they trust you to use their data responsibly and ethically, and you can deliver more relevant and personalized experiences to them. A trust-based, first-party data asset can help you improve customer loyalty, retention, and growth, as well as comply with data protection regulations and standards. Use first-party data for a powerful digital experience, Why first-party data is the key to data privacy, Build a first-party data strategy

What is the result of a segmentation criteria filtering on City | Is Equal To | 'San José'? The Is Equal To operator in segmentation is case-sensitive and respects special characters, meaning that it will only match the exact value that is entered in the filter. Therefore, cities containing 'San Jose', 'san jose', or 'san josé' will not be included in the result, as they do not match the exact value 'San José'. Segmentation Criteria, Segmentation Operators
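The difference between exact matching and a case/accent-insensitive comparison can be demonstrated in a few lines of Python. The `fold` helper below is only for contrast; it is not how the segmentation engine behaves.

```python
import unicodedata

target = "San José"
candidates = ["San José", "San Jose", "san jose", "san josé"]

# Exact comparison: case and accents both matter, so only the
# byte-for-byte identical value matches.
exact_matches = [c for c in candidates if c == target]

def fold(s):
    """Strip accents and lowercase (NOT what Is Equal To does)."""
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch)).lower()

# A folded comparison would treat all four variants as the same city.
loose_matches = [c for c in candidates if fold(c) == fold(target)]
```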

During a privacy law discussion with a customer, the customer indicates they need to honor requests for the right to be forgotten. The consultant determines that Consent API will solve this business need. Which two considerations should the consultant inform the customer about? Choose 2 answers When advising a customer about using the Consent API in Salesforce to comply with requests for the right to be forgotten, the consultant should focus on two primary considerations: Data deletion requests are submitted for Individual profiles (Answer C): The Consent API in Salesforce is designed to handle data deletion requests specifically for individual profiles. This means that when a request is made to delete data, it is targeted at the personal data associated with an individual's profile in the Salesforce system. The consultant should inform the customer that the requests must be specific to individual profiles to ensure accurate processing and compliance with privacy laws. Data deletion requests submitted to Data Cloud are passed to all connected Salesforce clouds (Answer D): When a data deletion request is made through the Consent API in Salesforce Data Cloud, the request is not limited to the Data Cloud alone. Instead, it propagates through all connected Salesforce clouds, such as Sales Cloud, Service Cloud, Marketing Cloud, etc. This ensures comprehensive compliance with the right to be forgotten across the entire Salesforce ecosystem. The customer should be aware that the deletion request will affect all instances of the individual's data across the connected Salesforce environments.

To import campaign members into a campaign in Salesforce CRM, a user wants to export the segment to Amazon S3. The resulting file needs to include the Salesforce CRM Campaign ID in the name. What are two ways to achieve this outcome? Choose 2 answers The two ways to achieve this outcome are A and C. Include campaign identifier in the activation name and include campaign identifier in the filename specification. These two options allow the user to specify the Salesforce CRM Campaign ID in the name of the file that is exported to Amazon S3. The activation name and the filename specification are both configurable settings in the activation wizard, where the user can enter the campaign identifier as a text or a variable. The activation name is used as the prefix of the filename, and the filename specification is used as the suffix of the filename. For example, if the activation name is "Campaign_123" and the filename specification is "{segmentName}_{date}", the resulting file name will be "Campaign_123_SegmentA_2023-12-18.csv". This way, the user can easily identify the file that corresponds to the campaign and import it into Salesforce CRM. The other options are not correct. Option B is incorrect because hard coding the campaign identifier as a new attribute in the campaign activation is not possible. The campaign activation does not have any attributes, only settings. Option D is incorrect because including the campaign identifier in the segment name is not sufficient. The segment name is not used in the filename of the exported file, unless it is specified in the filename specification. Therefore, the user will not be able to see the campaign identifier in the file name.
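The prefix/suffix assembly described above can be mimicked with a simple formatter. The `{segmentName}` and `{date}` placeholders follow the example in the text; the exact template syntax Data Cloud uses may differ, so treat this as a sketch of the naming mechanics only.

```python
from datetime import date

def build_filename(activation_name, filename_spec, segment_name, run_date):
    """Activation name becomes the prefix; the filename spec becomes the suffix."""
    suffix = filename_spec.format(segmentName=segment_name,
                                  date=run_date.isoformat())
    return f"{activation_name}_{suffix}.csv"

# Embedding the campaign identifier in the activation name carries it
# into every exported file name.
filename = build_filename("Campaign_123", "{segmentName}_{date}",
                          "SegmentA", date(2023, 12, 18))
# -> "Campaign_123_SegmentA_2023-12-18.csv"
```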

How can a consultant modify attribute names to match a naming convention in Cloud File Storage targets? A Cloud File Storage target is a type of data action target in Data Cloud that allows sending data to a cloud storage service such as Amazon S3 or Google Cloud Storage. When configuring an activation to a Cloud File Storage target, a consultant can modify the attribute names to match a naming convention by setting preferred attribute names in Data Cloud. Preferred attribute names are aliases that can be used to control the field names in the target file. They can be set for each attribute in the activation configuration, and they will override the default field names from the data model object. The other options are incorrect because they do not affect the field names in the target file. Using a formula field to update the field name in an activation will not change the field name, but only the field value. Updating attribute names in the data stream configuration will not affect the existing data lake objects or data model objects. Updating field names in the data model object will change the field names for all data sources and activations that use the object, which may not be desirable or consistent. Preferred Attribute Name, Create a Data Cloud Activation Target, Cloud File Storage Target

Northern Trail Outfitters wants to be able to calculate each customer's lifetime value (LTV) but also create breakdowns of the revenue sourced by website, mobile app, and retail channels. What should a consultant use to address this use case in Data Cloud? Metrics on metrics is a feature that allows creating new metrics based on existing metrics and applying mathematical operations on them. This can be useful for calculating complex business metrics such as LTV, ROI, or conversion rates. In this case, the consultant can use metrics on metrics to calculate the LTV of each customer by summing up the revenue generated by them across different channels. The consultant can also create breakdowns of the revenue by channel by using the channel attribute as a dimension in the metric definition. Metrics on Metrics, Create Metrics on Metrics

A consultant wants to ensure that every segment managed by multiple brand teams adheres to the same set of exclusion criteria, that are updated on a monthly basis. What is the most efficient option to allow for this capability? The most efficient option to allow for this capability is to create a reusable container block with common criteria. A container block is a segment component that can be reused across multiple segments. A container block can contain any combination of filters, nested segments, and exclusion criteria. A consultant can create a container block with the exclusion criteria that apply to all the segments managed by multiple brand teams, and then add the container block to each segment. This way, the consultant can update the exclusion criteria in one place and have them reflected in all the segments that use the container block. The other options are not the most efficient options to allow for this capability. Creating, publishing, and deploying a data kit is a way to share data and segments across different data spaces, but it does not allow for updating the exclusion criteria on a monthly basis. Creating a nested segment is a way to combine segments using logical operators, but it does not allow for excluding individuals based on specific criteria. Creating a segment and copying it for each brand is a way to create multiple segments with the same exclusion criteria, but it does not allow for updating the exclusion criteria in one place. Create a Container Block Create a Segment in Data Cloud Create and Publish a Data Kit Create a Nested Segment
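The reuse pattern behind a container block is the classic "define the shared criteria once, compose it into every segment" idea, sketched here in plain Python with invented field names.

```python
def shared_exclusions(person):
    """Monthly-maintained exclusion criteria, defined in one place only."""
    return person.get("opted_out") or person.get("bounced")

# Each brand team's segment composes the shared exclusions rather than
# duplicating them; updating shared_exclusions updates every segment.
def brand_a_segment(people):
    return [p for p in people if p["brand"] == "A" and not shared_exclusions(p)]

def brand_b_segment(people):
    return [p for p in people if p["brand"] == "B" and not shared_exclusions(p)]

people = [
    {"id": 1, "brand": "A", "opted_out": False, "bounced": False},
    {"id": 2, "brand": "A", "opted_out": True,  "bounced": False},
    {"id": 3, "brand": "B", "opted_out": False, "bounced": True},
]
```

If the exclusion list were instead copied into each segment, the monthly update would have to be repeated per segment, which is exactly the maintenance burden a container block avoids.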

A customer needs to integrate in real time with Salesforce CRM. Which feature accomplishes this requirement? The correct answer is A. Streaming transforms. Streaming transforms are a feature of Data Cloud that allows real-time data integration with Salesforce CRM. Streaming transforms use the Data Cloud Streaming API to synchronize micro-batches of updates between the CRM data source and Data Cloud in near-real time. Streaming transforms enable Data Cloud to have the most current and accurate CRM data for segmentation and activation. The other options are incorrect for the following reasons: B. Data model triggers. Data model triggers are a feature of Data Cloud that allows custom logic to be executed when data model objects are created, updated, or deleted. Data model triggers do not integrate data with Salesforce CRM, but rather manipulate data within Data Cloud. C. Sales and Service bundle. Sales and Service bundle is a feature of Data Cloud that allows pre-built data streams, data model objects, segments, and activations for Sales Cloud and Service Cloud data sources. Sales and Service bundle does not integrate data in real time with Salesforce CRM, but rather ingests data at scheduled intervals. D. Data actions and Lightning web components. Data actions and Lightning web components are features of Data Cloud that allow custom user interfaces and workflows to be built and embedded in Salesforce applications. Data actions and Lightning web components do not integrate data with Salesforce CRM, but rather display and interact with data within Salesforce applications.
Load Data into Data Cloud, Data Streams in Data Cloud, Data Model Triggers in Data Cloud (Trailhead), Sales and Service Bundle in Data Cloud (Trailhead), Data Actions and Lightning Web Components in Data Cloud (Trailhead), Data Model in Data Cloud (Trailhead), Create a Data Model Object (Salesforce Help), Data Sources in Data Cloud (Trailhead), Connect and Ingest Data in Data Cloud (Salesforce Help), Data Spaces in Data Cloud (Trailhead), Create a Data Space (Salesforce Help), Segments in Data Cloud (Trailhead), Create a Segment (Salesforce Help), Activations in Data Cloud (Trailhead), Create an Activation (Salesforce Help)

A user wants to be able to create a multi-dimensional metric to identify unified individual lifetime value (LTV). Which sequence of data model object (DMO) joins is necessary within the calculated Insight to enable this calculation? To create a multi-dimensional metric to identify unified individual lifetime value (LTV), the sequence of data model object (DMO) joins that is necessary within the calculated Insight is Unified Individual > Unified Link Individual > Sales Order. This is because the Unified Individual DMO represents the unified profile of an individual or entity that is created by identity resolution. The Unified Link Individual DMO represents the link between a unified individual and an individual from a source system. The Sales Order DMO represents the sales order information from a source system. By joining these three DMOs, you can calculate the LTV of a unified individual based on the sales order data from different source systems. The other options are incorrect because they do not join the correct DMOs to enable the LTV calculation. Option B is incorrect because the Individual DMO represents the source profile of an individual or entity from a source system, not the unified profile. Option C is incorrect because the join order is reversed, and you need to start with the Unified Individual DMO to identify the unified profile. Option D is incorrect because it is missing the Unified Link Individual DMO, which is needed to link the unified profile with the source profile. Unified Individual Data Model Object, Unified Link Individual Data Model Object, Sales Order Data Model Object, Individual Data Model Object
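The join sequence Unified Individual > Unified Link Individual > Sales Order can be approximated with dictionaries: the link table maps each source individual to its unified profile, and order amounts are then summed per unified profile. Field names are illustrative, not DMO field API names.

```python
unified_individuals = [{"unified_id": "U1"}, {"unified_id": "U2"}]

# Unified Link Individual: unified profile -> source individual.
unified_links = [
    {"unified_id": "U1", "source_individual_id": "crm:001"},
    {"unified_id": "U1", "source_individual_id": "pos:884"},
    {"unified_id": "U2", "source_individual_id": "crm:002"},
]

# Sales Order: source individual -> order amount.
sales_orders = [
    {"source_individual_id": "crm:001", "amount": 100.0},
    {"source_individual_id": "pos:884", "amount": 50.0},
    {"source_individual_id": "crm:002", "amount": 75.0},
]

def lifetime_value():
    """Sum order amounts per unified profile via the link table."""
    source_to_unified = {l["source_individual_id"]: l["unified_id"]
                         for l in unified_links}
    ltv = {u["unified_id"]: 0.0 for u in unified_individuals}
    for order in sales_orders:
        unified = source_to_unified.get(order["source_individual_id"])
        if unified is not None:
            ltv[unified] += order["amount"]
    return ltv

ltv = lifetime_value()
# U1 aggregates orders from both of its source systems (CRM and POS).
```

Skipping the link table (option D in the question) would leave no way to attribute the POS order to the same unified profile as the CRM orders, which is why the middle join is required.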

Cumulus Financial created a segment called Multiple Investments that contains individuals who have invested in two or more mutual funds. The company plans to send an email to this segment regarding a new mutual fund offering, and wants to personalize the email content with information about each customer's current mutual fund investments. How should the Data Cloud consultant configure this activation? To personalize the email content with information about each customer's current mutual fund investments, the Data Cloud consultant needs to add related attributes to the activation. Related attributes are additional data fields that can be sent along with the segment to the target system for personalization or analysis purposes. In this case, the consultant needs to add the Fund Name attribute, which contains the name of the mutual fund that the customer has invested in, and apply a filter for Fund Type equal to "Mutual Fund" to ensure that only relevant data is sent. The other options are not correct because: A. Including Fund Type equal to "Mutual Fund" as a related attribute is not enough to personalize the email content. The consultant also needs to include the Fund Name attribute, which contains the specific name of the mutual fund that the customer has invested in. C. Adding related attribute Fund Type is not enough to personalize the email content. The consultant also needs to add the Fund Name attribute, which contains the specific name of the mutual fund that the customer has invested in, and apply a filter for Fund Type equal to "Mutual Fund" to ensure that only relevant data is sent. D. Including Fund Name and Fund Type by default for post processing in the target system is not a valid option. The consultant needs to add the related attributes and filters during the activation configuration in Data Cloud, not after the data is sent to the target system.
Add Related Attributes to an Activation - Salesforce, Related Attributes in Activation - Salesforce, Prepare for Your Salesforce Data Cloud Consultant Credential

A consultant is integrating an Amazon S3 activated campaign with the customer's destination system. In order for the destination system to find the metadata about the segment, which file on the S3 will contain this information for processing? The file on the Amazon S3 that will contain the metadata about the segment for processing is B. The JSON file. The JSON file is a metadata file that is generated along with the CSV file when a segment is activated to Amazon S3. The JSON file contains information such as the segment name, the segment ID, the segment size, the segment attributes, the segment filters, and the segment schedule. The destination system can use this file to identify the segment and its properties, and to match the segment data with the corresponding fields in the destination system. Salesforce Data Cloud Consultant Exam Guide, Amazon S3 Activation
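A destination system would parse that metadata sidecar with any JSON library. The key names below are assumptions about the file's shape for illustration, not the exact schema Data Cloud emits.

```python
import json

# Hypothetical metadata sidecar written next to the activated CSV.
sidecar = json.loads("""
{
  "segmentName": "HighValueCustomers",
  "segmentId": "seg-001",
  "recordCount": 1200,
  "attributes": ["Email", "FirstName", "LastName"]
}
""")

# The destination system can use the attribute list to map CSV columns
# onto its own (here, lowercased) field names.
column_mapping = {attr: attr.lower() for attr in sidecar["attributes"]}
```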

A customer notices that their consolidation rate has recently increased. They contact the consultant to ask why. What are two likely explanations for the increase? Choose 2 answers The consolidation rate is a metric that measures the amount by which source profiles are combined to produce unified profiles in Data Cloud, calculated as 1 - (number of unified profiles / number of source profiles). A higher consolidation rate means that more source profiles are matched and merged into fewer unified profiles, while a lower consolidation rate means that fewer source profiles are matched and more unified profiles are created. There are two likely explanations for why the consolidation rate has recently increased for a customer: New data sources have been added to Data Cloud that largely overlap with the existing profiles. This means that the new data sources contain many profiles that are similar or identical to the profiles from the existing data sources. For example, if a customer adds a new CRM system that has the same customer records as their old CRM system, the new data source will overlap with the existing one. When Data Cloud ingests the new data source, it will use the identity resolution ruleset to match and merge the overlapping profiles into unified profiles, resulting in a higher consolidation rate. Identity resolution rules have been added to the ruleset to increase the number of matched profiles. This means that the customer has modified their identity resolution ruleset to include more match rules or more match criteria that can identify more profiles as belonging to the same individual. For example, if a customer adds a match rule that matches profiles based on email address and phone number, instead of just email address, the ruleset will be able to match more profiles that have the same email address and phone number, resulting in a higher consolidation rate. 
Reference: Identity Resolution Calculated Insight: Consolidation Rates for Unified Profiles, Configure Identity Resolution Rulesets
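The formula is easy to sanity-check in code; the numbers below reuse the 100-source/80-unified example from earlier.

```python
def consolidation_rate(source_profiles, unified_profiles):
    """1 - (unified / source), expressed as a percentage."""
    return (1 - unified_profiles / source_profiles) * 100

# 100 source records merged into 80 unified profiles: about 20%.
rate_before = consolidation_rate(100, 80)

# More overlapping sources or more match rules -> fewer unified
# profiles from the same sources -> a higher rate (about 40% here).
rate_after = consolidation_rate(100, 60)
```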

A client wants to bring in loyalty data from a custom object in Salesforce CRM that contains a point balance for accrued hotel points and airline points within the same record. The client wants to split these point systems into two separate records for better tracking and processing. What should a consultant recommend in this scenario? Batch transforms are a feature that allows creating new data lake objects based on existing data lake objects and applying transformations on them. This can be useful for splitting, merging, or reshaping data to fit the data model or business requirements. In this case, the consultant can use batch transforms to create a second data lake object that contains only the airline points from the original loyalty data object. The original object can be modified to contain only the hotel points. This way, the client can have two separate records for each point system and track and process them accordingly. Batch Transforms, Create a Batch Transform
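The split the batch transform performs can be pictured as turning each combined row into two per-system rows. Field names are invented for illustration.

```python
# Each source row carries both balances in one record.
loyalty_rows = [
    {"member_id": "M1", "hotel_points": 1200, "airline_points": 340},
    {"member_id": "M2", "hotel_points": 0, "airline_points": 90},
]

def split_points(rows):
    """Produce one hotel-points row and one airline-points row per member."""
    hotel, airline = [], []
    for row in rows:
        hotel.append({"member_id": row["member_id"],
                      "point_type": "Hotel", "balance": row["hotel_points"]})
        airline.append({"member_id": row["member_id"],
                        "point_type": "Airline", "balance": row["airline_points"]})
    return hotel, airline

hotel_rows, airline_rows = split_points(loyalty_rows)
# Two separate record sets, one per point system, keyed by member.
```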

A segment fails to refresh with the error "Segment references too many data lake objects (DLOs)". Which two troubleshooting tips should help remedy this issue? Choose 2 answers The error "Segment references too many data lake objects (DLOs)" occurs when a segment query exceeds the limit of 50 DLOs that can be referenced in a single query. This can happen when the segment has too many filters, nested segments, or exclusion criteria that involve different DLOs. To remedy this issue, the consultant can try the following troubleshooting tips: Split the segment into smaller segments. The consultant can divide the segment into multiple segments that have fewer filters, nested segments, or exclusion criteria. This can reduce the number of DLOs that are referenced in each segment query and avoid the error. The consultant can then use the smaller segments as nested segments in a larger segment, or activate them separately. Use calculated insights in order to reduce the complexity of the segmentation query. The consultant can create calculated insights that are derived from existing data using formulas. Calculated insights can simplify the segmentation query by replacing multiple filters or nested segments with a single attribute. For example, instead of using multiple filters to segment individuals based on their purchase history, the consultant can create a calculated insight that calculates the lifetime value of each individual and use that as a filter. The other options are not troubleshooting tips that can help remedy this issue. Refining segmentation criteria to limit up to five custom data model objects (DMOs) is not a valid option, as the limit of 50 DLOs applies to both standard and custom DMOs. Spacing out the segment schedules to reduce DLO load is not a valid option, as the error is not related to the DLO load, but to the segment query complexity. Troubleshoot Segment Errors Create a Calculated Insight Create a Segment in Data Cloud

An organization wants to enable users with the ability to identify and select text attributes from a picklist of options. Which Data Cloud feature should help with this use case? Value suggestion is a Data Cloud feature that allows users to see and select the possible values for a text field when creating segment filters. Value suggestion can be enabled or disabled for each data model object (DMO) field in the DMO record home. Value suggestion can help users to identify and select text attributes from a picklist of options, without having to type or remember the exact values. Value suggestion can also reduce errors and improve data quality by ensuring consistent and valid values for the segment filters. Reference: Use Value Suggestions in Segmentation, Considerations for Selecting Related Attributes

A consultant is working in a customer's Data Cloud org and is asked to delete the existing identity resolution ruleset. Which two impacts should the consultant communicate as a result of this action? Choose 2 answers Deleting an identity resolution ruleset has two major impacts that the consultant should communicate to the customer. First, it will permanently remove all unified customer data that was created by the ruleset, meaning that the unified profiles and their attributes will no longer be available in Data Cloud. Second, it will eliminate dependencies on data model objects that were used by the ruleset, meaning that the data model objects can be modified or deleted without affecting the ruleset. These impacts can have significant consequences for the customer's data quality, segmentation, activation, and analytics, so the consultant should advise the customer to carefully consider the implications of deleting a ruleset before proceeding. The other options are incorrect because they are not impacts of deleting a ruleset. Option A is incorrect because deleting a ruleset will not remove all individual data, but only the unified customer data. The individual data from the source systems will still be available in Data Cloud. Option D is incorrect because deleting a ruleset will not remove all source profile data, but only the unified customer data. The source profile data from the data streams will still be available in Data Cloud. Delete an Identity Resolution Ruleset

Northern Trail Outfitters uploads new customer data to an Amazon S3 Bucket on a daily basis to be ingested in Data Cloud. In what order should each process be run to ensure that freshly imported data is ready and available to use for any segment? To ensure that freshly imported data from an Amazon S3 Bucket is ready and available to use for any segment, the following processes should be run in this order: Refresh Data Stream: This process updates the data lake objects in Data Cloud with the latest data from the source system. It can be configured to run automatically or manually, depending on the data stream settings. Refreshing the data stream ensures that Data Cloud has the most recent and accurate data from the Amazon S3 Bucket. Identity Resolution: This process creates unified individual profiles by matching and consolidating source profiles from different data streams based on the identity resolution ruleset. It runs daily by default, but can be triggered manually as well. Identity resolution ensures that Data Cloud has a single view of each customer across different data sources. Calculated Insight: This process performs calculations on data lake objects or CRM data and returns a result as a new data object. It can be used to create metrics or measures for segmentation or analysis purposes. Calculated insights ensure that Data Cloud has the derived data that can be used for personalization or activation. 1: Configure Data Stream Refresh and Frequency - Salesforce 2: Identity Resolution Ruleset Processing Results - Salesforce 3: Calculated Insights – Salesforce
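The ordering above can be sketched as a simple dependency chain. The function names below are hypothetical placeholders, not real Data Cloud API calls; the point is only that each step depends on the output of the one before it:

```python
# Illustrative sketch of the required daily ordering; step names are
# placeholders, not Data Cloud APIs.
def run_daily_pipeline(steps):
    """Run each step in order and record the order of execution."""
    executed = []
    for step in steps:
        step()
        executed.append(step.__name__)
    return executed

def refresh_data_stream():         # 1. pull the latest S3 file into the data lake
    pass

def run_identity_resolution():     # 2. rebuild unified profiles from fresh source data
    pass

def refresh_calculated_insight():  # 3. recompute the metrics that segments use
    pass

order = run_daily_pipeline(
    [refresh_data_stream, run_identity_resolution, refresh_calculated_insight]
)
```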

Data Cloud receives a nightly file of all ecommerce transactions from the previous day. Several segments and activations depend upon calculated insights from the updated data in order to maintain accuracy in the customer's scheduled campaign messages. What should the consultant do to ensure the ecommerce data is ready for use for each of the scheduled activations? The best option that the consultant should do to ensure the ecommerce data is ready for use for each of the scheduled activations is A. Use Flow to trigger a change data event on the ecommerce data to refresh calculated insights and segments before the activations are scheduled to run. This option allows the consultant to use the Flow feature of Data Cloud, which enables automation and orchestration of data processing tasks based on events or schedules. Flow can be used to trigger a change data event on the ecommerce data, which is a type of event that indicates that the data has been updated or changed. This event can then trigger the refresh of the calculated insights and segments that depend on the ecommerce data, ensuring that they reflect the latest data. The refresh of the calculated insights and segments can be completed before the activations are scheduled to run, ensuring that the customer's scheduled campaign messages are accurate and relevant. The other options are not as good as option A. Option B is incorrect because setting a refresh schedule for the calculated insights to occur every hour may not be sufficient or efficient. The refresh schedule may not align with the activation schedule, resulting in outdated or inconsistent data. The refresh schedule may also consume more resources and time than necessary, as the ecommerce data may not change every hour. Option C is incorrect because ensuring the activations are set to Incremental Activation and automatically publish every hour may not solve the problem.
Incremental Activation is a feature that allows only the new or changed records in a segment to be activated, reducing the activation time and size. However, this feature does not ensure that the segment data is updated or refreshed based on the ecommerce data. The activation schedule may also not match the ecommerce data update schedule, resulting in inaccurate or irrelevant campaign messages. Option D is incorrect because ensuring the segments are set to Rapid Publish and set to refresh every hour may not be optimal or effective. Rapid Publish is a feature that allows segments to be published faster by skipping some validation steps, such as checking for duplicate records or invalid values. However, this feature may compromise the quality or accuracy of the segment data, and may not be suitable for all use cases. The refresh schedule may also have the same issues as option B, as it may not sync with the ecommerce data update schedule or the activation schedule, resulting in outdated or inconsistent data. Salesforce Data Cloud Consultant Exam Guide, Flow, Change Data Events, Calculated Insights, Segments, [Activation]

Which two requirements must be met for a calculated insight to appear in the segmentation canvas? Choose 2 answers A calculated insight is a custom metric or measure that is derived from one or more data model objects or data lake objects in Data Cloud. A calculated insight can be used in segmentation to filter or group the data based on the calculated value. However, not all calculated insights can appear in the segmentation canvas. There are two requirements that must be met for a calculated insight to appear in the segmentation canvas: The calculated insight must contain a dimension including the Individual or Unified Individual Id. A dimension is a field that can be used to categorize or group the data, such as name, gender, or location. The Individual or Unified Individual Id is a unique identifier for each individual profile in Data Cloud. The calculated insight must include this dimension to link the calculated value to the individual profile and to enable segmentation based on the individual profile attributes. The primary key of the segmented table must be a dimension in the calculated insight. The primary key is a field that uniquely identifies each record in a table. The segmented table is the table that contains the data that is being segmented, such as the Customer or the Order table. The calculated insight must include the primary key of the segmented table as a dimension to ensure that the calculated value is associated with the correct record in the segmented table and to avoid duplication or inconsistency in the segmentation results. Create a Calculated Insight, Use Insights in Data Cloud, Segmentation
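The two requirements above can be expressed as a simple check. This is an illustrative sketch in plain Python, not Data Cloud's own validation logic; the dimension and key names are hypothetical:

```python
# Illustrative check of the two canvas requirements: the insight's
# dimensions must include the (Unified) Individual Id, and the segmented
# table's primary key must also be among those dimensions.
def can_appear_in_canvas(insight_dimensions, segmented_table_pk):
    has_individual_id = any(
        d in ("IndividualId", "UnifiedIndividualId") for d in insight_dimensions
    )
    has_table_pk = segmented_table_pk in insight_dimensions
    return has_individual_id and has_table_pk

can_appear_in_canvas(["UnifiedIndividualId"], "UnifiedIndividualId")  # True
can_appear_in_canvas(["ProductCategory"], "UnifiedIndividualId")      # False: no Id dimension
```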

A customer requests that their personal data be deleted. Which action should the consultant take to accommodate this request in Data Cloud? The Data Rights Subject Request tool is a feature that allows Data Cloud users to manage customer requests for data access, deletion, or portability. The tool provides a user interface and an API to create, track, and fulfill data rights requests. The tool also generates a report that contains the customer's personal data and the actions taken to comply with the request. The consultant should use this tool to accommodate the customer's request for data deletion in Data Cloud. Data Rights Subject Request Tool, Create a Data Rights Subject Request

What does the Ignore Empty Value option do in identity resolution? The Ignore Empty Value option in identity resolution allows customers to ignore empty fields when running reconciliation rules. Reconciliation rules are used to determine the final value of an attribute for a unified individual profile, based on the values from different sources. The Ignore Empty Value option can be set to true or false for each attribute in a reconciliation rule. If set to true, the reconciliation rule will skip any source that has an empty value for that attribute and move on to the next source in the priority order. If set to false, the reconciliation rule will consider any source that has an empty value for that attribute as a valid source and use it to populate the attribute value for the unified individual profile. The other options are not correct descriptions of what the Ignore Empty Value option does in identity resolution. The Ignore Empty Value option does not affect the custom match rules or the standard match rules, which are used to identify and link individuals across different sources based on their attributes. The Ignore Empty Value option also does not ignore individual object records with empty fields when running identity resolution rules, as identity resolution rules operate on the attribute level, not the record level. Data Cloud Identity Resolution Reconciliation Rule Input Configure Identity Resolution Rulesets Data and Identity in Data Cloud
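The skip-versus-accept behavior described above can be sketched as source-priority reconciliation with an ignore-empty flag. This is illustrative logic only, not Data Cloud code, and the source and field names are made up:

```python
# Hedged sketch of reconciliation over sources in priority order, with an
# "ignore empty value" flag (illustrative, not Data Cloud internals).
def reconcile(sources_in_priority_order, attr, ignore_empty=True):
    """Return the attribute value from the highest-priority source,
    optionally skipping sources whose value is empty."""
    for source in sources_in_priority_order:
        value = source.get(attr)
        if ignore_empty and not value:
            continue  # skip the empty value, fall through to the next source
        return value
    return None

sources = [
    {"name": "CRM", "phone": ""},               # highest priority, but empty
    {"name": "Marketing", "phone": "555-0100"},
]

reconcile(sources, "phone", ignore_empty=True)   # '555-0100' (empty source skipped)
reconcile(sources, "phone", ignore_empty=False)  # '' (empty value wins by priority)
```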

Northern Trail Outfitters (NTO) is configuring an identity resolution ruleset based on Fuzzy Name and Normalized Email. What should NTO do to ensure the best email address is activated? NTO is using Fuzzy Name and Normalized Email as match rules to link together data from different sources into a unified individual profile. However, there might be cases where the same email address is available from more than one source, and NTO needs to decide which one to use for activation. For example, if Rachel has the same email address in Service Cloud and Marketing Cloud, but prefers to receive communications from NTO via Marketing Cloud, NTO needs to ensure that the email address from Marketing Cloud is activated. To do this, NTO can use the source priority order in activations, which allows them to rank the data sources in order of preference for activation. By placing Marketing Cloud higher than Service Cloud in the source priority order, NTO can make sure that the email address from Marketing Cloud is delivered to the activation target, such as an email campaign or a journey. This way, NTO can respect Rachel's preference and deliver a better customer experience.Reference: Configure Activations, Use Source Priority Order in Activations

A customer wants to create segments of users based on their Customer Lifetime Value. However, the source data that will be brought into Data Cloud does not include that key performance indicator (KPI). Which sequence of steps should the consultant follow to achieve this requirement? To create segments of users based on their Customer Lifetime Value (CLV), the sequence of steps that the consultant should follow is Ingest Data > Map Data to Data Model > Create Calculated Insight > Use in Segmentation. This is because the first step is to ingest the source data into Data Cloud using data streams. The second step is to map the source data to the data model, which defines the structure and attributes of the data. The third step is to create a calculated insight, which is a derived attribute that is computed based on the source or unified data. In this case, the calculated insight would be the CLV, which can be calculated using a formula or a query based on the sales order data. The fourth step is to use the calculated insight in segmentation, which is the process of creating groups of individuals or entities based on their attributes and behaviors. By using the CLV calculated insight, the consultant can segment the users by their predicted revenue from the lifespan of their relationship with the brand. The other options are incorrect because they do not follow the correct sequence of steps to achieve the requirement. Option B is incorrect because it is not possible to create a calculated insight before ingesting and mapping the data, as the calculated insight depends on the data model objects. Option C is incorrect because it is not possible to create a calculated insight before mapping the data, as the calculated insight depends on the data model objects. Option D is incorrect because it is not recommended to create a calculated insight before mapping the data, as the calculated insight may not reflect the correct data model structure and attributes. 
Reference: Data Streams Overview, Data Model Objects Overview, Calculated Insights Overview, Calculating Customer Lifetime Value (CLV) With Salesforce, [Segmentation Overview]
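The four-step sequence can be sketched end to end in plain Python. All field names are illustrative, and a real calculated insight would be defined inside Data Cloud rather than computed client-side:

```python
# Hedged sketch of Ingest -> Map -> Calculated Insight -> Segmentation.
raw_rows = [  # 1. Ingest: rows as they arrive from the source system
    {"cust": "a", "order_total": "100.00"},
    {"cust": "a", "order_total": "250.00"},
    {"cust": "b", "order_total": "75.00"},
]

# 2. Map to the data model: normalize field names and types
mapped = [{"IndividualId": r["cust"], "Amount": float(r["order_total"])}
          for r in raw_rows]

# 3. Calculated insight: CLV here is total historical revenue per individual
clv = {}
for row in mapped:
    clv[row["IndividualId"]] = clv.get(row["IndividualId"], 0.0) + row["Amount"]

# 4. Segmentation: filter on the derived KPI
high_clv_segment = [i for i, v in clv.items() if v >= 300]  # ['a']
```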

During discovery, which feature should a consultant highlight for a customer who has multiple data sources and needs to match and reconcile data about individuals into a single unified profile? Identity resolution is the feature that allows Data Cloud to match and reconcile data about individuals from multiple data sources into a single unified profile. Identity resolution uses rulesets to define how source profiles are matched and consolidated based on common attributes, such as name, email, phone, or party identifier. Identity resolution enables Data Cloud to create a 360-degree view of each customer across different data sources and systems. The other options are not the best features to highlight for this customer need because: A . Data cleansing is the process of detecting and correcting errors or inconsistencies in data, such as duplicates, missing values, or invalid formats. Data cleansing can improve the quality and accuracy of data, but it does not match or reconcile data across different data sources. B . Harmonization is the process of standardizing and transforming data from different sources into a common format and structure. Harmonization can enable data integration and interoperability, but it does not match or reconcile data across different data sources. C . Data consolidation is the process of combining data from different sources into a single data set or system. Data consolidation can reduce data redundancy and complexity, but it does not match or reconcile data across different data sources. 1: Data and Identity in Data Cloud | Salesforce Trailhead, 2: Data Cloud Identity Resolution | Salesforce AI Research, 3: [Data Cleansing - Salesforce], 4: [Harmonization - Salesforce], 5: [Data Consolidation - Salesforce]

Northern Trail Outfitters (NTO) wants to send a promotional campaign for customers that have purchased within the past 6 months. The consultant created a segment to meet this requirement. Now, NTO brings an additional requirement to suppress customers who have made purchases within the last week. What should the consultant use to remove the recent customers? The consultant should use B. Segmentation exclude rules to remove the recent customers. Segmentation exclude rules are filters that can be applied to a segment to exclude records that meet certain criteria. The consultant can use segmentation exclude rules to exclude customers who have made purchases within the last week from the segment that contains customers who have purchased within the past 6 months. This way, the segment will only include customers who are eligible for the promotional campaign. The other options are not correct. Option A is incorrect because batch transforms are data processing tasks that can be applied to data streams or data lake objects to modify or enrich the data. Batch transforms are not used for segmentation or activation. Option C is incorrect because related attributes are attributes that are derived from the relationships between data model objects. Related attributes are not used for excluding records from a segment. Option D is incorrect because streaming insights are derived attributes that are calculated at the time of data ingestion. Streaming insights are not used for excluding records from a segment. Salesforce Data Cloud Consultant Exam Guide, Segmentation, Segmentation Exclude Rules
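Conceptually, an exclude rule behaves like a set difference between the included audience and the excluded audience. A minimal sketch (Data Cloud evaluates this server-side; the ids are made up):

```python
# Include rule: purchased within the past 6 months
purchased_last_6_months = {"c1", "c2", "c3", "c4"}

# Exclude rule: purchased within the last week
purchased_last_week = {"c2", "c4"}

# The activated audience is the include set minus the exclude set
campaign_audience = purchased_last_6_months - purchased_last_week  # {'c1', 'c3'}
```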

A new user of Data Cloud only needs to be able to review individual rows of ingested data and validate that it has been modeled successfully to its linked data model object. The user will also need to make changes if required. What is the minimum permission set needed to accommodate this use case? The Data Cloud User permission set is the minimum permission set needed to accommodate this use case. The Data Cloud User permission set grants access to the Data Explorer feature, which allows the user to review individual rows of ingested data and validate that it has been modeled successfully to its linked data model object. The user can also make changes to the data model object fields, such as adding or removing fields, changing field types, or creating formula fields. The Data Cloud User permission set does not grant access to other Data Cloud features or tasks, such as creating data streams, creating segments, creating activations, or managing users. The other permission sets are either too restrictive or too permissive for this use case. The Data Cloud for Marketing Specialist permission set only grants access to the segmentation and activation features, but not to the Data Explorer feature. The Data Cloud Admin permission set grants access to all Data Cloud features and tasks, including the Data Explorer feature, but it is more than what the user needs. The Data Cloud for Marketing Data Aware Specialist permission set grants access to the Data Explorer feature, but also to the segmentation and activation features, which are not required for this use case. Data Cloud Standard Permission Sets, Data Explorer, Set Up Data Cloud Unit

Which data stream category should be assigned to use the data for time-based operations in segmentation and calculated insights? Data streams are the sources of data that are ingested into Data Cloud and mapped to the data model. Data streams have different categories that determine how the data is processed and used in Data Cloud. Transaction data streams are used for time-based operations in segmentation and calculated insights, such as filtering by date range, aggregating by time period, or calculating time-to- event metrics. Transaction data streams are typically used for event data, such as purchases, clicks, or visits, that have a timestamp and a value associated with them. Data Streams, Data Stream Categories

Which data model subject area should be used for any Organization, Individual, or Member in the Customer 360 data model? The data model subject area that should be used for any Organization, Individual, or Member in the Customer 360 data model is the Party subject area. The Party subject area defines the entities that are involved in any business transaction or relationship, such as customers, prospects, partners, suppliers, etc. The Party subject area contains the following data model objects (DMOs): Organization: A DMO that represents a legal entity or a business unit, such as a company, a department, a branch, etc. Individual: A DMO that represents a person, such as a customer, a contact, a user, etc. Member: A DMO that represents the relationship between an individual and an organization, such as an employee, a customer, a partner, etc. The other options are not data model subject areas that should be used for any Organization, Individual, or Member in the Customer 360 data model. The Engagement subject area defines the actions that people take, such as clicks, views, purchases, etc. The Membership subject area defines the associations that people have with groups, such as loyalty programs, clubs, communities, etc. The Global Account subject area defines the hierarchical relationships between organizations, such as parent-child, subsidiary, etc. Data Model Subject Areas Party Subject Area Customer 360 Data Model

Which method should a consultant use when performing aggregations in windows of 15 minutes on data collected via the Interaction SDK or Mobile SDK? Streaming insight is a method that allows you to perform aggregations in windows of 15 minutes on data collected via the Interaction SDK or Mobile SDK. Streaming insight is a feature that enables you to create real-time metrics and insights based on streaming data from various sources, such as web, mobile, or IoT devices. Streaming insight allows you to define aggregation rules, such as count, sum, average, min, max, or percentile, and apply them to streaming data in time windows of 15 minutes. For example, you can use streaming insight to calculate the number of visitors, the average session duration, or the conversion rate for your website or app in 15-minute intervals. Streaming insight also allows you to visualize and explore the aggregated data in dashboards, charts, or tables. Streaming Insight, Create Streaming Insights
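Fixed 15-minute window aggregation like the above can be sketched by flooring each event timestamp to its window start. This is plain Python for illustration; real streaming insights are defined declaratively inside Data Cloud:

```python
# Hedged sketch of fixed 15-minute window counting over event timestamps.
from collections import Counter
from datetime import datetime

def window_start(ts, minutes=15):
    """Floor a timestamp to the start of its fixed-size window."""
    return ts.replace(minute=(ts.minute // minutes) * minutes,
                      second=0, microsecond=0)

events = [
    datetime(2024, 1, 1, 10, 2),   # falls in the 10:00 window
    datetime(2024, 1, 1, 10, 14),  # falls in the 10:00 window
    datetime(2024, 1, 1, 10, 16),  # falls in the 10:15 window
]

counts_per_window = Counter(window_start(e) for e in events)
```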

Northern Trail Outfitters is using the Marketing Cloud Starter Data Bundles to bring Marketing Cloud data into Data Cloud. What are two of the available datasets in Marketing Cloud Starter Data Bundles? Choose 2 answers The Marketing Cloud Starter Data Bundles are predefined data bundles that allow you to easily ingest data from Marketing Cloud into Data Cloud. The available datasets in Marketing Cloud Starter Data Bundles are Email, MobileConnect, and MobilePush. These datasets contain engagement events and metrics from different Marketing Cloud channels, such as email, SMS, and push notifications. By using these datasets, you can enrich your Data Cloud data model with Marketing Cloud data and create segments and activations based on your marketing campaigns and journeys. The other options are incorrect because they are not available datasets in Marketing Cloud Starter Data Bundles. Option A is incorrect because Personalization is not a dataset, but a feature of Marketing Cloud that allows you to tailor your content and messages to your audience. Option C is incorrect because Loyalty Management is not a dataset, but a product of Marketing Cloud that allows you to create and manage loyalty programs for your customers. Marketing Cloud Starter Data Bundles in Data Cloud, Connect Your Data Sources, Personalization in Marketing Cloud, Loyalty Management in Marketing Cloud

A customer has a custom Customer Email__c object related to the standard Contact object in Salesforce CRM. This custom object stores the email address of a Contact that they want to use for activation. To which data entity should it be mapped? The Contact Point_Email object is the data entity that represents an email address associated with an individual in Data Cloud. It is part of the Customer 360 Data Model, which is a standardized data model that defines common entities and relationships for customer data. The Contact Point_Email object can be mapped to any custom or standard object that stores email addresses in Salesforce CRM, such as the custom Customer Email__c object. The other options are not the correct data entities to map to because: A . The Contact object is the data entity that represents a person who is associated with an account that is a customer, partner, or competitor in Salesforce CRM. It is not the data entity that represents an email address in Data Cloud. C . The custom Customer Email__c object is not a data entity in Data Cloud, but a custom object in Salesforce CRM. It can be mapped to a data entity in Data Cloud, such as the Contact Point_Email object, but it is not a data entity itself. D . The Individual object is the data entity that represents a unique person in Data Cloud. It is the core entity for managing consent and privacy preferences, and it can be related to one or more contact points, such as email addresses, phone numbers, or social media handles. It is not the data entity that represents an email address in Data Cloud. Customer 360 Data Model: Individual and Contact Points - Salesforce, Contact Point_Email | Object Reference for the Salesforce Platform | Salesforce Developers, [Contact | Object Reference for the Salesforce Platform | Salesforce Developers], [Individual | Object Reference for the Salesforce Platform | Salesforce Developers]

During discovery, which feature should a consultant highlight for a customer who has multiple data sources and needs to match and reconcile data about individuals into a single unified profile? The feature that the consultant should highlight for a customer who has multiple data sources and needs to match and reconcile data about individuals into a single unified profile is D. Identity Resolution. Identity Resolution is the process of identifying, matching, and reconciling data about individuals across different data sources and creating a unified profile that represents a single view of the customer. Identity Resolution uses various methods and rules to determine the best match and reconciliation of data, such as deterministic matching, probabilistic matching, reconciliation rules, and identity graphs. Identity Resolution enables the customer to have a complete and accurate understanding of their customers and their interactions across different channels and touchpoints. Salesforce Data Cloud Consultant Exam Guide, Identity Resolution

Cumulus Financial uses Data Cloud to segment banking customers and activate them for direct mail via a Cloud File Storage activation. The company also wants to analyze individuals who have been in the segment within the last 2 years. Which Data Cloud component allows for this? The segment membership data model object is a Data Cloud component that allows for analyzing individuals who have been in a segment within a certain time period. The segment membership data model object is a table that stores the information about which individuals belong to which segments and when they were added or removed from the segments. This object can be used to create calculated insights, such as segment size, segment duration, segment overlap, or segment retention, that can help measure the effectiveness of segmentation and activation strategies. The segment membership data model object can also be used to create nested segments or segment exclusions based on the segment membership criteria, such as segment name, segment type, or segment date range. The other options are not correct because they are not Data Cloud components that allow for analyzing individuals who have been in a segment within the last 2 years. Nested segments and segment exclusions are features that allow for creating more complex segments based on existing segments, but they do not provide the historical data about segment membership. Calculated insights are custom metrics or measures that are derived from data model objects or data lake objects, but they do not store the segment membership information by themselves. Segment Membership Data Model Object, Create a Calculated Insight, Create a Nested Segment
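A two-year lookback over membership history amounts to a date filter on the membership records. The field names below are illustrative stand-ins for the segment membership object's attributes:

```python
# Hedged sketch of a 2-year lookback over segment membership history.
from datetime import datetime, timedelta

memberships = [
    {"IndividualId": "u1", "SegmentName": "DirectMail", "JoinedOn": datetime(2023, 6, 1)},
    {"IndividualId": "u2", "SegmentName": "DirectMail", "JoinedOn": datetime(2020, 1, 15)},
]

# Approximate 2-year cutoff relative to the analysis date
cutoff = datetime(2024, 1, 1) - timedelta(days=730)

recent_members = [m["IndividualId"] for m in memberships
                  if m["SegmentName"] == "DirectMail" and m["JoinedOn"] >= cutoff]
# only u1 joined within the lookback window
```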

Every day, Northern Trail Outfitters uploads a summary of the last 24 hours of store transactions to a new file in an Amazon S3 bucket, and files older than seven days are automatically deleted. Each file contains a timestamp in a standardized naming convention. Which two options should a consultant configure when ingesting this data stream? Choose 2 answers. When ingesting data from an Amazon S3 bucket, the consultant should configure the following options: The refresh mode should be set to "Upsert", which means that new and updated records will be added or updated in Data Cloud, while existing records will be preserved. This ensures that the data is always up to date and consistent with the source. The filename should contain a wildcard to accommodate the timestamp, which means that the file name pattern should include a variable part that matches the timestamp format. For example, if the file name is store_transactions_2023-12-18.csv, the wildcard could be store_transactions_*.csv. This ensures that the ingestion process can identify and process the correct file every day. The other options are not necessary or relevant for this scenario: Deletion of old files is a feature of the Amazon S3 bucket, not the Data Cloud ingestion process. Data Cloud does not delete any files from the source, nor does it require the source files to be deleted after ingestion. Full Refresh is a refresh mode that deletes all existing records in Data Cloud and replaces them with the records from the source file. This is not suitable for this scenario, as it would result in data loss and inconsistency, especially if the source file only contains the summary of the last 24 hours of transactions. Ingest Data from Amazon S3, Refresh Modes
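The wildcard behavior can be illustrated with Python's fnmatch, which uses the same shell-style `*` pattern as the example above (the file names are taken from the example; fnmatch here stands in for Data Cloud's own matching):

```python
# Illustrative wildcard matching against daily timestamped file names.
from fnmatch import fnmatch

pattern = "store_transactions_*.csv"  # configured file name with a wildcard

files = [
    "store_transactions_2023-12-18.csv",
    "store_transactions_2023-12-19.csv",
    "inventory_2023-12-19.csv",
]

matched = [f for f in files if fnmatch(f, pattern)]
# the two transaction files match; the inventory file does not
```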

Which solution provides an easy way to ingest Marketing Cloud subscriber profile attributes into Data Cloud on a daily basis? The solution that provides an easy way to ingest Marketing Cloud subscriber profile attributes into Data Cloud on a daily basis is the Marketing Cloud Data Extension Data Stream. The Marketing Cloud Data Extension Data Stream is a feature that allows customers to stream data from Marketing Cloud data extensions to Data Cloud data spaces. Customers can select which data extensions they want to stream, and Data Cloud will automatically create and update the corresponding data model objects (DMOs) in the data space. Customers can also map the data extension fields to the DMO attributes using a user interface or an API. The Marketing Cloud Data Extension Data Stream can help customers ingest subscriber profile attributes and other data from Marketing Cloud into Data Cloud without writing any code or setting up any complex integrations. The other options are not solutions that provide an easy way to ingest Marketing Cloud subscriber profile attributes into Data Cloud on a daily basis. Automation Studio and Profile file API are tools that can be used to export data from Marketing Cloud to external systems, but they require customers to write scripts, configure file transfers, and schedule automations. Marketing Cloud Connect API is an API that can be used to access data from Marketing Cloud in other Salesforce solutions, such as Sales Cloud or Service Cloud, but it does not support streaming data to Data Cloud. Email Studio Starter Data Bundle is a data kit that contains sample data and segments for Email Studio, but it does not contain subscriber profile attributes or stream data to Data Cloud. Marketing Cloud Data Extension Data Stream Data Cloud Data Ingestion [Marketing Cloud Data Extension Data Stream API] [Marketing Cloud Connect API] [Email Studio Starter Data Bundle]

A customer has a requirement to be able to view the last time each segment was published within their Data Cloud org.Which two features should the consultant recommend to best address this requirement? Choose 2 answers A customer who wants to view the last time each segment was published within their Data Cloud org can use the dashboard and report features to achieve this requirement. A dashboard is a visual representation of data that can show key metrics, trends, and comparisons. A report is a tabular or matrix view of data that can show details, summaries, and calculations. Both dashboard and report features allow the user to create, customize, and share data views based on their needs and preferences. To view the last time each segment was published, the user can create a dashboard or a report that shows the segment name, the publish date, and the publish status fields from the segment object. The user can also filter, sort, group, or chart the data by these fields to get more insights and analysis. The user can also schedule, refresh, or export the dashboard or report data as needed. Dashboards, Reports

Which information is provided in a .csv file when activating to Amazon S3? When activating to Amazon S3, the information that is provided in a .csv file is the activated data payload. The activated data payload is the data that is sent from Data Cloud to the activation target, which in this case is an Amazon S3 bucket. The activated data payload contains the attributes and values of the individuals or entities that are included in the segment that is being activated. The activated data payload can be used for various purposes, such as marketing, sales, service, or analytics. The other options are incorrect because they are not provided in a .csv file when activating to Amazon S3. Option A is incorrect because an audit log is not provided in a .csv file, but it can be viewed in the Data Cloud UI under the Activation History tab. Option C is incorrect because the metadata regarding the segment definition is not provided in a .csv file, but it can be viewed in the Data Cloud UI under the Segmentation tab. Option D is incorrect because the manifest of origin sources within Data Cloud is not provided in a .csv file, but it can be viewed in the Data Cloud UI under the Data Sources tab. Data Activation Overview, Create and Activate Segments in Data Cloud, Data Activation Use Cases, View Activation History, Segmentation Overview, [Data Sources Overview]

Which operator should a consultant use to create a segment for a birthday campaign that is evaluated daily?
The consultant should use the Is Anniversary Of operator. This operator compares a date field with the current date and returns true if the month and day match, regardless of the year. For example, if the date field is 1990-01-01 and the current date is 2023-01-01, the operator returns true. The segment therefore includes every customer whose birthday falls on the current date, and it is refreshed daily with the new birthdays. The other options are not suitable for this purpose:
A. The Is Today operator compares a date field with the current date and returns true only if the full date, including the year, matches. If the date field is 1990-01-01 and the current date is 2023-01-01, it returns false. It would only include customers born on the same day and year as the current date, which is very unlikely.
B. The Is Birthday operator is not a valid operator in Data Cloud; it is not available in the segment canvas or the calculated insight editor.
C. The Is Between operator compares a date field with a range of dates and returns true if the date falls within the range, endpoints included. For example, if the date field is 1990-01-01 and the range is 2022-12-25 to 2023-01-05, it returns true. It would only include customers whose birthday falls within a fixed range of dates, and the segment would not update daily with new birthdays.
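The month-and-day comparison behind the Is Anniversary Of operator can be sketched in a few lines of Python (a simplified illustration of the matching logic, not Data Cloud's actual implementation):

```python
from datetime import date

def is_anniversary_of(field_date: date, today: date) -> bool:
    """True when month and day match, ignoring the year."""
    return (field_date.month, field_date.day) == (today.month, today.day)

# A birthday of 1990-01-01 matches any January 1st, whatever the year.
print(is_anniversary_of(date(1990, 1, 1), date(2023, 1, 1)))  # True
print(is_anniversary_of(date(1990, 1, 1), date(2023, 1, 2)))  # False
```

Note that a real date operator also has to decide how to treat February 29 birthdays in non-leap years; this sketch simply never matches them outside leap years.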

Luxury Retailers created a segment targeting high-value customers that it activates through Marketing Cloud for email communication. The company notices that the activated count is smaller than the segment count. What is a reason for this?
The reason is A: Data Cloud enforces the presence of a Contact Point for Marketing Cloud activations, so an individual without a related Contact Point is not activated. A Contact Point is a data model object that represents a channel or method of communication with an individual, such as email, phone, or social media. For Marketing Cloud activations, Data Cloud requires that the individual have a related Contact Point of type Email containing a valid email address. If that Contact Point is missing or invalid, the individual is not activated and does not receive the email communication. The activated count can therefore be lower than the segment count, depending on how many individuals in the segment have a valid email Contact Point. Salesforce Data Cloud Consultant Exam Guide, Contact Point, Marketing Cloud Activation
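The count discrepancy can be illustrated with a small sketch (hypothetical data; in Data Cloud the check is against the individual's related email Contact Point record, modeled here as a simple email field):

```python
# Hypothetical segment members; only some have a valid email contact point.
segment = [
    {"id": "IND-001", "email": "a@example.com"},
    {"id": "IND-002", "email": None},  # no email contact point
    {"id": "IND-003", "email": "c@example.com"},
]

# A Marketing Cloud activation drops members lacking an email contact point.
activated = [m for m in segment if m["email"]]

print(len(segment), len(activated))  # 3 2
```

The segment count (3) exceeds the activated count (2) precisely because one member has no usable email Contact Point.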

A Data Cloud consultant recently added a new data source and mapped some of the data to a new custom data model object (DMO) that they want to use for creating segments. However, they cannot view the newly created DMO when trying to create a new segment. What is the cause of this issue?
The cause is that the new custom DMO is not of category Profile. A category is a property of a DMO that defines its purpose and functionality in Data Cloud, and there are three categories: Profile, Event, and Other. Profile DMOs store attributes of individuals or entities, such as name, email, and address; Event DMOs store actions or interactions, such as purchases, clicks, and visits; Other DMOs store any remaining data that fits neither category, such as products, locations, or categories. Only Profile DMOs can be used for creating segments, because segments are based on the attributes of individuals or entities. If the new custom DMO is not of category Profile, it will not appear in the segmentation canvas. The other options are not the cause: data ingestion is not a prerequisite for creating segments, since segments can be created from the data model schema without actual data; the new DMO does not need a relationship to the Individual DMO, since segments can be created on any Profile DMO regardless of its relationships; and segmentation is not limited to the Individual and Unified Individual DMOs, since any Profile DMO, including custom ones, can be segmented on. Create a Custom Data Model Object from an Existing Data Model Object, Create a Segment in Data Cloud, Data Model Object Category

Cumulus Financial wants to segregate Salesforce CRM Account data based on Country for its Data Cloud users. What should the consultant do to accomplish this?
Data spaces are a feature that allows Data Cloud users to create subsets of data based on filters and permissions. Data spaces can segregate data by criteria such as geography, business unit, or product line. In this case, the consultant should use the data spaces feature and apply a filter on the Account data lake object based on Country, so that Data Cloud users can access only the Account data belonging to their respective countries. Data Spaces, Create a Data Space