A customer is concerned that the consolidation rate displayed for identity resolution is quite low compared to their initial estimates. Which configuration change should a consultant consider in order to increase the consolidation rate? The consolidation rate is the degree to which source profiles are combined to produce unified profiles, calculated as 1 - (number of unified individuals / number of source individuals). For example, if you ingest 100 source records and create 80 unified profiles, your consolidation rate is 20%. To increase the consolidation rate, you need to increase the number of matches between source profiles, which can be done by adding more match rules. Match rules define the criteria for matching source profiles based on their attributes. By adding match rules, you increase the chances of finding matches between source profiles and thus raise the consolidation rate. By contrast, changing reconciliation rules does not affect the consolidation rate, because reconciliation only determines which attribute values appear on the unified profile; requiring additional attributes within a match rule or reducing the number of match rules tends to decrease the consolidation rate, since both reduce the number of matches and therefore leave more unified profiles.
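As a quick check of the arithmetic above, here is a minimal Python sketch (the function is ours for illustration, not a Data Cloud API):

```python
def consolidation_rate(source_individuals: int, unified_individuals: int) -> float:
    """Consolidation rate = 1 - (unified / source), expressed as a percentage."""
    if source_individuals <= 0:
        raise ValueError("source_individuals must be positive")
    return (1 - unified_individuals / source_individuals) * 100

# The example from the explanation: 100 source records merged into 80 unified profiles.
print(consolidation_rate(100, 80))  # 20.0 -> a 20% consolidation rate
```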
Which two common use cases can be addressed with Data Cloud? Choose 2 answers Data Cloud is a data platform that can help customers connect, prepare, harmonize, unify, query, analyze, and act on their data across various Salesforce and external sources. Some of the common use cases that can be addressed with Data Cloud are: Understand and act upon customer data to drive more relevant experiences. Data Cloud can help customers gain a 360-degree view of their customers by unifying data from different sources and resolving identities across channels. Data Cloud can also help customers segment their audiences, create personalized experiences, and activate data in any channel using insights and AI. Harmonize data from multiple sources with a standardized and extendable data model. Data Cloud can help customers transform and cleanse their data before using it, and map it to a common data model that can be extended and customized. Data Cloud can also help customers create calculated insights and related attributes to enrich their data and optimize identity resolution. The other two options are not common use cases for Data Cloud. Data Cloud does not provide data governance or backup and disaster recovery features, as these are typically handled by other Salesforce or external solutions. Learn How Data Cloud Works, About Salesforce Data Cloud, Discover Use Cases for the Platform, Understand Common Data Analysis Use Cases
A Data Cloud customer wants to adjust their identity resolution rules to increase their accuracy of matches. Rather than matching on email address, they want to review a rule that joins their CRM Contacts with their Marketing Contacts, where both use the CRM ID as their primary key. Which two steps should the consultant take to address this new use case? Choose 2 answers To address this new use case, the consultant should map the primary key from the two systems to Party Identification, using CRM ID as the identification name for both, and create a matching rule based on party identification that matches on CRM ID as the party identification name. This way, the consultant can ensure that the CRM Contacts and Marketing Contacts are matched based on their CRM ID, which is a unique identifier for each individual. By using Party Identification, the consultant can also leverage the benefits of this attribute, such as being able to match across different entities and sources, and being able to handle multiple values for the same individual. The other options are incorrect because they either do not use the CRM ID as the primary key, or they do not use Party Identification as the attribute type. Configure Identity Resolution Rulesets, Identity Resolution Match Rules, Data Cloud Identity Resolution Ruleset, Data Cloud Identity Resolution Config Input
What does the Source Sequence reconciliation rule do in identity resolution? The Source Sequence reconciliation rule sets the priority of specific data sources when building attributes in a unified profile, such as a first or last name. This rule allows you to define which data source should be used as the primary source of truth for each attribute, and which data sources should be used as fallbacks in case the primary source is missing or invalid. For example, you can set the Source Sequence rule to use data from Salesforce CRM as the first priority, data from Marketing Cloud as the second priority, and data from Google Analytics as the third priority for the first name attribute. This way, the unified profile will use the first name value from Salesforce CRM if it exists, otherwise it will use the value from Marketing Cloud, and so on. This rule helps you to ensure the accuracy and consistency of the unified profile attributes across different data sources. Reference: Salesforce Data Cloud Consultant Exam Guide, Identity Resolution, Reconciliation Rules
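To make the fallback behavior concrete, here is a minimal Python sketch of the source-priority logic described above (illustrative only; reconciliation actually runs inside Data Cloud, and the source names are simply those from the example):

```python
def reconcile_by_source_sequence(values_by_source, priority):
    """Return the first non-empty value, walking sources in priority order."""
    for source in priority:
        value = values_by_source.get(source)
        if value:  # missing or empty values fall through to the next source
            return value
    return None

first_name = reconcile_by_source_sequence(
    {"Salesforce CRM": None, "Marketing Cloud": "Rachel", "Google Analytics": "R."},
    priority=["Salesforce CRM", "Marketing Cloud", "Google Analytics"],
)
print(first_name)  # "Rachel": the CRM value is missing, so Marketing Cloud wins
```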
What should an organization use to stream inventory levels from an inventory management system into Data Cloud in a fast and scalable, near-real-time way? The Ingestion API is a RESTful API that allows you to stream data from any source into Data Cloud in a fast and scalable way. You can use the Ingestion API to send data from your inventory management system into Data Cloud as JSON objects, and then use Data Cloud to create data models, segments, and insights based on your inventory data. The Ingestion API supports both batch and streaming modes, and can handle up to 100,000 records per second. The Ingestion API also provides features such as data validation, encryption, compression, and retry mechanisms to ensure data quality and security. Ingestion API Developer Guide, Ingest Data into Data Cloud
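As a rough illustration of a streaming call, consider this Python sketch using the requests library. The tenant host, source name, object name, field names, and token below are all placeholders; the exact endpoint and authentication flow should be taken from the Ingestion API Developer Guide for your org:

```python
import requests

# Placeholders -- the real values come from your Ingestion API connector setup.
TENANT_ENDPOINT = "https://<your-tenant>.c360a.salesforce.com"
SOURCE_API_NAME = "inventory_connector"  # hypothetical Ingestion API source
OBJECT_NAME = "inventory_levels"         # hypothetical object in the connector schema
ACCESS_TOKEN = "<data-cloud-access-token>"

payload = {
    "data": [
        {"sku": "TENT-001", "warehouse": "DFW", "on_hand": 42,
         "as_of": "2024-01-15T08:30:00Z"},
    ]
}

resp = requests.post(
    f"{TENANT_ENDPOINT}/api/v1/ingest/sources/{SOURCE_API_NAME}/{OBJECT_NAME}",
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()  # a success status indicates the records were accepted
```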
Northern Trail Outfitters (NTO), an outdoor lifestyle clothing brand, recently started a new line of business. The new business specializes in gourmet camping food. For business reasons as well as security reasons, it's important to NTO to keep all Data Cloud data separated by brand. Which capability best supports NTO's desire to separate its data by brand? Data spaces are logical containers that allow you to separate and organize your data by different criteria, such as brand, region, product, or business unit. Data spaces can help you manage data access, security, and governance, as well as enable cross-cloud data integration and activation. For NTO, data spaces can support their desire to separate their data by brand, so that they can have different data models, rules, and insights for their outdoor lifestyle clothing and gourmet camping food businesses. Data spaces can also help NTO comply with any data privacy and security regulations that may apply to their different brands. The other options are incorrect because they do not provide the same level of data separation and organization as data spaces. Data streams are used to ingest data from different sources into Data Cloud, but they do not separate the data by brand. Data model objects are used to define the structure and attributes of the data, but they do not isolate the data by brand. Data sources are used to identify the origin and type of the data, but they do not partition the data by brand. Data Spaces Overview, Create Data Spaces, Data Privacy and Security in Data Cloud, Data Streams Overview, Data Model Objects Overview, Data Sources Overview
Cumulus Financial uses Service Cloud as its CRM and stores mobile phone, home phone, and work phone as three separate fields for its customers on the Contact record. The company plans to use Data Cloud and ingest the Contact object via the CRM Connector. What is the most efficient approach that a consultant should take when ingesting this data to ensure all the different phone numbers are properly mapped and available for use in activation? The most efficient approach that a consultant should take when ingesting this data to ensure all the different phone numbers are properly mapped and available for use in activation is B. Ingest the Contact object and use streaming transforms to normalize the phone numbers from the Contact data stream into a separate Phone data lake object (DLO) that contains three rows, and then map this new DLO to the Contact Point Phone data map object. This approach allows the consultant to use the streaming transforms feature of Data Cloud, which enables data manipulation and transformation at the time of ingestion, without requiring any additional processing or storage. Streaming transforms can be used to normalize the phone numbers from the Contact data stream, such as removing spaces, dashes, or parentheses, and adding country codes if needed. The normalized phone numbers can then be stored in a separate Phone DLO, which can have one row for each phone number type (work, home, mobile). The Phone DLO can then be mapped to the Contact Point Phone data map object, which is a standard object that represents a phone number associated with a contact point. This way, the consultant can ensure that all the phone numbers are available for activation, such as sending SMS messages or making calls to the customers. The other options are not as efficient as option B. Option A is incorrect because it does not normalize the phone numbers, which may cause issues with activation or identity resolution. Option C is incorrect because it requires creating a calculated insight, which is an additional step that consumes more resources and time than streaming transforms. Option D is incorrect because it requires creating formula fields in the Contact data stream, which may not be supported by the CRM Connector or may cause conflicts with the existing fields in the Contact object. Salesforce Data Cloud Consultant Exam Guide, Data Ingestion and Modeling, Streaming Transforms, Contact Point Phone
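To illustrate the kind of normalization such a transform performs, here is a Python sketch of the logic only (Data Cloud streaming transforms are configured in the product, not in Python, and the +1 default country code is an assumption):

```python
import re

def normalize_phone(raw, default_country_code="1"):
    """Strip spaces, dashes, and parentheses; prepend a country code if missing."""
    if not raw:
        return None
    digits = re.sub(r"[^\d+]", "", raw)  # keep digits and a leading +
    if digits.startswith("+"):
        return digits
    if len(digits) == 10:                # assume a national number
        return f"+{default_country_code}{digits}"
    return f"+{digits}"

# One Contact row fans out into the Phone DLO: one row per populated phone type.
contact = {"mobile": "(512) 555-0100", "home": "512-555-0101", "work": None}
phone_rows = [
    {"type": phone_type, "number": normalize_phone(number)}
    for phone_type, number in contact.items()
    if number
]
print(phone_rows)  # [{'type': 'mobile', 'number': '+15125550100'}, ...]
```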
What is Data Cloud's primary value to customers? Data Cloud is a platform that enables you to activate all your customer data across Salesforce applications and other systems. Data Cloud allows you to create a unified profile of each customer by ingesting, transforming, and linking data from various sources, such as CRM, marketing, commerce, service, and external data providers. Data Cloud also provides insights and analytics on customer behavior, preferences, and needs, as well as tools to segment, target, and personalize customer interactions. Data Cloud's primary value to customers is to provide a unified view of a customer and their related data, which can help you deliver better customer experiences, increase loyalty, and drive growth. Salesforce Data Cloud, When Data Creates Competitive Advantage
During an implementation project, a consultant completed ingestion of all data streams for their customer. Prior to segmenting and acting on that data, which additional configuration is required? After ingesting data from different sources into Data Cloud, the additional configuration that is required before segmenting and acting on that data is Identity Resolution. Identity Resolution is the process of matching and reconciling source profiles from different data sources and creating unified profiles that represent a single individual or entity. Identity Resolution enables you to create a 360-degree view of your customers and prospects, and to segment and activate them based on their attributes and behaviors. To configure Identity Resolution, you need to create and deploy a ruleset that defines the match rules and reconciliation rules for your data. The other options are incorrect because they are not required before segmenting and acting on the data. Data Activation is the process of sending data from Data Cloud to other Salesforce clouds or external destinations for marketing, sales, or service purposes. Calculated Insights are derived attributes that are computed based on the source or unified data, such as lifetime value, churn risk, or product affinity. Data Mapping is the process of mapping source attributes to unified attributes in the data model. These configurations can be done after segmenting and acting on the data, or in parallel with Identity Resolution, but they are not prerequisites for it. Identity Resolution Overview, Segment and Activate Data in Data Cloud, Configure Identity Resolution Rulesets, Data Activation Overview, Calculated Insights Overview, Data Mapping Overview
A consultant is discussing the benefits of Data Cloud with a customer that has multiple disjointed data sources. Which two functional areas should the consultant highlight in relation to managing customer data? Choose 2 answers Data Cloud is an open and extensible data platform that enables smarter, more efficient AI with secure access to first-party and industry data. Two functional areas that the consultant should highlight in relation to managing customer data are: Data Harmonization: Data Cloud harmonizes data from multiple sources and formats into a common schema, enabling a single source of truth for customer data. Data Cloud also applies data quality rules and transformations to ensure data accuracy and consistency. Unified Profiles: Data Cloud creates unified profiles of customers and prospects by linking data across different identifiers, such as email, phone, cookie, and device ID. Unified profiles provide a holistic view of customer behavior, preferences, and interactions across channels and touchpoints. The other options are not correct because: Master Data Management: Master Data Management (MDM) is a process of creating and maintaining a single, consistent, and trusted source of master data, such as product, customer, supplier, or location data. Data Cloud does not provide MDM functionality, but it can integrate with MDM solutions to enrich customer data. Data Marketplace: Data Marketplace is a feature of Data Cloud that allows users to discover, access, and activate data from third-party providers, such as demographic, behavioral, and intent data. Data Marketplace is not a functional area related to managing customer data, but rather a source of external data that can enhance customer data. Salesforce Data Cloud, Data Harmonization for Data Cloud, Unified Profiles for Data Cloud, What is Master Data Management?, Integrate Data Cloud with Master Data Management, Data Marketplace for Data Cloud
A retailer wants to unify profiles using Loyalty ID which is different than the unique ID of their customers. Which object should the consultant use in identity resolution to perform exact match rules on the Loyalty ID? The Party Identification object is the correct object to use in identity resolution to perform exact match rules on the Loyalty ID. The Party Identification object is a child object of the Individual object that stores different types of identifiers for an individual, such as email, phone, loyalty ID, social media handle, etc. Each identifier has a type, a value, and a source. The consultant can use the Party Identification object to create a match rule that compares the Loyalty ID type and value across different sources and links the corresponding individuals. The other options are not correct objects to use in identity resolution to perform exact match rules on the Loyalty ID. The Loyalty Identification object does not exist in Data Cloud. The Individual object is the parent object that represents a unified profile of an individual, but it does not store the Loyalty ID directly. The Contact Identification object is a child object of the Contact object that stores identifiers for a contact, such as email, phone, etc., but it does not store the Loyalty ID. Reference: Data Modeling Requirements for Identity Resolution, Identity Resolution in a Data Space, Configure Identity Resolution Rulesets, Map Required Objects, Data and Identity in Data Cloud
Which data model subject area defines the revenue or quantity for an opportunity by product family? The Sales Order subject area defines the details of an order placed by a customer for one or more products or services. It includes information such as the order date, status, amount, quantity, currency, payment method, and delivery method. The Sales Order subject area also allows you to track the revenue or quantity for an opportunity by product family, which is a grouping of products that share common characteristics or features. For example, you can use the Sales Order Line Item DMO to associate each product in an order with its product family, and then use the Sales Order Revenue DMO to calculate the total revenue or quantity for each product family in an opportunity. Reference: Sales Order Subject Area, Sales Order Revenue DMO
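As an illustration of the roll-up described above, here is a pandas sketch with made-up column names rather than the actual DMO field names:

```python
import pandas as pd

# Hypothetical order line items already joined to their product family.
line_items = pd.DataFrame([
    {"opportunity_id": "006A", "product_family": "Tents",  "revenue": 1200.0, "quantity": 2},
    {"opportunity_id": "006A", "product_family": "Stoves", "revenue": 300.0,  "quantity": 3},
    {"opportunity_id": "006A", "product_family": "Tents",  "revenue": 600.0,  "quantity": 1},
])

# Total revenue and quantity per product family within each opportunity.
summary = line_items.groupby(["opportunity_id", "product_family"])[["revenue", "quantity"]].sum()
print(summary)
```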
Which configuration supports separate Amazon S3 buckets for data ingestion and activation? To support separate Amazon S3 buckets for data ingestion and activation, you need to configure dedicated S3 data sources in Data Cloud setup. Data sources are used to identify the origin and type of the data that you ingest into Data Cloud. You can create different data sources for each S3 bucket that you want to use for ingestion or activation, and specify the bucket name, region, and access credentials. This way, you can separate and organize your data by different criteria, such as brand, region, product, or business unit. The other options are incorrect because they do not support separate S3 buckets for data ingestion and activation. Multiple S3 connectors are not a valid configuration in Data Cloud setup, as there is only one S3 connector available. Dedicated S3 data sources in activation setup are not a valid configuration either, as activation setup does not require data sources, but activation targets. Separate user credentials for data stream and activation target are not sufficient to support separate S3 buckets, as you also need to specify the bucket name and region for each data source. Data Sources Overview, Amazon S3 Storage Connector, Data Spaces Overview, Data Streams Overview, Data Activation Overview
A customer wants to use the transactional data from their data warehouse in Data Cloud. They are only able to export the data via an SFTP site. How should the file be brought into Data Cloud? The SFTP Connector is a data source connector that allows Data Cloud to ingest data from an SFTP server. The customer can use the SFTP Connector to create a data stream from their exported file and bring it into Data Cloud as a data lake object. The other options are not the best ways to bring the file into Data Cloud because: B. The Cloud Storage Connector is a data source connector that allows Data Cloud to ingest data from cloud storage services such as Amazon S3, Azure Storage, or Google Cloud Storage. The customer does not have their data in any of these services, but only on an SFTP site. C. The Data Import Wizard is a tool that allows users to import data for many standard Salesforce objects, such as accounts, contacts, leads, solutions, and campaign members. It is not designed to import data from an SFTP site or for custom objects in Data Cloud. D. Data Loader is an application that allows users to insert, update, delete, or export Salesforce records. It is not designed to ingest data from an SFTP site or into Data Cloud. SFTP Connector - Salesforce, Create Data Streams with the SFTP Connector in Data Cloud - Salesforce, Data Import Wizard - Salesforce, Salesforce Data Loader
Cumulus Financial is currently using Data Cloud and ingesting transactional data from its backend system via an S3 Connector in upsert mode. During the initial setup six months ago, the company created a formula field in Data Cloud to create a custom classification. It now needs to update this formula to account for more classifications. What should the consultant keep in mind with regard to formula field updates when using the S3 Connector? A formula field is a field that calculates a value based on other fields or constants. When using the S3 Connector to ingest data from an Amazon S3 bucket, Data Cloud supports creating and updating formula fields on the data lake objects (DLOs) that store the data from the S3 source. However, the formula field updates are not applied immediately, but rather at the next incremental upsert refresh of the data stream. An incremental upsert refresh is a process that adds new records and updates existing records from the S3 source to the DLO based on the primary key field. Therefore, the consultant should keep in mind that the formula field updates will affect both new and existing records, but only after the next incremental upsert refresh of the data stream. The other options are incorrect because Data Cloud does not initiate a full refresh of data from S3, does not update the formula only for new records, and does support formula field updates for data streams of type upsert. Create a Formula Field, Amazon S3 Connection, Data Lake Object
Northern Trail Outfitters wants to implement Data Cloud and has several use cases in mind. Which two use cases are considered a good fit for Data Cloud? Choose 2 answers Data Cloud is a data platform that can help customers connect, prepare, harmonize, unify, query, analyze, and act on their data across various Salesforce and external sources. Some of the use cases that are considered a good fit for Data Cloud are: To ingest and unify data from various sources to reconcile customer identity. Data Cloud can help customers bring all their data, whether streaming or batch, into Salesforce and map it to a common data model. Data Cloud can also help customers resolve identities across different channels and sources and create unified profiles of their customers. To use harmonized data to more accurately understand the customer and business impact. Data Cloud can help customers transform and cleanse their data before using it, and enrich it with calculated insights and related attributes. Data Cloud can also help customers create segments and audiences based on their data and activate them in any channel. Data Cloud can also help customers use AI to predict customer behavior and outcomes. The other two options are not use cases that are considered a good fit for Data Cloud. Data Cloud does not provide features to create and orchestrate cross-channel marketing messages, as this is typically handled by other Salesforce solutions such as Marketing Cloud. Data Cloud also does not eliminate the need for separate business intelligence and IT data management tools, as it is designed to work with them and complement their capabilities. Learn How Data Cloud Works, About Salesforce Data Cloud, Discover Use Cases for the Platform, Understand Common Data Analysis Use Cases
What does it mean to build a trust-based, first-party data asset? Building a trust-based, first-party data asset means collecting, managing, and activating data from your own customers and prospects in a way that respects their privacy and preferences. It also means providing them with clear and honest information about how you use their data, what benefits they can expect from sharing their data, and how they can control their data. By doing so, you can create a mutually beneficial relationship with your customers, where they trust you to use their data responsibly and ethically, and you can deliver more relevant and personalized experiences to them. A trust-based, first-party data asset can help you improve customer loyalty, retention, and growth, as well as comply with data protection regulations and standards. Use first-party data for a powerful digital experience, Why first-party data is the key to data privacy, Build a first-party data strategy
During a privacy law discussion with a customer, the customer indicates they need to honor requests for the right to be forgotten. The consultant determines that Consent API will solve this business need. Which two considerations should the consultant inform the customer about? Choose 2 answers When advising a customer about using the Consent API in Salesforce to comply with requests for the right to be forgotten, the consultant should focus on two primary considerations: Data deletion requests are submitted for Individual profiles (Answer C): The Consent API in Salesforce is designed to handle data deletion requests specifically for individual profiles. This means that when a request is made to delete data, it is targeted at the personal data associated with an individual's profile in the Salesforce system. The consultant should inform the customer that the requests must be specific to individual profiles to ensure accurate processing and compliance with privacy laws. Data deletion requests submitted to Data Cloud are passed to all connected Salesforce clouds (Answer D): When a data deletion request is made through the Consent API in Salesforce Data Cloud, the request is not limited to the Data Cloud alone. Instead, it propagates through all connected Salesforce clouds, such as Sales Cloud, Service Cloud, Marketing Cloud, etc. This ensures comprehensive compliance with the right to be forgotten across the entire Salesforce ecosystem. The customer should be aware that the deletion request will affect all instances of the individual's data across the connected Salesforce environments.
A consultant wants to ensure that every segment managed by multiple brand teams adheres to the same set of exclusion criteria, that are updated on a monthly basis. What is the most efficient option to allow for this capability? The most efficient option to allow for this capability is to create a reusable container block with common criteria. A container block is a segment component that can be reused across multiple segments. A container block can contain any combination of filters, nested segments, and exclusion criteria. A consultant can create a container block with the exclusion criteria that apply to all the segments managed by multiple brand teams, and then add the container block to each segment. This way, the consultant can update the exclusion criteria in one place and have them reflected in all the segments that use the container block. The other options are not the most efficient options to allow for this capability. Creating, publishing, and deploying a data kit is a way to share data and segments across different data spaces, but it does not allow for updating the exclusion criteria on a monthly basis. Creating a nested segment is a way to combine segments using logical operators, but it does not allow for excluding individuals based on specific criteria. Creating a segment and copying it for each brand is a way to create multiple segments with the same exclusion criteria, but it does not allow for updating the exclusion criteria in one place. Create a Container Block, Create a Segment in Data Cloud, Create and Publish a Data Kit, Create a Nested Segment
Cumulus Financial created a segment called Multiple Investments that contains individuals who have invested in two or more mutual funds. The company plans to send an email to this segment regarding a new mutual fund offering, and wants to personalize the email content with information about each customer's current mutual fund investments. How should the Data Cloud consultant configure this activation? To personalize the email content with information about each customer's current mutual fund investments, the Data Cloud consultant needs to add related attributes to the activation. Related attributes are additional data fields that can be sent along with the segment to the target system for personalization or analysis purposes. In this case, the consultant needs to add the Fund Name attribute, which contains the name of the mutual fund that the customer has invested in, and apply a filter for Fund Type equal to "Mutual Fund" to ensure that only relevant data is sent. The other options are not correct because: A. Including Fund Type equal to "Mutual Fund" as a related attribute is not enough to personalize the email content. The consultant also needs to include the Fund Name attribute, which contains the specific name of the mutual fund that the customer has invested in. C. Adding related attribute Fund Type is not enough to personalize the email content. The consultant also needs to add the Fund Name attribute, which contains the specific name of the mutual fund that the customer has invested in, and apply a filter for Fund Type equal to "Mutual Fund" to ensure that only relevant data is sent. D. Including Fund Name and Fund Type by default for post processing in the target system is not a valid option. The consultant needs to add the related attributes and filters during the activation configuration in Data Cloud, not after the data is sent to the target system. Add Related Attributes to an Activation - Salesforce, Related Attributes in Activation - Salesforce, Prepare for Your Salesforce Data Cloud Consultant Credential
A customer notices that their consolidation rate has recently increased. They contact the consultant to ask why. What are two likely explanations for the increase? Choose 2 answers The consolidation rate is a metric that measures the amount by which source profiles are combined to produce unified profiles in Data Cloud, calculated as 1 - (number of unified profiles / number of source profiles). A higher consolidation rate means that more source profiles are matched and merged into fewer unified profiles, while a lower consolidation rate means that fewer source profiles are matched and more unified profiles are created. There are two likely explanations for why the consolidation rate has recently increased for a customer: New data sources have been added to Data Cloud that largely overlap with the existing profiles. This means that the new data sources contain many profiles that are similar or identical to the profiles from the existing data sources. For example, if a customer adds a new CRM system that has the same customer records as their old CRM system, the new data source will overlap with the existing one. When Data Cloud ingests the new data source, it will use the identity resolution ruleset to match and merge the overlapping profiles into unified profiles, resulting in a higher consolidation rate. Identity resolution rules have been added to the ruleset to increase the number of matched profiles. This means that the customer has modified their identity resolution ruleset to include more match rules or more match criteria that can identify more profiles as belonging to the same individual. For example, if a customer adds a match rule that matches profiles based on email address and phone number, instead of just email address, the ruleset will be able to match more profiles that have the same email address and phone number, resulting in a higher consolidation rate. Reference: Identity Resolution Calculated Insight: Consolidation Rates for Unified Profiles, Configure Identity Resolution Rulesets
A segment fails to refresh with the error "Segment references too many data lake objects (DLOs)". Which two troubleshooting tips should help remedy this issue? Choose 2 answers The error "Segment references too many data lake objects (DLOs)" occurs when a segment query exceeds the limit of 50 DLOs that can be referenced in a single query. This can happen when the segment has too many filters, nested segments, or exclusion criteria that involve different DLOs. To remedy this issue, the consultant can try the following troubleshooting tips: Split the segment into smaller segments. The consultant can divide the segment into multiple segments that have fewer filters, nested segments, or exclusion criteria. This can reduce the number of DLOs that are referenced in each segment query and avoid the error. The consultant can then use the smaller segments as nested segments in a larger segment, or activate them separately. Use calculated insights in order to reduce the complexity of the segmentation query. The consultant can create calculated insights that are derived from existing data using formulas. Calculated insights can simplify the segmentation query by replacing multiple filters or nested segments with a single attribute. For example, instead of using multiple filters to segment individuals based on their purchase history, the consultant can create a calculated insight that calculates the lifetime value of each individual and use that as a filter. The other options are not troubleshooting tips that can help remedy this issue. Refining segmentation criteria to limit up to five custom data model objects (DMOs) is not a valid option, as the limit of 50 DLOs applies to both standard and custom DMOs. Spacing out the segment schedules to reduce DLO load is not a valid option, as the error is not related to the DLO load, but to the segment query complexity. Troubleshoot Segment Errors, Create a Calculated Insight, Create a Segment in Data Cloud
A consultant is working in a customer's Data Cloud org and is asked to delete the existing identity resolution ruleset. Which two impacts should the consultant communicate as a result of this action? Choose 2 answers Deleting an identity resolution ruleset has two major impacts that the consultant should communicate to the customer. First, it will permanently remove all unified customer data that was created by the ruleset, meaning that the unified profiles and their attributes will no longer be available in Data Cloud. Second, it will eliminate dependencies on data model objects that were used by the ruleset, meaning that the data model objects can be modified or deleted without affecting the ruleset. These impacts can have significant consequences for the customer's data quality, segmentation, activation, and analytics, so the consultant should advise the customer to carefully consider the implications of deleting a ruleset before proceeding. The other options are incorrect because they are not impacts of deleting a ruleset. Option A is incorrect because deleting a ruleset will not remove all individual data, but only the unified customer data. The individual data from the source systems will still be available in Data Cloud. Option D is incorrect because deleting a ruleset will not remove all source profile data, but only the unified customer data. The source profile data from the data streams will still be available in Data Cloud. Delete an Identity Resolution Ruleset
Data Cloud receives a nightly file of all ecommerce transactions from the previous day. Several segments and activations depend upon calculated insights from the updated data in order to maintain accuracy in the customer's scheduled campaign messages. What should the consultant do to ensure the ecommerce data is ready for use for each of the scheduled activations? The best option that the consultant should do to ensure the ecommerce data is ready for use for each of the scheduled activations is A. Use Flow to trigger a change data event on the ecommerce data to refresh calculated insights and segments before the activations are scheduled to run. This option allows the consultant to use the Flow feature of Data Cloud, which enables automation and orchestration of data processing tasks based on events or schedules. Flow can be used to trigger a change data event on the ecommerce data, which is a type of event that indicates that the data has been updated or changed. This event can then trigger the refresh of the calculated insights and segments that depend on the ecommerce data, ensuring that they reflect the latest data. The refresh of the calculated insights and segments can be completed before the activations are scheduled to run, ensuring that the customer's scheduled campaign messages are accurate and relevant. The other options are not as good as option A. Option B is incorrect because setting a refresh schedule for the calculated insights to occur every hour may not be sufficient or efficient. The refresh schedule may not align with the activation schedule, resulting in outdated or inconsistent data. The refresh schedule may also consume more resources and time than necessary, as the ecommerce data may not change every hour. Option C is incorrect because ensuring the activations are set to Incremental Activation and automatically publish every hour may not solve the problem. Incremental Activation is a feature that allows only the new or changed records in a segment to be activated, reducing the activation time and size. However, this feature does not ensure that the segment data is updated or refreshed based on the ecommerce data. The activation schedule may also not match the ecommerce data update schedule, resulting in inaccurate or irrelevant campaign messages. Option D is incorrect because ensuring the segments are set to Rapid Publish and set to refresh every hour may not be optimal or effective. Rapid Publish is a feature that allows segments to be published faster by skipping some validation steps, such as checking for duplicate records or invalid values. However, this feature may compromise the quality or accuracy of the segment data, and may not be suitable for all use cases. The refresh schedule may also have the same issues as option B, as it may not sync with the ecommerce data update schedule or the activation schedule, resulting in outdated or inconsistent data. Salesforce Data Cloud Consultant Exam Guide, Flow, Change Data Events, Calculated Insights, Segments, Activation
Northern Trail Outfitters (NTO) is configuring an identity resolution ruleset based on Fuzzy Name and Normalized Email. What should NTO do to ensure the best email address is activated? NTO is using Fuzzy Name and Normalized Email as match rules to link together data from different sources into a unified individual profile. However, there might be cases where the same email address is available from more than one source, and NTO needs to decide which one to use for activation. For example, if Rachel has the same email address in Service Cloud and Marketing Cloud, but prefers to receive communications from NTO via Marketing Cloud, NTO needs to ensure that the email address from Marketing Cloud is activated. To do this, NTO can use the source priority order in activations, which allows them to rank the data sources in order of preference for activation. By placing Marketing Cloud higher than Service Cloud in the source priority order, NTO can make sure that the email address from Marketing Cloud is delivered to the activation target, such as an email campaign or a journey. This way, NTO can respect Rachel's preference and deliver a better customer experience. Reference: Configure Activations, Use Source Priority Order in Activations
A customer wants to create segments of users based on their Customer Lifetime Value. However, the source data that will be brought into Data Cloud does not include that key performance indicator (KPI). Which sequence of steps should the consultant follow to achieve this requirement? To create segments of users based on their Customer Lifetime Value (CLV), the sequence of steps that the consultant should follow is Ingest Data > Map Data to Data Model > Create Calculated Insight > Use in Segmentation. This is because the first step is to ingest the source data into Data Cloud using data streams. The second step is to map the source data to the data model, which defines the structure and attributes of the data. The third step is to create a calculated insight, which is a derived attribute that is computed based on the source or unified data. In this case, the calculated insight would be the CLV, which can be calculated using a formula or a query based on the sales order data. The fourth step is to use the calculated insight in segmentation, which is the process of creating groups of individuals or entities based on their attributes and behaviors. By using the CLV calculated insight, the consultant can segment the users by their predicted revenue from the lifespan of their relationship with the brand. The other options are incorrect because they do not follow the correct sequence of steps to achieve the requirement. Option B is incorrect because it is not possible to create a calculated insight before ingesting and mapping the data, as the calculated insight depends on the data model objects. Option C is incorrect because it is not possible to create a calculated insight before mapping the data, as the calculated insight depends on the data model objects. Option D is incorrect because it is not recommended to create a calculated insight before mapping the data, as the calculated insight may not reflect the correct data model structure and attributes. Reference: Data Streams Overview, Data Model Objects Overview, Calculated Insights Overview, Calculating Customer Lifetime Value (CLV) With Salesforce, Segmentation Overview
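Calculated insights are defined inside Data Cloud, but the aggregation behind a simple historical CLV metric can be sketched in pandas (the column names and threshold are hypothetical):

```python
import pandas as pd

orders = pd.DataFrame([
    {"unified_individual_id": "U-1", "order_total": 120.0},
    {"unified_individual_id": "U-1", "order_total": 80.0},
    {"unified_individual_id": "U-2", "order_total": 950.0},
])

# A simple historical CLV: total revenue per unified individual.
clv = orders.groupby("unified_individual_id")["order_total"].sum().rename("customer_lifetime_value")

# Segment users whose CLV exceeds a threshold, mirroring the segmentation step.
high_value = clv[clv > 500]
print(high_value)  # only U-2 qualifies
```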
During discovery, which feature should a consultant highlight for a customer who has multiple data sources and needs to match and reconcile data about individuals into a single unified profile? Identity resolution is the feature that allows Data Cloud to match and reconcile data about individuals from multiple data sources into a single unified profile. Identity resolution uses rulesets to define how source profiles are matched and consolidated based on common attributes, such as name, email, phone, or party identifier. Identity resolution enables Data Cloud to create a 360-degree view of each customer across different data sources and systems. The other options are not the best features to highlight for this customer need because: A. Data cleansing is the process of detecting and correcting errors or inconsistencies in data, such as duplicates, missing values, or invalid formats. Data cleansing can improve the quality and accuracy of data, but it does not match or reconcile data across different data sources. B. Harmonization is the process of standardizing and transforming data from different sources into a common format and structure. Harmonization can enable data integration and interoperability, but it does not match or reconcile data across different data sources. C. Data consolidation is the process of combining data from different sources into a single data set or system. Data consolidation can reduce data redundancy and complexity, but it does not match or reconcile data across different data sources. 1: Data and Identity in Data Cloud | Salesforce Trailhead, 2: Data Cloud Identity Resolution | Salesforce AI Research, 3: Data Cleansing - Salesforce, 4: Harmonization - Salesforce, 5: Data Consolidation - Salesforce
A new user of Data Cloud only needs to be able to review individual rows of ingested data and validate that it has been modeled successfully to its linked data model object. The user will also need to make changes if required. What is the minimum permission set needed to accommodate this use case? The Data Cloud User permission set is the minimum permission set needed to accommodate this use case. The Data Cloud User permission set grants access to the Data Explorer feature, which allows the user to review individual rows of ingested data and validate that it has been modeled successfully to its linked data model object. The user can also make changes to the data model object fields, such as adding or removing fields, changing field types, or creating formula fields. The Data Cloud User permission set does not grant access to other Data Cloud features or tasks, such as creating data streams, creating segments, creating activations, or managing users. The other permission sets are either too restrictive or too permissive for this use case. The Data Cloud for Marketing Specialist permission set only grants access to the segmentation and activation features, but not to the Data Explorer feature. The Data Cloud Admin permission set grants access to all Data Cloud features and tasks, including the Data Explorer feature, but it is more than what the user needs. The Data Cloud for Marketing Data Aware Specialist permission set grants access to the Data Explorer feature, but also to the segmentation and activation features, which are not required for this use case. Data Cloud Standard Permission Sets, Data Explorer, Set Up Data Cloud Unit
Which data model subject area should be used for any Organization, Individual, or Member in the Customer 360 data model? The data model subject area that should be used for any Organization, Individual, or Member in the Customer 360 data model is the Party subject area. The Party subject area defines the entities that are involved in any business transaction or relationship, such as customers, prospects, partners, suppliers, etc. The Party subject area contains the following data model objects (DMOs): Organization: A DMO that represents a legal entity or a business unit, such as a company, a department, a branch, etc. Individual: A DMO that represents a person, such as a customer, a contact, a user, etc. Member: A DMO that represents the relationship between an individual and an organization, such as an employee, a customer, a partner, etc. The other options are not data model subject areas that should be used for any Organization, Individual, or Member in the Customer 360 data model. The Engagement subject area defines the actions that people take, such as clicks, views, purchases, etc. The Membership subject area defines the associations that people have with groups, such as loyalty programs, clubs, communities, etc. The Global Account subject area defines the hierarchical relationships between organizations, such as parent-child, subsidiary, etc. Data Model Subject Areas, Party Subject Area, Customer 360 Data Model
Which method should a consultant use when performing aggregations in windows of 15 minutes on data collected via the Interaction SDK or Mobile SDK? Streaming insight is a method that allows you to perform aggregations in windows of 15 minutes on data collected via the Interaction SDK or Mobile SDK. Streaming insight is a feature that enables you to create real-time metrics and insights based on streaming data from various sources, such as web, mobile, or IoT devices. Streaming insight allows you to define aggregation rules, such as count, sum, average, min, max, or percentile, and apply them to streaming data in time windows of 15 minutes. For example, you can use streaming insight to calculate the number of visitors, the average session duration, or the conversion rate for your website or app in 15-minute intervals. Streaming insight also allows you to visualize and explore the aggregated data in dashboards, charts, or tables. Streaming Insight, Create Streaming Insights
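The 15-minute windowing idea can be sketched outside the product as follows (plain Python with hypothetical event fields; streaming insights handle this natively in Data Cloud):

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def window_start(ts):
    """Floor a timestamp to the start of its 15-minute window."""
    return ts - timedelta(minutes=ts.minute % 15, seconds=ts.second,
                          microseconds=ts.microsecond)

events = [
    {"ts": datetime(2024, 1, 15, 8, 3, tzinfo=timezone.utc)},
    {"ts": datetime(2024, 1, 15, 8, 11, tzinfo=timezone.utc)},
    {"ts": datetime(2024, 1, 15, 8, 21, tzinfo=timezone.utc)},
]

counts = defaultdict(int)
for event in events:
    counts[window_start(event["ts"])] += 1  # count events per 15-minute window

for start, n in sorted(counts.items()):
    print(start.isoformat(), "->", n)  # the 08:00 window has 2 events, 08:15 has 1
```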
Northern Trail Outfitters is using the Marketing Cloud Starter Data Bundles to bring Marketing Cloud data into Data Cloud. What are two of the available datasets in Marketing Cloud Starter Data Bundles? Choose 2 answers The Marketing Cloud Starter Data Bundles are predefined data bundles that allow you to easily ingest data from Marketing Cloud into Data Cloud. The available datasets in Marketing Cloud Starter Data Bundles are Email, MobileConnect, and MobilePush. These datasets contain engagement events and metrics from different Marketing Cloud channels, such as email, SMS, and push notifications. By using these datasets, you can enrich your Data Cloud data model with Marketing Cloud data and create segments and activations based on your marketing campaigns and journeys. The other options are incorrect because they are not available datasets in Marketing Cloud Starter Data Bundles. Option A is incorrect because Personalization is not a dataset, but a feature of Marketing Cloud that allows you to tailor your content and messages to your audience. Option C is incorrect because Loyalty Management is not a dataset, but a product of Marketing Cloud that allows you to create and manage loyalty programs for your customers. Marketing Cloud Starter Data Bundles in Data Cloud, Connect Your Data Sources, Personalization in Marketing Cloud, Loyalty Management in Marketing Cloud
A customer has a custom Customer Email__c object related to the standard Contact object in Salesforce CRM. This custom object stores the email address of a Contact that they want to use for activation. To which data entity should it be mapped? The Contact Point Email object is the data entity that represents an email address associated with an individual in Data Cloud. It is part of the Customer 360 Data Model, which is a standardized data model that defines common entities and relationships for customer data. The Contact Point Email object can be mapped to any custom or standard object that stores email addresses in Salesforce CRM, such as the custom Customer Email__c object. The other options are not the correct data entities to map to because: A. The Contact object is the data entity that represents a person who is associated with an account that is a customer, partner, or competitor in Salesforce CRM. It is not the data entity that represents an email address in Data Cloud. C. The custom Customer Email__c object is not a data entity in Data Cloud, but a custom object in Salesforce CRM. It can be mapped to a data entity in Data Cloud, such as the Contact Point Email object, but it is not a data entity itself. D. The Individual object is the data entity that represents a unique person in Data Cloud. It is the core entity for managing consent and privacy preferences, and it can be related to one or more contact points, such as email addresses, phone numbers, or social media handles. It is not the data entity that represents an email address in Data Cloud. Customer 360 Data Model: Individual and Contact Points - Salesforce, Contact Point Email | Object Reference for the Salesforce Platform | Salesforce Developers, Contact | Object Reference for the Salesforce Platform | Salesforce Developers, Individual | Object Reference for the Salesforce Platform | Salesforce Developers
Cumulus Financial uses Data Cloud to segment banking customers and activate them for direct mail via a Cloud File Storage activation. The company also wants to analyze individuals who have been in the segment within the last 2 years. Which Data Cloud component allows for this? The segment membership data model object is a Data Cloud component that allows for analyzing individuals who have been in a segment within a certain time period. The segment membership data model object is a table that stores the information about which individuals belong to which segments and when they were added or removed from the segments. This object can be used to create calculated insights, such as segment size, segment duration, segment overlap, or segment retention, that can help measure the effectiveness of segmentation and activation strategies. The segment membership data model object can also be used to create nested segments or segment exclusions based on the segment membership criteria, such as segment name, segment type, or segment date range. The other options are not correct because they are not Data Cloud components that allow for analyzing individuals who have been in a segment within the last 2 years. Nested segments and segment exclusions are features that allow for creating more complex segments based on existing segments, but they do not provide the historical data about segment membership. Calculated insights are custom metrics or measures that are derived from data model objects or data lake objects, but they do not store the segment membership information by themselves. Segment Membership Data Model Object, Create a Calculated Insight, Create a Nested Segment
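Conceptually, the two-year lookback over segment membership is a filter like this pandas sketch (the field and segment names are hypothetical):

```python
import pandas as pd

memberships = pd.DataFrame([
    {"individual_id": "U-1", "segment_name": "Banking Customers", "joined_on": "2023-06-01"},
    {"individual_id": "U-2", "segment_name": "Banking Customers", "joined_on": "2020-01-15"},
])
memberships["joined_on"] = pd.to_datetime(memberships["joined_on"])

cutoff = pd.Timestamp.today() - pd.DateOffset(years=2)
recent = memberships[
    (memberships["segment_name"] == "Banking Customers")
    & (memberships["joined_on"] >= cutoff)
]
print(recent)  # only members who entered the segment within the last 2 years
```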
Every day, Northern Trail Outfitters uploads a summary of the last 24 hours of store transactions to a new file in an Amazon S3 bucket, and files older than seven days are automatically deleted. Each file contains a timestamp in a standardized naming convention. Which two options should a consultant configure when ingesting this data stream? Choose 2 answers When ingesting data from an Amazon S3 bucket, the consultant should configure the following options: The refresh mode should be set to "Upsert", which means that new and updated records will be added or updated in Data Cloud, while existing records will be preserved. This ensures that the data is always up to date and consistent with the source. The filename should contain a wildcard to accommodate the timestamp, which means that the file name pattern should include a variable part that matches the timestamp format. For example, if the file name is store_transactions_2023-12-18.csv, the wildcard could be store_transactions_*.csv. This ensures that the ingestion process can identify and process the correct file every day. The other options are not necessary or relevant for this scenario: Deletion of old files is a feature of the Amazon S3 bucket, not the Data Cloud ingestion process. Data Cloud does not delete any files from the source, nor does it require the source files to be deleted after ingestion. Full Refresh is a refresh mode that deletes all existing records in Data Cloud and replaces them with the records from the source file. This is not suitable for this scenario, as it would result in data loss and inconsistency, especially if the source file only contains the summary of the last 24 hours of transactions. Ingest Data from Amazon S3, Refresh Modes
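The wildcard option maps directly onto filename pattern matching; for example, using Python's fnmatch with the file name pattern from the explanation:

```python
from fnmatch import fnmatch

pattern = "store_transactions_*.csv"  # wildcard in place of the timestamp

files = [
    "store_transactions_2023-12-18.csv",
    "store_transactions_2023-12-19.csv",
    "returns_2023-12-19.csv",
]

matches = [name for name in files if fnmatch(name, pattern)]
print(matches)  # both store_transactions files match; the returns file does not
```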
A customer has a requirement to be able to view the last time each segment was published within their Data Cloud org. Which two features should the consultant recommend to best address this requirement? Choose 2 answers A customer who wants to view the last time each segment was published within their Data Cloud org can use the dashboard and report features to achieve this requirement. A dashboard is a visual representation of data that can show key metrics, trends, and comparisons. A report is a tabular or matrix view of data that can show details, summaries, and calculations. Both dashboard and report features allow the user to create, customize, and share data views based on their needs and preferences. To view the last time each segment was published, the user can create a dashboard or a report that shows the segment name, the publish date, and the publish status fields from the segment object. The user can also filter, sort, group, or chart the data by these fields to get more insights and analysis. The user can also schedule, refresh, or export the dashboard or report data as needed. Dashboards, Reports
Which information is provided in a .csv file when activating to Amazon S3? When activating to Amazon S3, the information that is provided in a .csv file is the activated data payload. The activated data payload is the data that is sent from Data Cloud to the activation target, which in this case is an Amazon S3 bucket. The activated data payload contains the attributes and values of the individuals or entities that are included in the segment that is being activated. The activated data payload can be used for various purposes, such as marketing, sales, service, or analytics. The other options are incorrect because they are not provided in a .csv file when activating to Amazon S3. Option A is incorrect because an audit log is not provided in a .csv file, but it can be viewed in the Data Cloud UI under the Activation History tab. Option C is incorrect because the metadata regarding the segment definition is not provided in a .csv file, but it can be viewed in the Data Cloud UI under the Segmentation tab. Option D is incorrect because the manifest of origin sources within Data Cloud is not provided in a .csv file, but it can be viewed in the Data Cloud UI under the Data Sources tab. Data Activation Overview, Create and Activate Segments in Data Cloud, Data Activation Use Cases, View Activation History, Segmentation Overview, Data Sources Overview
Which operator should a consultant use to create a segment for a birthday campaign that is evaluated daily? To create a segment for a birthday campaign that is evaluated daily, the consultant should use the Is Anniversary Of operator. This operator compares a date field with the current date and returns true if the month and day are the same, regardless of the year. For example, if the date field is 1990-01-01 and the current date is 2023-01-01, the operator returns true. This way, the consultant can create a segment that includes all the customers who have their birthday on the same day as the current date, and the segment will be updated daily with the new birthdays. The other options are not the best operators to use for this purpose because: A. The Is Today operator compares a date field with the current date and returns true if the date is the same, including the year. For example, if the date field is 1990-01-01 and the current date is 2023-01-01, the operator returns false. This operator is not suitable for a birthday campaign, as it will only include the customers who were born on the same day and year as the current date, which is very unlikely. B. The Is Birthday operator is not a valid operator in Data Cloud. There is no such operator available in the segment canvas or the calculated insight editor. C. The Is Between operator compares a date field with a range of dates and returns true if the date is within the range, including the endpoints. For example, if the date field is 1990-01-01 and the range is 2022-12-25 to 2023-01-05, the operator returns true. This operator is not suitable for a birthday campaign, as it will only include the customers who have their birthday within a fixed range of dates, and the segment will not be updated daily with the new birthdays.
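The month-and-day comparison behind Is Anniversary Of can be sketched in Python (illustrative only; leap-day birthdays would need special handling):

```python
from datetime import date

def is_anniversary_of(birth_date, today):
    """True when month and day match, regardless of year."""
    return (birth_date.month, birth_date.day) == (today.month, today.day)

# The dates from the explanation's example.
print(is_anniversary_of(date(1990, 1, 1), date(2023, 1, 1)))  # True
print(is_anniversary_of(date(1990, 1, 1), date(2023, 1, 2)))  # False
```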
Luxury Retailers created a segment targeting high value customers that it activates through Marketing Cloud for email communication. The company notices that the activated count is smaller than the segment count. What is a reason for this? The reason for the activated count being smaller than the segment count is A. Data Cloud enforces the presence of Contact Point for Marketing Cloud activations. If the individual does not have a related Contact Point, it will not be activated. A Contact Point is a data model object that represents a channel or method of communication with an individual, such as email, phone, or social media. For Marketing Cloud activations, Data Cloud requires that the individual has a related Contact Point of type Email, which contains a valid email address. If the individual does not have such a Contact Point, or if the Contact Point is missing or invalid, the individual will not be activated and will not receive the email communication. Therefore, the activated count may be lower than the segment count, depending on how many individuals in the segment have a valid email Contact Point. Salesforce Data Cloud Consultant Exam Guide, Contact Point, Marketing Cloud Activation
A Data Cloud consultant recently added a new data source and mapped some of the data to a new custom data model object (DMO) that they want to use for creating segments. However, they cannot view the newly created DMO when trying to create a new segment. What is the cause of this issue? The cause of this issue is that the new custom data model object (DMO) is not of category Profile. A category is a property of a DMO that defines its purpose and functionality in Data Cloud. There are three categories of DMOs: Profile, Event, and Other. Profile DMOs are used to store attributes of individuals or entities, such as name, email, address, etc. Event DMOs are used to store actions or interactions of individuals or entities, such as purchases, clicks, visits, etc. Other DMOs are used to store any other type of data that does not fit into the Profile or Event categories, such as products, locations, categories, etc. Only Profile DMOs can be used for creating segments in Data Cloud, as segments are based on the attributes of individuals or entities. Therefore, if the new custom DMO is not of category Profile, it will not appear in the segmentation canvas. The other options are not correct because they are not the cause of this issue. Data ingestion is not a prerequisite for creating segments, as segments can be created based on the data model schema without actual data. The new DMO does not need to have a relationship to the individual DMO, as segments can be created based on any Profile DMO, regardless of its relationship to other DMOs. Segmentation is not only supported for the Individual and Unified Individual DMOs, as segments can be created based on any Profile DMO, including custom ones. Create a Custom Data Model Object from an Existing Data Model Object, Create a Segment in Data Cloud, Data Model Object Category