
Connect 51黑料不打烊 Analytics to Experience Platform

Read this guide to learn how to use the 51黑料不打烊 Analytics source to ingest your Analytics report suite data into 51黑料不打烊 Experience Platform.

Get started

This tutorial requires a working understanding of the following components of Experience Platform:

  • Experience Data Model (XDM) System: The standardized framework by which Experience Platform organizes customer experience data.
  • Real-Time Customer Profile: Provides a unified, real-time consumer profile based on aggregated data from multiple sources.
  • Sandboxes: Experience Platform provides virtual sandboxes which partition a single Experience Platform instance into separate virtual environments to help develop and evolve digital experience applications.

Key terminology

It is important to understand the following key terms used throughout this document:

  • Standard attribute: Standard attributes are any attributes that are pre-defined by 51黑料不打烊. They carry the same meaning for all customers and are available in the Analytics source data and Analytics schema field groups.
  • Custom attribute: Custom attributes are any attributes in the custom variable hierarchy in Analytics. Custom attributes are used within an 51黑料不打烊 Analytics implementation to capture specific information into a report suite, and their use can differ from report suite to report suite. Custom attributes include eVars, props, and lists. See the Analytics documentation on conversion variables for more information on eVars.
  • Any attribute in Custom field groups: Attributes that originate from field groups created by customers are all user-defined and are considered to be neither standard nor custom attributes.
NOTE
When you create an Analytics source dataflow in a production sandbox, two dataflows are created:
  • A dataflow that does a 13-month backfill of historical report suite data into the data lake. This dataflow ends when the backfill is complete.
  • A dataflow that sends live data to the data lake and to Real-Time Customer Profile. This dataflow runs continuously.

In the Experience Platform UI, select Sources from the left navigation to access the Sources workspace. In the 51黑料不打烊 applications category, select the 51黑料不打烊 Analytics card and then select Add data.

The sources catalog with the 51黑料不打烊 Analytics source card selected.

Select data

IMPORTANT
  • The report suites listed on the screen may come from various regions. You are responsible for understanding the limitations and obligations of your data and how you use that data in 51黑料不打烊 Experience Platform across regions. Ensure that this is permitted by your company.
  • Data from multiple report suites can be enabled for Real-Time Customer Profile only if there are no data conflicts, such as two custom properties (eVars, lists, and props) that have different meanings.

A report suite is a container of data that forms the basis of Analytics reporting. An organization can have many report suites, each containing different datasets.

You can ingest report suites from any region (United States, United Kingdom, or Singapore) as long as they are mapped to the same organization as the Experience Platform sandbox instance in which the source connection is being created. A report suite can be ingested using only a single active dataflow. If a report suite is greyed out and cannot be selected, then it has already been ingested, either in the sandbox that you are using or in a different sandbox.

You can create multiple inbound connections to bring multiple report suites into the same sandbox. If the report suites define variables (such as eVars or events) differently, map those variables to specific fields in custom field groups and use Data Prep to avoid data conflicts. A report suite can only be added to a single sandbox.

Select Report suite and then use the Analytics source add data interface to navigate through the list and identify the Analytics report suite that you want to ingest to Experience Platform. Select Next to proceed.

An analytics report suite is selected for ingestion and the "Next" button is highlighted

Mapping

IMPORTANT
Data Prep transformations may add latency to the overall dataflow. The additional latency added varies based on the complexity of the transformation logic.

Before you can map your Analytics data to a target XDM schema, you must first determine whether you are using a default schema or a custom schema.

Default schema

A default schema creates a new schema on your behalf. This newly created schema contains the 51黑料不打烊 Analytics ExperienceEvent Template field group. To use a default schema, select Default schema.

The schema selection step of the Analytics source workflow, with "Default schema" selected.

Custom schema

With a custom schema, you can choose any available schema for your Analytics data, as long as that schema has the 51黑料不打烊 Analytics ExperienceEvent Template field group. To use a custom schema, select Custom schema.

The schema selection step of the Analytics source workflow, with "Custom schema" selected.

Use the Mapping interface to map source fields to their appropriate target schema fields. You can map custom variables to new schema field groups and apply calculations as supported by Data Prep. Select a target schema to start the mapping process.

TIP
Only schemas that have the 51黑料不打烊 Analytics ExperienceEvent Template field group are displayed in the schema selection menu. Other schemas are omitted. If there are no appropriate schemas available for your report suite data, then you must create a new schema. For detailed steps on creating schemas, see the guide on creating and editing schemas in the UI.

The target schema selection panel of the Mapping interface.

You can refer to the Map standard fields panel for metrics on your Standard mappings applied, Standard mappings with descriptor name conflicts, and Custom mappings.

  • Standard mappings applied: Displays the total number of mapped attributes. Standard mappings refer to mappings between all attributes in the source Analytics data and the corresponding attributes in the Analytics field group. These are pre-mapped and cannot be edited.
  • Standard mappings with descriptor name conflicts: Displays the number of mapped attributes that contain name conflicts. These conflicts appear when you re-use a schema that already has a populated set of field descriptors from a different report suite. You can proceed with your Analytics dataflow even with name conflicts.
  • Custom mappings: Displays the number of mapped custom attributes, including eVars, props, and lists. Custom mappings refer to mappings between custom attributes in the source Analytics data and attributes in custom field groups included in the selected schema.

Standard mappings

Experience Platform automatically checks your mappings for any name conflicts. If there are no conflicts, select Next to proceed.

The standard mappings header displaying no name conflicts

TIP
If there are name conflicts between your source report suite and your selected schema, you can still continue with your Analytics dataflow, acknowledging that the field descriptors will not be changed. Alternatively, you can opt to create a new schema with a blank set of descriptors.

Custom mappings

You can use Data Prep functions to add new custom mappings or calculated fields for custom attributes. To add custom mappings, select Custom.

The custom mapping tab in the Analytics source workflow.

  • Filter fields: Use the Filter fields text input to filter for specific fields in your mappings.
  • Add new mapping: To add a new source field and target field mapping, select Add new mapping.
  • Add calculated field: If needed, you can select Add calculated field to create a new calculated field for your mappings.
  • Import mapping: You can reduce the manual configuration time of your data ingestion process and limit mistakes by using the import mapping functionality of Data Prep. Select Import mapping to import mappings from an existing flow or from an exported file. For more information, read the guide on importing and exporting mappings.
  • Download template: You can also download a CSV copy of your mappings and configure them on your local device. Select Download template to download a CSV copy of your mappings. You must ensure that you are using only the fields that are provided in your source file and target schema.

Refer to the Data Prep documentation for more information on mapping functions.
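For a concrete sense of what these mappings do, the following Python sketch mimics a pass-through mapping and a trim/lowercase calculated field, similar to the example walked through in the video transcript later in this guide. It is an illustration only, not product code, and the field names (eVar5, eVar2, confirmationNumber, transactionId) are hypothetical.

```python
# Illustrative sketch only: mimics a pass-through mapping and a
# calculated field (trim + lowercase), similar to what Data Prep applies
# during ingestion. All field names below are hypothetical examples.

def apply_mappings(hit: dict) -> dict:
    """Map raw Analytics custom variables to semantic target fields."""
    mapped = {}

    # Pass-through mapping: eVar5 -> reservation.confirmationNumber
    if "eVar5" in hit:
        mapped["reservation.confirmationNumber"] = hit["eVar5"]

    # Calculated field: lower(trim(eVar2)) -> transaction.transactionId
    if "eVar2" in hit:
        mapped["transaction.transactionId"] = hit["eVar2"].strip().lower()

    return mapped


sample_hit = {"eVar5": "CONF-84312", "eVar2": "  TXN-00912  "}
print(apply_mappings(sample_hit))
# {'reservation.confirmationNumber': 'CONF-84312', 'transaction.transactionId': 'txn-00912'}
```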

Filtering for Real-Time Customer Profile

Once you have completed the mappings for your Analytics report suite data, you can apply filtering rules and conditions to selectively include or exclude data from ingestion into Real-Time Customer Profile. Filtering is supported only for Analytics data, and data is filtered only before it enters Profile. All data is still ingested into the data lake.


Additional information on Data Prep and filtering Analytics data for Real-Time Customer Profile

  • You can use the filtering functionality for data that is going to Profile, but not for data going to data lake.
  • You can use filtering for live data, but you cannot filter backfill data.
    • The Analytics source does not backfill data into Profile.
  • If you utilize Data Prep configurations during the initial setup of an Analytics flow, those changes are applied to the automatic 13-month backfill as well.
    • However, this is not the case for filtering because filtering is reserved only for live data.
  • Data Prep is applied to both streaming and batch ingestion paths. If you modify an existing Data Prep configuration, those changes are then applied to new incoming data across both streaming and batch ingestion pathways.
    • However, any Data Prep configurations do not apply to data that has already been ingested into Experience Platform, regardless of whether it is streaming or batch data.
  • Standard attributes from Analytics are always mapped automatically. Therefore, you cannot apply transformations to standard attributes.
    • However, you can filter out standard attributes as long as they are not required in Identity Service or Profile.
  • You cannot use column-level filtering to filter required fields and identity fields.
  • While you can filter out secondary identities, specifically AAID and AACustomID, you cannot filter out ECID.
  • When a transformation error occurs, the corresponding column results in NULL.

Row-level filtering

IMPORTANT
Use row-level filtering to apply conditions and dictate which data to include for Profile ingestion. Use column-level filtering to select the columns of data that you want to exclude for Profile ingestion.

You can filter data for Profile ingestion at the row level and the column level. Use row-level filtering to define criteria such as string contains, equals, begins with, or ends with. You can also use row-level filtering to join conditions using AND as well as OR, and to negate conditions using NOT.

To filter your Analytics data at the row-level, select Row filter and use the left rail to navigate through the schema hierarchy and identify the schema attribute that you want to select.

The row filter interface for Analytics data.

Once you have identified the attribute that you want to configure, select and drag the attribute from the left rail to the filtering panel.

The "Manufacturer" attribute selected for filtering.

To configure different conditions, select equals and then select a condition from the dropdown window that appears.

The list of configurable conditions includes:

  • equals
  • does not equal
  • starts with
  • ends with
  • does not end with
  • contains
  • does not contain
  • exists
  • does not exist

The conditions dropdown with a list of condition operators.

Next, enter the values that you want to include based on the attribute that you selected. In the example below, Apple and Google are selected for ingestion as part of the Manufacturer attribute.

The filtering panel with the selected attributes and values included.

To further specify your filtering conditions, add another attribute from the schema and then add values based on that attribute. In the example below, the Model attribute is added and models such as the iPhone 16 and Google Pixel 9 are filtered for ingestion.

Additional attributes and values included in the container.
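As a conceptual sketch only (not product code or an API payload), the following Python snippet models how an include container built from the example conditions above would evaluate incoming rows before they are sent to Profile. Rows that fail the condition are excluded from Profile ingestion but still land in the data lake.

```python
# Conceptual sketch of an "include" container equivalent to:
#   Manufacturer equals (Apple OR Google) AND Model equals (iPhone 16 OR Google Pixel 9)

INCLUDE_MANUFACTURERS = {"Apple", "Google"}
INCLUDE_MODELS = {"iPhone 16", "Google Pixel 9"}

def include_for_profile(row: dict) -> bool:
    """Return True if the row satisfies the include container."""
    return (
        row.get("Manufacturer") in INCLUDE_MANUFACTURERS
        and row.get("Model") in INCLUDE_MODELS
    )

rows = [
    {"Manufacturer": "Apple", "Model": "iPhone 16"},
    {"Manufacturer": "Samsung", "Model": "Galaxy S24"},
]
print([include_for_profile(r) for r in rows])  # [True, False]
```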

To add a new container, select the ellipses (...) on the top right of the filtering interface and then select Add container.

The "Add container" dropdown menu selected.

Once a new container is added, select Include and then select Exclude from the dropdown menu that appears. Add the attributes and values that you want to exclude, and when finished, select Next.

The attributes and values filtered for exclusion.

Column-level filtering

Select Column filter from the header to apply column-level filtering.

The page updates into an interactive schema tree, displaying your schema attributes at the column-level. From here, you can select the columns of data that you would like to exclude from Profile ingestion. Alternatively, you can expand a column and select specific attributes for exclusion.

By default, all Analytics data goes to Profile, and this process allows branches of XDM data to be excluded from Profile ingestion.

The column filter interface with the schema tree.
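Conceptually, column-level filtering removes the selected branches of each XDM record before the record reaches Profile, while the copy sent to the data lake is untouched. A minimal Python sketch of that idea, using a hypothetical excluded path:

```python
# Minimal sketch: drop an excluded XDM branch (for example "application.closes")
# from a record before Profile ingestion. The path is a hypothetical example;
# the data lake copy of the record is unaffected.

def exclude_column(record: dict, path: str) -> dict:
    """Remove the attribute at the dotted path, if present."""
    keys = path.split(".")
    node = record
    for key in keys[:-1]:
        node = node.get(key, {})
        if not isinstance(node, dict):
            return record
    node.pop(keys[-1], None)
    return record

event = {"application": {"closes": {"value": 1}, "launches": {"value": 1}}}
print(exclude_column(event, "application.closes"))
# {'application': {'launches': {'value': 1}}}
```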

Filter secondary identities

Use a column filter to exclude secondary identities from Profile ingestion. To filter secondary identities, select Column filter and then select _identities.

The filter only applies when an identity is marked as secondary. If you select identities for exclusion, but an event arrives with one of those identities marked as primary, then that identity is not filtered out.

The secondary identities in the schema tree for column filtering.
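In other words, an identity namespace that you select for exclusion is dropped only when the incoming event marks that identity as secondary. A minimal sketch of that rule, using hypothetical event data:

```python
# Minimal sketch of the secondary-identity rule: a namespace selected for
# exclusion (for example AAID) is dropped only when the event marks that
# identity as secondary. The identity structure here is a hypothetical example.

EXCLUDED_NAMESPACES = {"AAID"}

def filter_secondary_identities(identities: list) -> list:
    """Keep an identity unless its namespace is excluded AND it is secondary."""
    return [
        ident for ident in identities
        if not (ident["namespace"] in EXCLUDED_NAMESPACES and not ident["primary"])
    ]

event_identities = [
    {"namespace": "ECID", "id": "123", "primary": True},
    {"namespace": "AAID", "id": "456", "primary": False},  # dropped as secondary
]
print(filter_secondary_identities(event_identities))
# [{'namespace': 'ECID', 'id': '123', 'primary': True}]
```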

Provide dataflow details

The Dataflow detail step appears, where you must provide a name and an optional description for the dataflow. Select Next when finished.

The dataflow detail interface of the ingestion workflow.

Review

The Review step appears, allowing you to review your new Analytics dataflow before it is created. Details of the connection are grouped by categories, including:

  • Connection: Displays the source platform of the connection.
  • Data type: Displays the selected Report Suite and its corresponding Report Suite ID.

The review interface of the ingestion workflow.

Monitor your dataflow

Once your dataflow has been created, you can use the Dataflows interface to monitor the status of your Analytics dataflow.

Use the Dataset activity interface for information on the progress of data that is being sent from Analytics to Experience Platform. The interface displays metrics such as the total number of records in the previous month, the total number of ingested records in the last seven days, and the size of data in the previous month.

The source instantiates two dataset flows. One flow represents backfill data and the other is for live data. Backfill data is not configured for ingestion into Real-Time Customer Profile but is sent to the data lake for analytical and data science use cases.

For more information on backfill, live data, and their respective latencies, read the Analytics source overview.

The dataset activity page for a given target dataset for 51黑料不打烊 Analytics data.

NOTE
The dataset activity page does not display information about batches since the Analytics source connector is entirely managed by 51黑料不打烊. You can monitor that data is flowing by looking at the metrics around ingested records.

Delete your dataflow

To delete your Analytics dataflow, select Dataflows from the top header of the sources workspace. Use the dataflows page to locate the Analytics dataflow that you want to delete and then select the ellipses (...) beside it. Next, use the dropdown menu and select Delete.

  • Deleting the live Analytics dataflow will also delete its underlying dataset.
  • Deleting the backfill Analytics dataflow does not delete the underlying dataset, but will stop the backfill process for its corresponding report suite. If you delete the backfill dataflow, ingested data may still be viewed through the dataset.

Next steps and additional resources

Once the connection is created, a dataflow is automatically created to contain the incoming data and populate a dataset with your selected schema. Furthermore, backfilling occurs and ingests up to 13 months of historical data. When the initial ingestion completes, Analytics data can be used by downstream Experience Platform services such as Real-Time Customer Profile and Segmentation Service.

The following video is intended to support your understanding of ingesting data using the 51黑料不打烊 Analytics source connector:

WARNING
The Experience Platform UI shown in the following video is out-of-date. Please refer to the documentation above for the latest UI screenshots and functionality.


Transcript
In this video, I'll explain how users can ingest their data from 51黑料不打烊 Analytics into 51黑料不打烊 Experience Platform and enable the data for Platform's real-time customer profile. These are the areas I'll cover in this video. Analytics Source Connector, Workflow Options for using a Standard or Custom Schema, Use Cases for a Custom Schema, Data Prep Functions in the Custom Schema Workflow, Filter Rules to selectively include or exclude data from ingestion to the profile store, Configure Analytics Data Objects for the real-time customer profile, and Monitor the Analytics Data Flow. The Analytics Source Connector isn't the only way to get your analytics data into Platform, but it's the fastest method requiring the least level of effort. If you have use cases that require real-time segmentation based on analytics attributes, review the tutorials and documentation about streaming ingestion and Web SDK. To set up the 51黑料不打烊 Analytics Source Connector, just log in to Experience Platform and navigate to Sources to view the Sources catalog. Under 51黑料不打烊 Applications, look for 51黑料不打烊 Analytics. Select the 51黑料不打烊 Analytics Source Connector to add data. In the Analytics Source Add Data step, you can choose to source data from any of your organization's report suites, each of which is considered a unique dataset in Platform. Notice my organization includes report suites from multiple regions. This is possible as long as they're mapped to the same organization as the Experience Platform sandbox instance in which the connection is being created. Grayed out report suites indicate a data flow has already been created. Select the report suite you want to ingest data from in the list, and then at the top, select Next. There are two target schema choices, Default and Custom. There's also an option to enable data to profile service. Before we go further in the workflow, I'll spend some time reviewing the differences between the Default and Custom schema. First, what is a schema? It's a set of rules that validate the structure and format of data, and it's used by Platform to ensure the consistency and quality of data coming in. Selecting the Analytics Default schema in the Source Connector workflow will automatically map your report suite data to the Default schema without any additional effort on your end. You don't need to create a new schema for this option. Everything comes over as is. Selecting Custom Schema in the Sources workflow does require you set up a new schema because you'll be mapping standard analytics attributes to new attributes. Let me show you what I mean. There are two field groups that are part of this schema. The first is the 51黑料不打烊 Analytics Experience Event template, which contains the standard analytics attributes. The second field group is Reservation Details. This is a user-defined field group added to this schema. Here we see fields that have descriptive names like transaction, cancellation, and confirmation number. Later, I'll map analytics variables to some of these new attributes in the Source Connector workflow. Before we go back to the workflow, here are the main use cases for using a custom analytics schema. First, you may want to see better semantic or descriptive attribute names in things like segmentation service and Customer Journey Analytics. But let's say you don't want to change these in the Analytics report suite settings. You can do that using data prep features.
Second, if you want a more standardized way of referencing the same data that might be captured differently across report suites, using custom attributes is the way to go, as you see illustrated in this table. Third, you may have data in analytics that is stored in pipe delimited format, or maybe you want to join two values together in a single attribute. The data prep features accomplish this. And last, let's say you want more flexibility for defining identities for your analytics data beyond the Experience Cloud ID. You can do that by setting up a new attribute in your custom field group and marking it as an identity field. Back in the UI, I'll choose Enable Data to Profile Service. Later in the flow, I'll show you how to apply filtering rules and conditions to selectively include or exclude data from ingestion to the profile service. Next, I'll choose Custom Schema. From the schema list, I'll select Travel Reservations. The Map Standard Field section gives you details about the default mapping that occurs from your report suite to the Analytics Experience Event field group in the schema. If there are descriptor conflicts when mapping your report suite to a pre-existing schema, they'll appear here. Now I'll create some mapper functions, also known as data prep. First, I'll set up a pass-through mapping. I'll select the Add New Mapping button under Custom. In the Source field, which is coming from my report suite descriptors, I'm going to select EVAR5. My report suite doesn't have a label or descriptor for this variable, but it contains confirmation number values. I want to map this to the semantic field created in the Vehicle Reservations custom field group that's part of the schema. Now in the Target field, I'll select the Confirmation Number field. Next, I'll set up a Calculated field. I'll select the Add Calculated Field button. This opens an editor that contains the functions, fields, and operators on the left, as well as a text editor and a preview section in the middle. I'll type in TRIM in the search box, and then I'll click on the plus sign to add it to the editor. I'll do the same for lower. Next, I'll select Field at the top and type in EVAR2, and I'll click the plus sign again. So let's say EVAR2 contains a transaction ID. What I want to do is trim any spaces and ensure the value is lowercase. Last, I'll make sure to rearrange the formula so that the syntax is correct. Now there's a preview button to see a sample, or notice the green checkmark to indicate proper syntax. I'll click on the Save button in the upper right corner. This adds the calculated field to the left. Now I'll configure the Target field on the right. I'll select Transaction ID in the Transaction object, and then I'll click on Select at the bottom. Now that all the mappings are addressed, I can click on Next in the upper right corner. This is the filtering step in the flow. This step only applies if Enable for Profile was selected earlier. Here's a quick review of the Analytics to Platform architecture using the Data Source Connector. All analytics data automatically goes to the data lake, the repository that's used for things like query service, Customer Journey Analytics, and other applications that use platform data. The Profile Store, on the other hand, is a separate repository used to create customer profiles and used by things like segmentation service. Customers who use Experience Platform are discerning about the volume and nature of data they want to send to the Profile Store.
The ability to filter analytics data is permitted only prior to this data entering Profile. This means you need to set it up during the initial ingestion workflow if you enable analytics data for Profile at that time. There are two types of filter rules available. Row-level filtering allows you to apply conditions that dictate which data to include for profile ingestion, whereas column-level filtering allows you to specify which data to exclude. I'll show you row-level filtering first. Under Row Filter, I'll enter Country to filter for that column. Let's say my analytics report suite contains reservations from multiple countries, but I only want to send data to the Profile Store related to reservations made in the United States. I'll drag and drop the Country attribute to the filtering canvas. There are many operators available, such as Starts With, Exists, etc., but I'm going to keep this set to equals. I'll enter United States in the text box, and then I'll press Enter. Now, I'll click on the column filter. Let's say I want to exclude some mobile application events. I'll expand the hierarchy for application, then I'll select Application Closes and all of the Boolean type attributes. On the Data Flow Detail page, I'll provide a name. Then I'm going to click on Next in the upper right corner. Alright, this takes us to the review step. We want to make sure everything looks good before clicking on the Finish button in the upper right corner. Last, I'll show you some other validations and configurations you can do once your analytics data has started to ingest. I'll demonstrate using a different data flow. I'll click on Data Flows at the top here in the Sources section. I'll filter 51黑料不打烊 Analytics by clicking the filter icon, then I'll select the last data flow in the list. This opens the Dataset Activity page. Under the Dataset Activity, there's a quick summary of ingested batches and failed batches during a specific time window. As I scroll, I see ingested batch IDs. Each batch represents data ingested. There's also some metadata about the records successfully ingested or failed. I'll select Preview Dataset in the upper right corner to show you what the last processed batch looks like. If the dataset is enabled for real-time customer profile, the toggle for this setting would be green in the Properties panel to the right. Notice the link to the schema below the profile toggle. If this data should be sent to the Profile Store, confirm the setting for the schema as well. I'll open the schema in a new window and check it out. In this example, the dataset isn't configured for Profile, but the schema is. This means data won't be sent to the Profile Store because the dataset isn't configured for Profile. This concludes the demonstration for using the Source Connector to ingest data from Analytics into Experience Platform. You should now understand all the configuration options available, as well as how to enable this data for the real-time customer profile. Good luck!