Best Practices and Insights for Modeling XDM Schemas
In this session, learn essential best practices and shortcuts for creating scalable, high-quality 51黑料不打烊 Experience Data Model (XDM) schemas that align with 51黑料不打烊 Experience Platform standards. Gain insights into effectively mapping customer experience and use-case data to XDM for seamless integration across 51黑料不打烊 and external tools.
Discussion Points
- How to define and organize XDM components to ensure scalable and flexible data models
- Common challenges in XDM design, evolution, and maintenance
Hi, thanks for joining. We will be getting started in the next couple of minutes.
Today鈥檚 session will be focused on best practices and insights for modeling XDM schemas and will be led by Nicolina. We are going to wait just a couple of minutes for attendees to filter in and then we will get started.
Let鈥檚 wait just a couple of extra minutes.
Okay, so while we wait for attendees to filter in, I want to let you know that we have several sessions coming up this quarter that are open for you to attend as well. So for those who are interested, I will put the links in the session chat now.
Okay, let鈥檚 start. So hello, all. Good morning, good afternoon. Welcome and thank you for joining today鈥檚 session.
My name is Maurizio and I work as a Field Engineer in 51黑料不打烊鈥檚 Field Engineering team, where we focus on helping 51黑料不打烊 customers get as much value as possible from their 51黑料不打烊 solutions.
So now I鈥檓 going to go ahead and kick off our session today. First and foremost, thank you for your time and attendance today. Just to note that this session is being recorded and the link to the recording will be sent out to everyone who registered.
This live webinar is a listen-only format, so feel free to share any questions in the chat and the Q&A pod, and we will answer them there as we can. In addition, we will reserve some time to discuss the questions that surfaced at the end of the session. But before we get started, I鈥檇 like to provide you with an overview of our Ultimate Success accelerators, designed to help you plan for value, enable your teams, and execute against your roadmap efficiently.
These targeted engagements bring 51黑料不打烊鈥檚 business and technical experts directly to your teams, helping you accelerate outcomes in key areas. The first area is technical readiness, which ensures you have the right foundation to succeed by supporting migration, troubleshooting, and environment reviews. Organizational readiness focuses on optimizing processes, governance, and change-management strategies, ensuring that teams are aligned for success. Adoption enablement helps you maximize product usage through best practices, coaching, and workflow optimization, driving consistency across your organization. And finally, we have GenStudio Activation, a brand-new accelerator designed to help teams deploy GenStudio efficiently for performance marketing, including brand iteration, template creation, and campaign setup. These accelerators are delivered as part of your Ultimate Success plan and can be scheduled with your CSM or TAM and aligned with your business goals. Through these tailored engagements, you can drive faster time to value, improve operational efficiency, and fully unlock the potential of your 51黑料不打烊 solutions.
Very well. So I guess now it鈥檚 time to give the floor to my colleague and friend, Nicolina. Please. Thank you, Maurizio, and thank you everyone for joining today鈥檚 session. I hope this will be interesting, whether you are already working with 51黑料不打烊 solutions or you are planning to migrate to 51黑料不打烊 solutions, and in particular to 51黑料不打烊 Experience Platform.

So, a quick overview of the agenda. When we talk about data modeling inside the Experience Platform, we mostly talk about XDM schema composition, because this is the structure of data that the Experience Platform needs. So we will go very quickly over the concepts that make up an XDM schema. Then we will look at how the CDP works thanks to the concept of identities. Then we will walk through the flow of constructing an entity relationship diagram (the kind of diagram born from the needs of a relational database) and how it can be denormalized for AEP requirements. And then, if we have time, I would like to go through something related to data activation and some things that can be useful if you are working with other 51黑料不打烊 tools. This is really important, because 51黑料不打烊 Experience Platform lives on the same data that can be shared with other 51黑料不打烊 solutions. And then we can talk about consent and data governance.

So let鈥檚 start with some basic concepts that you have to know when you start to create your data model. A data model in 51黑料不打烊 Experience Platform is like the holistic view of your customer, your individual. So the first thing you have to do is understand what business use cases you want to answer with 51黑料不打烊 Experience Platform, and what kind of data those business requirements need. Most likely your organization will have multiple data sources, multiple systems that hold the data. So the first thing to do, after you have defined the use cases based on the possibilities the platform offers you, is to find the data and the systems that hold it.

Just a quick word on this second point, the identification of the data sources. 51黑料不打烊 Experience Platform provides the ability to create really simple connections with some data sources, like a Microsoft CRM system, for example. When you design your data model, it may be too early to discuss technical implementation, but keep in mind that the use cases you have designed can affect the choice of sources, and in particular of the connector you need to use. For example, Dynamics is a batch connector. This means it could create some issues if your use cases are based on real-time segmentation, real-time activation, and so on. This is reasoning you will most likely do after designing your data model, but keep in mind that the source can affect your use cases.

After that, you can start to create your entity relationship diagram from scratch. You have your data, your entities, and the concepts that are useful for building your unified user profile.
And then you have to stitch this data together, and the most important thing is to understand the relationships: how to relate this data together. Then we can translate your entity relationship diagram into XDM schemas and finally bring the data into 51黑料不打烊 Experience Platform. So, we talk about XDM. What is an Experience Data Model? It is a common standard that was born to facilitate the transmission of data between different platforms. It鈥檚 a structure, and it helps enterprises share data coming from different sources with different formats, data that is usually siloed and fragmented.
An XDM schema defines the structure and the meaning of your data. It鈥檚 like a blueprint that tells 51黑料不打烊 Experience Platform what kind of data is coming into the platform. Are these profile attributes? Are these web events? And how does the data relate to each other?
A schema on its own is not enough to have data in 51黑料不打烊 Experience Platform, because this data needs to be stored in physical tables, and that鈥檚 where the concept of a dataset comes in. But for now, the XDM schema is the format of the data. So let鈥檚 understand the pieces that make up an XDM schema.
Let鈥檚 start from the very base of the schema: the class. In 51黑料不打烊 Experience Platform, the class is the foundation of any data schema. The class defines the type of data that the schema holds and determines how it behaves inside the system, including how the data is accessed in the profile store. There are two primary classes inside 51黑料不打烊 Experience Platform: the XDM Individual Profile class, for user-level traits and attributes like name, gender, address, and so on; and the XDM ExperienceEvent class, for time-based actions like clicks, transactions, purchases, and so on: events and behaviors that are tied to a precise moment in time.
When schemas and datasets share the same class, all their attributes are merged together in a union view.
Then we have support classes, custom classes that we call lookups, that contain all the data that is not strictly related to the user profile: for example, a product catalog, bank account details, and so on. This is important to understand because the CDP is user-centric, so you should bring into the profile attributes only data that is strictly related to your user. Lookup classes are really similar to classifications in 51黑料不打烊 Analytics: they are like metadata. We don鈥檛 really want this data attached to the profile attributes.
Keep in mind that this data won鈥檛 be shown in the profile schema, but it can be used for segmentation purposes.
Now that we have defined the behavior of the data, whether profile or event (the most important difference being that profile attributes overwrite themselves, while experience events are appended, because there can be many of them), we have to define the structure. So we put inside the schema the little pieces of information that we need to enrich the user profile: we add field groups. 51黑料不打烊 provides predefined field groups, for example demographic data, subscription data, and so on, that can be used across industries and suit a lot of requirements, but you can also create your own field groups. As the name says, they are groups of fields that contain different information, are reusable, and are tied to the class. Then we have a more granular building block, the data type, which is not strictly tied to the class.
A data type is a standard type of data, for example one that defines a person. We know that a person is described by some standard attributes: the first name, the family name, and so on. So the Person data type provides all the attributes you should include when talking about a person. As with field groups, there are predefined data types, but you can create your own custom ones. Keep in mind that you can even customize the predefined field groups and data types, but this can create issues, since customized predefined structures won鈥檛 be updated as new product releases come out.
And the most granular piece of the schema is the single attribute: the field. Fields are like JavaScript variables that can be string, number, boolean, and so on. We have scalar fields, of course, but also dimensional types like objects and arrays, which are more flexible.
So this is how a schema looks in the Experience Platform: the class, the very base of the composition; the various field groups that define the structure of the schema; and a single field with all its details: the type, the field group it belongs to, and the path used to retrieve it in the services or for other extraction requirements.
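Since the slides are not reproduced in this transcript, here is a minimal sketch of how those pieces compose. It is purely illustrative, not an actual Schema Registry payload; the tenant prefix and all field names are hypothetical.

```python
# Illustrative sketch of XDM schema composition (hypothetical names).
profile_schema = {
    "title": "Loyalty Member Profile",
    "class": "XDM Individual Profile",   # class: profile behavior in the system
    "fieldGroups": [
        "Demographic Details",           # predefined field group
        "_tenant.loyaltyDetails",        # custom field group
    ],
}

# A field group is a reusable set of fields tied to a class; a data type
# (like "Person") is a reusable bundle of fields not tied to any class.
loyalty_details_fields = {
    "person": {                          # data type reused inside the group
        "firstName": "string",
        "lastName": "string",
    },
    "loyaltyTier": "string",             # scalar field
    "joinedDate": "date",                # scalar field
    "pointsHistory": ["number"],         # array field (dimensional type)
}
```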
And this is how a schema looks after it has been enabled for profile, a dataset has been built upon the schema, and the dataset has been filled and enabled for profile too. This is another important consideration to make: not all the data you bring into the platform really needs to go into the profile store. Even here, the use case should drive your decision. If you need profile attributes in order to personalize a template or a message within AJO, you need those attributes inside the profile store of the Real-Time Customer Profile. Otherwise, if you need data only for reporting purposes, for example, you can leave the dataset not enabled for profile. This will help with licensing limits, and it will help with performance, because the more data you put inside the profile store, the more performance will likely suffer.
Just quickly, you have some limitations when it comes to schema construction and evolution; these are the main ones. Let鈥檚 go quickly through something that happens often. For example, if I want to introduce a new required field, I can do it. A required field means that if data comes into the Experience Platform without that field, in a dataset whose schema requires it, the ingestion will break. Introducing a new required field after ingestion has started is actually possible, but keep in mind that it can invalidate past data. You cannot remove previously defined fields: you can add whatever field you want, but you cannot remove fields that have been ingested and populated. The same goes for renaming or redefining existing fields; you cannot make these destructive changes once the fields have been populated. Also, you cannot delete a schema or disable it from participating in profile. Once you have enabled a schema for profile, you cannot go back. That鈥檚 why, before enabling the schema and the dataset for profile, a good practice is to test with a dataset that is not enabled for profile, to verify that the stitching works and so on. Of course, all the validation you need is done at ingestion time.
I鈥檓 talking about attribute validation, pattern validation, and so on. You can also test your joins, the stitching of the identities in your data, with a simple SQL query in the query section that 51黑料不打烊 Experience Platform provides.
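As a rough illustration of that kind of check, something like the following could be run in Query Service before enabling anything for profile (the dataset names and the crmId path are invented for the example):

```python
# Hypothetical identity-stitching check for AEP Query Service.
# Dataset and column names are made up; adapt them to your own schemas.
join_test_sql = """
SELECT p._tenant.crmId        AS crm_id,
       COUNT(e._id)           AS event_count
FROM   crm_profiles_dataset   p
LEFT JOIN web_events_dataset  e
       ON e._tenant.crmId = p._tenant.crmId
GROUP BY p._tenant.crmId
"""
# Rows with event_count = 0 are CRM profiles whose identity never appears
# in the event data, a hint that the stitching key needs another look.
print(join_test_sql)
```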
After the construction of the schema, a really important part is to define the identities that will help put together all the data you have collected. This is actually a hard part, because a business most likely has different data sources, possibly with different identities.
That鈥檚 why you have to choose an identity that is really unique across your data sources.
For example, choosing an email is maybe not the best choice, because an email is not really unique most of the time. When it comes to email, a good practice could be, first of all, to hash it, because it can be labeled as sensitive data. You can also think about concatenating the email with another ID, like a system ID or a CRM ID, which we know has to be unique. Of course, the primary identity of your schema always has to be present. You can add secondary identities if they are needed to stitch the data together with objects coming from another source, but don鈥檛 overdo it with secondary identities: the issue that can arise is usually profile collapse. You can define identities inside AEP using a single attribute, or using a special field that is a map: the identity map. The identity map is recommended when we need to do real-time segmentation or real-time activation of the data for personalization purposes, because when we need real-time personalization, we need to send the information about the profile to the profile edge projection as soon as possible.
The identity map can collect multiple identities, including unauthenticated identities that the Experience Platform already recognizes as belonging to a user, a user who is part of a defined audience. In this case, sending the authenticated identity as the primary identity in the identity map can make the whole personalization process much faster. This is really useful in the case of web personalization built on AEP audiences. So this is how identities look in the schema: a common identity descriptor field, such as a CRM ID, and the identity map field, which we can find in the experience event schema.
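As a sketch of that structure: "ECID" is the standard Experience Cloud ID namespace, while "CRMID" stands in for a hypothetical custom namespace for the authenticated CRM identity.

```python
# Illustrative identityMap fragment as it might appear on an experience event.
identity_map = {
    "ECID": [
        # Device/browser identity collected before authentication.
        {"id": "38622837408571524297585972565484", "primary": False}
    ],
    "CRMID": [
        # Marking the authenticated ID as primary speeds up the lookup
        # against the edge profile projection for personalization.
        {"id": "crm-98765", "primary": True}
    ],
}
```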
Apart from identities, inside the Experience Platform we can also handle relationships, even multiple relationships. But there is an important point to make: the CDP is not a CRM. It鈥檚 a common mistake to think that the CDP can handle multiple complex relationships, but it really cannot, because it鈥檚 customer-centric: everything is related to a single customer. When we need multiple relationships (for example, one customer can buy several products, can have several bank accounts, and so on), we should go with lookup tables. Lookup tables don鈥檛 need an identity, but they need a relationship key that can collect all the information linked to a specific product, or whatever is linked to the user. Of course, as I said before, lookups are not really flexible when it comes to activating the data. Only recently, in AJO, did we get the ability to use lookup attributes inside message personalization. You can use lookup attributes for segmentation, but it will be batch segmentation only.
So it won鈥檛 enable real-time activations.
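A minimal sketch of the lookup idea, with hypothetical entity and field names:

```python
# A lookup record has no identity of its own, only a relationship key.
product_lookup_record = {
    "productId": "SKU-1042",          # relationship key, not an identity
    "name": "Trail Running Shoe",
    "category": "Footwear",
    "listPrice": 119.90,
}

# The event side stores only the key; catalog metadata stays in the lookup
# dataset instead of being duplicated into every profile.
purchase_event_fragment = {
    "eventType": "commerce.purchases",
    "productId": "SKU-1042",
    "quantity": 1,
}
```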
Here I try to recap, with a quick overview, how to manage multiple relationships inside AEP. The first thing we could use is arrays inside the profile.
Arrays are quite flexible, but they have a problem when it comes to importing new data: when you have to update one item in the array, you must import the whole array into the platform, otherwise you can lose the past data (see the sketch below). And when you have different sources, keeping everything aligned becomes difficult. Alternatively, you can create a schema containing a separate object for each of the multiple items you have, but this only works when you have a few objects to link to the profile; it鈥檚 really not scalable. Then there is another option: having non-person profiles inside AEP, so that you have two kinds of profiles, profiles for your users and profiles for your assets, for example bank accounts, hotels, and so on. This is useful if you need personalization activities based on asset attributes, because with a lookup we cannot really personalize on the lookup鈥檚 attributes. This could be an approach, but it has a licensing cost, because the profile count will increase, so it can be difficult to handle. Finally, we can use Data Distiller to extract and work with the array data, flatten it, and ingest it into a separate dataset. But even in this case, it鈥檚 an approach that adds some complexity and does not enable real-time personalization.
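To illustrate the array problem mentioned above: profile attributes overwrite rather than merge, so a partial array update drops the items you did not resend. A simplified sketch, not AEP鈥檚 actual merge logic:

```python
# Simplified illustration of profile-attribute overwrite semantics.
existing_profile = {"crmId": "crm-98765",
                    "accounts": ["acct-1", "acct-2", "acct-3"]}

# A source system sends an update containing only the changed item...
incoming_update = {"crmId": "crm-98765",
                   "accounts": ["acct-2-updated"]}

# ...but the attribute overwrites as a whole: the other accounts are gone.
merged = {**existing_profile, **incoming_update}
print(merged["accounts"])   # ['acct-2-updated']: acct-1 and acct-3 are lost
```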
Just to wrap up what you should keep in mind about identities: be aware of shared identities, like an e-mail, because they can cause profile collapse. Different profiles can be stitched together because they share a common ID, in this case the e-mail.
Think about the case of a user who connects in a browser and enters their e-mail through the login process. Then, afterwards, their kids come back and browse the same website without logging in.
Of course, they will view different products, but all this data will be put together inside a single profile. In this case, it becomes difficult to do proper segmentation on the user鈥檚 behavior and interests, for example. So consider hashing, so you can keep the uniqueness of your identity. Also, only string fields can be marked as identities, and be aware that identities are case-sensitive, which is really important.
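A minimal sketch of that practice, assuming the common lower-cased, SHA-256-hashed email convention:

```python
import hashlib

def email_identity(raw_email: str) -> str:
    """Normalize, then hash an email before using it as an identity.

    Identities are case-sensitive, so lowercase and trim first; otherwise
    "User@Example.com" and "user@example.com" would become two different
    identity values and split (or wrongly stitch) profiles.
    """
    normalized = raw_email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Both inputs normalize to the same identity value.
print(email_identity("User@Example.com"))
print(email_identity("  user@example.com "))
```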
These are just some issues that bad stitching can cause: wrong segmentation, as seen in the e-mail example; blocked journeys, because we cannot really see the flow of a user if we stitch together the wrong data; and, of course, the inability to create a relevant experience on the personalization side. With these concepts in mind, it鈥檚 time to create the entity relationship diagram: the visual blueprint of the data that shows the key objects or entities of the business, their attributes, and how they relate to each other, so all the relationships. It鈥檚 like the logical architecture of the data. Now, this kind of data model is made mostly for the purposes of a classical relational database.
For your data model to be usable in 51黑料不打烊 Experience Platform, we have to go through a process called denormalization. First of all, we have to understand to which class each entity belongs. If we have profile traits like first name and last name, the entity will most likely map to a profile class. If we are collecting behavioral data like pages viewed or orders made, everything the user can do at a certain moment in time, we are talking about the experience event class. And if we need a support entity to enrich the customer profile or the customer events with additional metadata, we are most likely talking about a lookup entity.
This may seem easy to do, but in practice I have seen a lot of errors here. Let鈥檚 think about a really common use case: subscriptions.
Let鈥檚 imagine you鈥檙e managing subscription data. One option is to store each customer鈥檚 subscriptions as an array of subscription IDs in their profile. This is pretty simple and can work if you only need the current state of the subscriptions, like segmenting users who are currently subscribed.
But this approach can create issues. Any time a subscription changes, the whole array needs to be updated, and as I said before, if multiple systems are feeding it, synchronization becomes even trickier. The second approach is to model each subscription, and each subscription change, as an event: a timestamped action tied to a customer. This way, we can answer questions like: who changed their subscription in the last month? Which customers upgraded recently? Events give us historical insight and simplify updates, especially when the data comes from multiple sources. But they can make it harder to answer a basic question like: what is the customer鈥檚 current subscription? Because we need to process the whole event history to determine the most recent update. So the decision is up to your marketing objective. If you鈥檙e focusing on real-time targeting based on the current state, you should go for a profile attribute. If the goal is to understand behavior over time and create time-based segments, you should model your entity as an event. The same applies, for example, to a loyalty tier: you can decide whether the tier upgrade is an event or an attribute collected in the profile. A sketch of the two shapes follows.
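All field names, and the eventType value, are hypothetical here:

```python
# Option 1: current state as a profile attribute.
# Easy to segment on "currently subscribed", but every change requires
# re-sending the whole array, from every source that feeds it.
subscription_profile_fragment = {
    "crmId": "crm-98765",
    "subscriptions": ["sub-001", "sub-017"],
}

# Option 2: every change as a timestamped experience event.
# _id, timestamp, and eventType are the mandatory event fields; history is
# preserved, but "what is the current subscription?" now means finding the
# latest event.
subscription_change_event = {
    "_id": "evt-5531",
    "timestamp": "2025-03-02T10:15:00Z",
    "eventType": "subscription.upgraded",
    "crmId": "crm-98765",
    "subscriptionId": "sub-017",
    "plan": "premium",
}
```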
There is another option, actually. If you want to store attributes coming from an event in the profile, you have a lot of options inside AEP, so you are quite flexible here. Maybe the best approach is to model these kinds of changing attributes, things that can have upgrades, as events, and then collect the most recent upgrade, the most recent subscription, and so on inside a profile attribute, using capabilities that 51黑料不打烊 Experience Platform gives you, like computed attributes or elaboration of the data via Data Distiller.

Another wrong attribution of entities can emerge when it comes to reporting. For example, for one customer, for segmentation purposes, the fact that the user entered Campaign A, Campaign B, and so on was stored in the profile. This became a real problem when all the profile data was brought into Customer Journey Analytics (CJA).
Because actually, CJA works with events.
So with something that has a timestamp and can be collected over time. Having attributes like "participated in Campaign X" or "participated in Campaign A" in the profile was really limiting, because we couldn鈥檛 really reconstruct the customer journey, the customer鈥檚 interactions up to the moment of a conversion event. So in this case, the attribute was wrongly modeled inside the profile entity, and issues then arose on the reporting side in CJA.
After that, you can label your attributes, in particular the primary identities and the relationships with the lookup tables. Another important thing to keep in mind is that every event class has some mandatory attributes: the ID, the timestamp, and also the event type, which helps to interpret the event.
And the last part is the transformation into a schema with all the associations, keeping in mind that an identity association is one-to-one, while multiple associations only work with lookup tables. So, to wrap everything up and leave some space for questions, here is a list of good practices you should keep in mind when starting to work with the Experience Platform. First of all, understand the entities: understand what kind of attributes you have and how they can be translated inside the CDP. Map your attributes: when you have data like an address, you are most likely talking about profile attributes; when it鈥檚 a record of something that happened at a specific point in time, you are talking about an event.
Think about the relationships and about the uniqueness of your IDs. And most importantly, follow a good standard, especially because the Experience Platform provides you with a lot of standardized field groups and data types. A good practice is to follow these standardized models and minimize the use of anything custom.
And maybe the most important thing, which I didn鈥檛 mention in the previous slide, is to keep it simple.
Not everything you have in other data sources is really needed in 51黑料不打烊 Experience Platform. Take a future-proof approach: think about what you want to achieve with the Experience Platform and what you really need. If you need to add something later on, you can always do it. But don鈥檛 overload; don鈥檛 create complex schemas containing every kind of KPI and attribute that you won鈥檛 really use, because 51黑料不打烊 Experience Platform is made for marketing purposes. Bring only what鈥檚 needed for marketing and real-time activation use cases.
Okay. So actually we can stop the presentation and we can have some time for questions.
Okay. Okay. Thanks a lot, Nico. So we got a couple of questions in the chat, and most likely an observation. Let鈥檚 start with the first one. The question is: what is the maximum number of attributes that a schema can have in AEP, if there is any maximum? Okay. Actually, for the maximum attributes per schema, I don鈥檛 know if we have a guardrail on it. We have some guardrails on the payload that can be ingested, but there should be a limitation in the guardrails. Maybe we can follow up on that, because I don鈥檛 remember the number. I can also share the guardrails links from the tech documentation in the chat.

Okay. Next one. If I have Real-Time CDP, what other technical systems or components are required if I鈥檓 planning to activate real-time web personalization use cases? If you have the license for 51黑料不打烊 Target, of course, the first thing to have in mind is to use 51黑料不打烊 Target, because it鈥檚 our system and it can use the same data as 51黑料不打烊 Experience Platform. But not only 51黑料不打烊 Target: if you have Real-Time CDP, you can use all the destinations that 51黑料不打烊 provides. So if you have a third-party provider for personalization, you can use everything that鈥檚 included in the destination catalog. Otherwise, you can share your segments with any kind of system that accepts POST requests, because you can share your segments via a simple HTTP request; so you can share segments with every kind of solution that can do personalization. For real-time personalization, 51黑料不打烊 Target talks with the same edge infrastructure that sends data to 51黑料不打烊 Experience Platform, so the integration is really straightforward.
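As a rough sketch of what sharing a segment via HTTP can look like on the receiving side (the exact payload depends on how the destination is configured, so treat every field and name here as illustrative):

```python
# Hypothetical JSON body a custom endpoint might receive when a profile
# qualifies for an audience. Shape and names are illustrative only.
audience_notification = {
    "identityMap": {
        "CRMID": [{"id": "crm-98765"}]
    },
    "segmentMembership": {
        "high-value-customers": {"status": "realized"}   # entered the audience
    },
}
```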
Okay, next one. This is a little bit longer.
So we got this scenario. Data ingested in the data set has reached the platform. We are using only a few attributes from the data set.
A few months down the line, we map more attributes from the data set to the profile.
The question is: will the updated profile have the data from the dataset that was ingested earlier in the new attributes? Yes, if you add new data, you will find the new data inside the profile, but not the data that was already ingested.
That鈥檚 my reading of the question. I imagine it鈥檚 a profile dataset based on a profile schema.
Okay. Next one. Difference between identity and identity map. Okay.
The question is the difference between an identity and the identity map. Okay. Actually, the identity map is maybe more flexible, because it enables you to send all the identities together in the same group. The most important thing is that you can label, inside the identity map, which is the primary identity. Not the fastest to reach the edge node (all the identities reach the edge at the same time), but the one that is fastest for getting information back from the profile projection, from the edge profile. So for real-time segmentation use cases, the identity map is maybe the best approach. The difference is also structural, I鈥檇 say, because you have a single map object rather than multiple fields.
Okay. Do you have a screen to show for that one? I should go back to the identity slide.
Okay. Next question.
In this use case, we got a license of one terabyte of storage. Does this limit include the datasets and profile, or just profile? For licensing matters, I think the best thing to do is to check the license-monitoring interface.
If I got the question right, it is whether only datasets enabled for profile count against the license. I think so. It depends on the metric we are looking at, because there is a metric, profile richness, that can go beyond the limits of the license, but it鈥檚 related only to profile attributes, and we have other limits for everything that lands in the data lake. So there is not really a universal answer, I guess.

Okay. Next one. How to map or ingest audiences created via Federated Audience Composition inside the XDM schema? Maybe we can come back to this one, because federated audiences are a brand-new topic, so we can follow up on it.

Next one. We got an XDM schema already in place, but we need a more scalable model based on best practices. What is the best starting point for restructuring an existing schema? It鈥檚 difficult, I think, to restructure an existing schema and clean it up. If we have a dataset linked to the schema and we have imported data into the profile store, it鈥檚 really limiting: you cannot really delete fields that have received data at a certain point in time, even if you no longer need them. You can deprecate a field, but it will just disappear from your segments and your profile store; it won鈥檛 really get deleted. In this case, the best thing could be to delete everything and start from scratch. Or you could do a small extraction of the data you already have, put the data you need into a new dataset based on a new, clean schema, and start from there.
Very well. Next one. I鈥檓 sure you鈥檙e going to like this one: what is the best way to fix a collapsed profile? Actually, the only way (maybe "backend" is not the proper word) is to ask 51黑料不打烊 support to fix it for you in the backend. What you can do today is use the identity graph rules to handle an identity that you know could collapse: you can prevent the profile from collapsing by giving the identity a priority. So you can use the identity graph rules to prevent a collapse, but you cannot really fix one.
Okay, so there鈥檚 a question about a future session on identity graph rules. I don鈥檛 know if anything is already planned, but I think we can follow up on that.
How do we model and use, in audiences, experience events which cancel each other out? For instance, I have an event, E1, from my contact center about returning goods, and until I get the other event that informs AEP that event one is closed, I would like to include these profiles in an audience so as to exclude them from any activation: push, email, and so on. So I think it鈥檚 about creating audiences based on events that can change over time, if I got it right.
I think, can you just repeat the question? Let me think.
Okay, just to summarize. Yeah, I have an event that can change, because it says that the status of a process is ongoing, and I want to create an audience that excludes the profiles that have this event in a specific state.
So before it reaches the end state, I want to exclude them.
But we are not talking about the same event.
I think it鈥檚 more a status update of the profile, so maybe related to an event, the closure of the process or something like that. I think that in this case we can go with segmentation, because segmentation has time-based logic to get the last occurrence of this event in the last week, month, or whatever. Or we could go with a computed attribute, to capture the last status of the event marked as event one inside the data model. But in that case, we have to wait for the computation of the computed attribute, which is not really real-time. This could be a problem, but it can help in such a use case. Otherwise, we should look at what kind of time-based logic we can use inside segmentation.
I guess. Okay, thanks a lot. So no more questions so far. Let鈥檚 wait just a minute or two. We got only five minutes left.
For the last part, I wanted to provide an overview of a success accelerator that 51黑料不打烊 can provide, related to the topic of starting data modeling: the use-case mapping to solution capabilities. It can help accelerate the customer鈥檚 understanding of the platform, what use cases the platform can cover and in what context, so that, based on the data you have, you can understand what you can achieve. It鈥檚 really useful, and it can last up to six weeks, so it鈥檚 a really good on-ramp to the platform.
Okay, I see no further questions.
Okay, I think we鈥檙e done. So would you like to launch the poll for today?
Okay, so this is the poll with the three questions.
Okay, so thanks for your time. If you need a success accelerator such as the one described before, just contact your Technical Account Manager or CSM. Thanks for joining, and we may come back to the questions that we didn鈥檛 cover.
Okay, thank you very much, thanks to everyone, and enjoy the rest of your day. Bye.
Key takeaways
Data Modeling in 51黑料不打烊 Experience Platform (AEP)
The XDM schema is the foundation for data modeling in AEP, enabling the integration and sharing of data across different systems. It defines the structure and meaning of data, such as profile attributes and event-based actions.
Identity Management
Proper identity management is crucial to avoid issues like profile collapse. Hashing sensitive data like email and using unique identifiers can help maintain data integrity. Identity maps are recommended for real-time segmentation and personalization.
Schema Design Best Practices
Keep schemas simple and focused on marketing use cases. Avoid overloading schemas with unnecessary attributes. Use standardized field groups and minimize customizations for scalability and future-proofing.
Event vs. Profile Attributes
Decide whether to model data as profile attributes or events based on marketing objectives. Profile attributes are suitable for real-time targeting, while events provide historical insights for time-based segmentation.
Handling Collapsed Profiles and Scalability
Collapsed profiles can only be fixed by 51黑料不打烊 support, but identity graph rules can prevent future collapses. For restructuring existing schemas, extracting necessary data and starting fresh with a clean schema is recommended.