51ºÚÁϲ»´òìÈ Experience Manager Champion Office Hours - Sites Focus
Focus on AEM Sites.
Transcript
We are good to go. So I'm going to pass it over. There we go. I'm going to pass it over to you, Jessica, and I'll let you take it from here. Yeah. Hi, everyone. Welcome to the first-ever AEM office hours presented by the 2022 AEM Champions. So first, I'm just going to go over the agenda really quick. Just a brief touchpoint on what the 51ºÚÁϲ»´òìÈ Champion program is. And then we'll get into our panelist introductions, and then we'll start by answering some pre-submitted questions before going into live questions. And you can ask those through the chat or out loud, if you like. And then we'll wrap it up once those questions are through. So just a brief overview of the Champion program: the Experience Manager Champion program recognizes practitioners that play key roles as thought and industry leaders and product influencers. We do this by sharing our technical expertise, best practices, and strategies with the broader customer and partner community, kind of like what we're doing today, and then we get to collaborate with 51ºÚÁϲ»´òìÈ product leaders to support the future vision of AEM. And then the benefits to this are knowledge sharing and networking with AEM users and developers from around the world. I think Robert said it earlier, but the panelists today are actually from all over. So we have some in the U.S., U.K., Qatar. So all over, which is really exciting. And then collaborating with 51ºÚÁϲ»´òìÈ product leaders to provide feedback for future visions of Experience Manager and get some sneak peeks. I can attest to this. There have been some wonderful conversations with the 51ºÚÁϲ»´òìÈ product leaders. So that's been a really great benefit. And then engaging in exclusive speaking, content creation, and personal branding opportunities to showcase our expertise. So today is just a great example of that.
Our first-ever office hours, where you guys get to ask questions and we get to answer them. And then I'll start by introducing the panelists for today. Greg, if you want to introduce yourself, we could just go down the row. All right, everyone, I'm Greg Demers. I'm a product owner for web content at zero price, managing a team of content authors, basically, and working with the rest of the champions here to advance Experience Manager Sites and all the solutions that Experience Manager has to offer for all of you. Nice to join and nice to see everyone. Looking forward to the session and future sessions. I'm Brett Ryschbach, SVP of the 51ºÚÁϲ»´òìÈ practice at Bounteous, which is a full-service partner of 51ºÚÁϲ»´òìÈ. I've done a lot of the hands-on implementation of the 51ºÚÁϲ»´òìÈ platforms, AEM, as well as the connected 51ºÚÁϲ»´òìÈ Experience Cloud products. I've done a lot of the hands-on work myself, but now I manage the global team that does that work for Bounteous. Meghadesh. Hi, this is Meghadesh. I am from Qatar Airways. I am a senior technical architect handling the 51ºÚÁϲ»´òìÈ technologies within Qatar Airways as a brand. Managing all the B2C websites, right from loyalty to trade portals to corporate travel, the different websites we're managing. And also managing the 51ºÚÁϲ»´òìÈ stack, like the 51ºÚÁϲ»´òìÈ Analytics and 51ºÚÁϲ»´òìÈ Target ecosystems, together. I guess that's my cue. So, Rami El Gamal, you can see the name under the picture. But I am a senior solution architect with a lot of focus on the 51ºÚÁϲ»´òìÈ stack, so anywhere from AEM to Audience Manager, Target, Analytics, AEP, even 51ºÚÁϲ»´òìÈ Commerce. Anything under that umbrella. I run a little consulting agency called Crony Consulting, and I am here to answer your questions. Martin. Yes, I'm Martin. I'm CTO of XIO in the UK. So we're an IBM company. We were acquired by IBM about seven years ago now.
And so, yeah, I've previously been an AEM developer and am still very focused on some hands-on development, solution and technical architecture, similar to Brett, and also managing our team of developers and architects. Great. Thank you. And then this is just me, the voice you've been hearing before. My name is Jessica. I'm a senior web designer. I work at Insight and I'm part of the 2022 AEM Champions. So I'll be monitoring today's session. So we'll start off with questions that were submitted and then we'll dive into any questions that you guys have that you want to ask. So, live questions. Let me go ahead and ask that first initial question. So this was just, like, a high-level overview of GraphQL with experience fragments. All right. So GraphQL and experience fragments. Since we got this one yesterday, we had a little bit of time to think on it. It's interesting because GraphQL is largely meant for structured data and hard types, right? And an experience fragment really isn't that, right? An experience fragment is a little bit of a mixture of data and presentation. One experience fragment might be a hero and a teaser and a couple of value propositions. Another one might be a video and, you know, an image and a text block. So there's really no structure to experience fragments. However, there are cases. It's interesting because this question came in as more of a straight-up topic. It didn't have a specific question. So you'd want to dig into what the use cases are that you'd need it for. We definitely thought through a few use cases where we would use this in terms of finding experience fragments. So there's definitely a case where you need to use GraphQL to find an experience fragment, right? And one way to do it would be to store some of the metadata of your experience fragments in content fragments.
So you could then query the content fragments, and the content fragment itself can have a reference field, which would then point to the experience fragment. So it'd be a two-step process where you would call GraphQL, fetch the relevant content fragment that references the experience fragment, and then you'd have a direct URL to pull that experience fragment. That would be one way of doing it. Another way, if there are pieces of an experience fragment. So let's just say, hey, you know what? Actually, I don't want the experience, right? The reason why I want to get at an experience fragment is because, let's say, I've got store locations buried in my experience fragments. Let's say I make an experience fragment for every single one of my 20 store locations, and now I want to query my store information. Well, what you could do there is abstract your location data into a content fragment. And then in your experience fragment, instead of actually authoring the data directly in there, have your experience fragment contain a component that references the data and pulls it from that content fragment. So when you want to pull it from GraphQL, you just pull your location data from GraphQL, but your components can still use that same data. So you're not duplicating your data. So that's a couple of ways that we've structured it. I don't know if anyone has any other thoughts on that. Yeah, I was going to say that's very similar to the way we thought about it: the differentiation between the data, which is what GraphQL and content fragments are designed for, the structured data, the searchable stuff, and an experience fragment, which, being experience-based, is going to be much less structured. And you do have to think about the data structure to make it so that you can query it.
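The store-locator pattern described above could be sketched as a GraphQL call from the front end. This is a hedged sketch: the model name `storeLocation`, its fields (including an `xfReference` field for the two-step pattern), and the exact endpoint path are hypothetical examples, not from the transcript.

```typescript
// Sketch of querying store-location content fragments via AEM GraphQL,
// instead of scraping experience fragment HTML. The model name
// "storeLocation", its fields, and the endpoint path are hypothetical.
interface StoreQuery {
  query: string;
}

function buildStoreQuery(city: string): StoreQuery {
  return {
    query: `{
      storeLocationList(filter: { city: { _expressions: [{ value: "${city}" }] } }) {
        items { name address city phone xfReference }
      }
    }`,
  };
}

// The SPA would POST this payload to something like
// /content/graphql/global/endpoint.json and render the items. AEM
// components can reference the same content fragments, so the data is
// not duplicated; xfReference would hold the experience fragment path
// for the two-step "find the XF via its metadata" pattern.
const payload = buildStoreQuery("Doha");
```

In practice you would register this as a persisted query rather than posting ad-hoc query strings from the client.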
You've got to have it in that more structured form. And it does make you think, as you were saying, with your store-locator stuff: it makes sense to look at how you store the data. If you want to be able to access it via GraphQL and through experience fragments, you almost have to flip it over and say, OK, we're going to store it in a structured way where we can query it, and then we're going to use it in different ways. That might be directly through GraphQL for one front end, or it might be manually content-authored within site pages or within experience fragments the other way around. Yeah, I think that. OK, go ahead. OK, just one more point I want to add: when you are taking GraphQL-based content fragment data and injecting it into something like an experience fragment, you also have to be a little bit careful about how you are flushing that experience fragment whenever there's a change in the content, right? If it is only a content fragment engaged with GraphQL, AEM already handles this: whenever you change the content, it will automatically start flushing its references. But when you start putting a content fragment into an experience fragment, then your data is already baked into the HTML. So then you will have to find a way to explicitly flush the cache for those experience fragments one by one. So it is a customization, and a heavy customization, and definitely not the right way to do it when you are using it with websites, right? You take on overhead whenever there is a change in the content fragment, and you have to find an explicit way to refresh this cache. So that's one of my thoughts. Yeah, that makes perfect sense. And I think, to flip the question a little bit, I would challenge the business case here, right? Because the true question is why, right? And I've had full sites.
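The explicit flush described above could be sketched as building a dispatcher invalidation request per affected fragment. The dispatcher host is a hypothetical example, and wiring this into a custom listener that reacts to content fragment changes is an assumption; the point is the per-fragment overhead the transcript warns about.

```typescript
// Sketch: an explicit dispatcher invalidation request for an experience
// fragment page that embeds content fragment data. The host is a
// hypothetical example; in practice something custom would fire one of
// these whenever the underlying content fragment changes.
interface FlushRequest {
  url: string;
  method: "GET";
  headers: Record<string, string>;
}

function buildFlushRequest(dispatcherHost: string, xfPath: string): FlushRequest {
  return {
    url: `${dispatcherHost}/dispatcher/invalidate.cache`,
    method: "GET",
    headers: {
      "CQ-Action": "Activate",       // activation-style invalidation
      "CQ-Handle": `${xfPath}.html`, // the cached XF rendition to drop
      "CQ-Path": xfPath,
    },
  };
}
```

One request per affected fragment is exactly the "one by one" overhead described above, which is why the panel recommends keeping queryable data in content fragments instead.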
So think pre-content fragments, back when experience fragments were still fresh and exciting. I've had clients where, because they want the authorability within an application, right? They still drag and drop and make it all pretty, but they want to inject it into a different application that's under something else, whether it's a single-page application or just a typical Java application with, like, JSPs on the front end, etc. And because they wanted to do this mix and match, what we ended up doing is creating experience fragments. They would make a call to it, at this point a typical HTTPS call, pull in the HTML, and render it within the scope of their application. If that's what we're trying to get to, then it's not GraphQL, right? You're doing one for one. The moment you start talking about HTML in experience fragments as the data, for lack of a better expression, it's not as pure. It's a little dirty now, because you have all this HTML structure in it. So I think understanding the why will make a lot more difference in the how, because truly, just like everybody said, if it's about structured data, I think you need to drop that level in between. Another piece of advice: the more layers you add in between, and this is not just AEM, that's architecture in general, if you have the data being processed multiple times to get to your destination, guaranteed, by the time you get to your destination, it's a lot more work to deal with. So one of the great points that came up is: if you want the content fragment to be rendered as HTML, you can use the node or resource directly from the content fragment within AEM in order to render that experience fragment component. But then if you're using it outside with a different application, you can use an API, via GraphQL, straight into the content fragment. My main point here is: don't double hop, right?
So make sure, and I think caching is a very, very real issue too, as soon as you start hopping between applications. So don't go content fragment, experience fragment, then make a call to get the HTML to share it into, like, a single-page application. Just doing that, you have three layers of content, three entry points, three intersections essentially where the data could be corrupted one way or another. So I would challenge the why first. I think that's the theme of my point here. Yeah, and I guess the big advantage, and the reason why you would ever go for GraphQL anyway, is that it has the advantage of being a query language: your end application can actually make a specific query for the data it wants and effectively filter out the data it doesn't need, unlike what you had in a plain JSON feed, like your store locator. Previously, your JSON store locator would be a feed of all of the stores, and then you're going to have to go and do all your filtering client-side, or whatever. Whereas with GraphQL, the beauty is you can say, OK, give me all of the stores that match these criteria, and have it actually return just the data you want. And yeah, if you're adding all those extra hops in the way, you're potentially just removing all of the benefit of it being GraphQL in the end anyway, because you're pre-filtering the data and have no control over it at the front end. Yeah. That makes sense. Yeah. We're always thinking about schemas and types, I guess that's the bottom line. Yeah, structured, borderline relational, I'm going to call it. Yeah. See that look. Sorry, Jessica, back to you. No, you're fine. Thanks. I'm just making sure I had all your talking points. So the second question was: how do you publish context-aware configurations? I see that if the config fields are a collection type, they're stored as children of the node but are not published when publishing through the editor. Yeah, so there are two parts to that question.
Question number one: how do you publish it? The simple fact is, whether you're on-prem, on AMS, or on AEM as a Cloud Service, you can use a replication agent, right? You can even use a custom workflow that would go in. So we're thinking of things that are under /conf. Typically, what I've seen in the past is one of two things. You can either, like I said, do a tree activation on that node and everything underneath it, and then it goes into your publish instance. There is no out-of-the-box mechanism for publishing context-aware configurations. So it's not like a cloud configuration where you can go through the interface and select it, or, you know, a Target configuration, etc. That's one way of doing it. Then you go into the second part, which could open up a bigger topic: we need to define what is content, so publishing, and what is code, something that you would want in your code base because it's consistent between environments. My personal opinion from experience is that because you're moving that same exact configuration between tiers, it should not be content. It's something that should be in your code base and maintained there, so that as you're going through the process, you have consistency. Let's look at editable templates, for example. A lot of the time an editable template is right on that line between black and white. It's truly that gray zone where you want the flexibility of people going in and authoring the templates, but at the same time, you have a minimum of three tiers, right? Your dev, your stage, and your prod. Typically, you want to test a change in the lower tiers, except if it's truly something that you need right away. But then as you're going up, you almost have to follow the same steps, which is a place where errors could happen. And also, you have to regression test every single time.
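The manual route mentioned above, replicating a /conf node, can be scripted against AEM's replication servlet. A minimal sketch, assuming an on-prem or AMS author instance; the /conf path is a hypothetical example, and note this activates a single node, so a full tree activation would repeat it per descendant (or use the Tree Activation tool or a workflow instead).

```typescript
// Sketch: form parameters for AEM's /bin/replicate.json servlet to
// activate a context-aware configuration node. The /conf path below is
// a hypothetical example.
function buildReplicationParams(confPath: string): URLSearchParams {
  const params = new URLSearchParams();
  params.set("cmd", "Activate"); // activation, as opposed to "Deactivate"
  params.set("path", confPath);  // the node to push to publish
  return params;
}

// A script would POST these params to <author-host>/bin/replicate.json
// with appropriate credentials.
const replicationParams = buildReplicationParams("/conf/my-site");
```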
So again, it depends on the frequency of it, and on how impactful the change is. My personal opinion: I would put it in the code base and just push it up that way. But let me know your thoughts as well. Yeah, I definitely agree. On the code-versus-content question, there's definitely a decision to make around which of these things you're ever going to change without doing some kind of delivery or deployment. There are things that you need to be able to change instantaneously: I'm going to go change this option and push it out. But for a lot of those kinds of things that are under /conf, most of the time you're going to want to be so much more controlled over the process of making those changes and delivering them that building them into your code base, and having environment-specific variations if you need them, makes sense. And then, yeah, the editable templates thing as well. It's rare to have a setup with really, really experienced template-editor-type authors where you get the benefit of leaving it uncontrolled, because the danger is that it's really easy to just go into an editable template, change something, save it, publish it, and then destroy all of your pages, because it is instantaneously made live. So there is that question of control and experience: unless you've got really top-tier authors who actually need and want that level of control, and who really understand exactly the dangers of what they could potentially do, you don't want to give them that level of control. And a lot of the time we find that even when they're given it, they don't use it.
What they actually tend to do is make those changes in a lower environment, test them in real detail, and then repeat them in the higher environment. So they end up effectively delivering the change themselves: they're just writing down, OK, this is what we did, and then redoing it in another environment. So you're effectively doing a per-environment release cycle, just manually. And as you said, there's the danger that someone could miss a step, or one little policy change doesn't quite get done right in one environment, and then you spend ages investigating: why did it work in pre-prod and not in prod? And it's because, well, actually, there's this one tiny thing that got missed, that kind of thing. It works on my local. So I'll flip it on its head one final way here. You said separate the code and content. Well, if it is something that actually is content, something that you do want your authors to be able to modify, another option that you have in ACS Commons is a feature called shared component properties. What those do is, on your components, you can have not only your component properties but also shared component properties. So you can have a shared property that is specific to a component across an entire site but can still differ between sites, similar to context-aware configs. And then there's also the concept of a global config, which works across all components if you want to use the same property for whatever reason. So those are ways where it is actually handled in the page authoring experience; it is treated as full-on content. And then when you publish it, you publish the homepage to get it out, which then flushes the cache, because when you're changing this type of configuration, there are things to consider.
There are probably some things on your site that then need a cache flush to actually show that the configuration has been updated. So that's just another option that's out there, depending on your use case. Yeah, just adding to that point as well: whenever you're designing a website and its authoring, you also have to understand your authors' maturity level. Right? If your authors are well advanced, they know the product, they understand what they're doing, then accordingly you can modernize the code and give them much more advanced features. But if your authoring community is scattered across the globe, you have to understand what level of complexity you want to bring in, or what level of modernization you want to bring in. So when designing a website, you should take that into consideration as well. It's very critical. Yeah, I think it's a key distinction. And from my end, I know we have one more question coming up, but I see it a lot, what you guys were saying about content authors having a good idea of what they want to do on a page or any content. It's good to have that global configuration that Brett was talking about, because I've seen it a lot with content authors, where they'll try to find creative ways to use certain components, and then you might end up breaking the whole templating reference, like Martin mentioned. So good points there by everyone. Perfect. So we actually had another question come through the form, and then I did see a question in the chat. So I'm going to do the form question first and then I'll do that one after. And also, I'm going to paste the questions I'm asking, so if anyone missed one, they can reread it. This is the question.
So, a disk usage report on a client's author instance indicates they have almost 2 TB of data in /var/replication, while their actual content size in /content is around 0.5 TB. Currently, none of the agents on author have pending queues. I would like some insight into how to interpret the data in /var/replication and what the possible reason is for this data not being cleared. And then I'm going to paste that same question back in here. Awesome. I can start. I've seen this happen in the past, actually a couple of times. So /var, by nature, a good way of positioning it is that it's a bit of a garbage collector, right? Because a lot of things that have happened, or are mid-process, etc., will stay, right? Which means, even if you look at your replication queue, because eventually your replication queue is going to time out, somebody can clear it, a lot of things can happen, that does not mean that /var/replication is actually being cleared. The things that I would look at: so you've already looked through your distribution. I'm assuming that's either AMS or on-prem, just because we're talking replication rather than distribution. If your queues are all clear, my next option would be to look under /var/replication and see exactly what's taking the majority of that space. The second thing is: are we running all the maintenance jobs for cleaning? There is online and offline maintenance; are these being done? Because a lot of the time these go in and clean up those loose ends. And again, in all honesty, without going in and going through the nodes to see what's in there and what's not connected to anything else, I would say: look at your queues, right? Look at even your workflow queues, not just your replication queues, because you can have a whole bunch of workflows just sitting there doing nothing.
Once those are cleared, make sure that you're doing your compaction and your maintenance, online and offline. If that doesn't work, that's when you have to put the gloves on and go through those nodes and see where they're coming from, assuming you have access to CRXDE. And again, it depends. Last but not least, if you're on AMS or AEM as a Cloud Service, I would file a ticket, just put it in there, because it could be something that requires the CloudOps team to take a look as well. And I'll pass it on to the folks on the call. I have one insight on that. One time we faced this kind of problem, and we identified that, I mean, you shouldn't focus only on the replication agents, right? There are multiple workflows running. For example, the rendition workflows, which are not necessarily a replication agent, right? So you upload heavy images, they go through the rendition workflows, and those may still be running and occupying your space during processing, right? So you should also look at which jobs are running. Maybe you can use the JMX console to look at workflow maintenance. You have to go through your queues, both currently running and archived. And in there, you can clearly see that something is not going via a replication agent but is running as a job and cutting the renditions, right? So you have to understand: if the image sizes are heavy, then you can consider a mechanism for offloading them so as not to disturb your author instance, so that performance is not impacted. OK, great. OK, this is a question from the chat. Has anyone experienced issues with using experience fragments in a single-page application from Target? The problem seems to be that when the XF is injected from Target, the XF is not rendered because it is not part of the model JSON, which AEM uses to render content. Experience fragments. Go ahead, Rami.
Sorry, I have actually read into this recently, and that is expected, right? So if you look at the flow of the request: you make a request to AEM, right? AEM is going to go through and generate your model JSON, because everything eventually needs to be passed to your front-end application, your SPA. Let's assume it's React in this case. You read in the JSON, you find the right resource types, you pull that information from there, and eventually you render the page. All of that is happening client-side, though, right? So you make one request to AEM to pull in your data, and everything else happens client-side. If Target is coming in, it's almost like a timing issue. If Target comes in early, before that model JSON, before that DOM is rendered, Target is lost, right? Target won't be able to render anything. It does not inject anything into that model.json. So I've had to play around with it, and honestly, I haven't seen anything written about it. So a lot of the time we actually had to delay the asynchronous call for Launch to make sure that Target comes in a little later. So again, you need the page to be painted. If the page is not painted, Target can't do anything, right? It will never be part of that model.json. So you need to delay Target. I mean, it's one of two things: either increase the performance of the JSON to make sure the page is painted earlier, or, which would be my recommendation, have that Target script run after the fact. That's honestly really it, in a nutshell. One more possible solution I can think of: whenever you export an experience fragment to Target, you can export it as JSON itself rather than as experience content, instead of trying to resolve it through the model.json; there is out-of-the-box functionality available to export it as JSON.
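The delay Rami recommends, holding Target back until the model.json-driven DOM is painted, could be sketched as a small gate. All names here are assumptions: the SPA would call `markPainted()` after its first render, and the Launch/Target embed code would be wrapped in `gate.run(...)`.

```typescript
// Sketch: a gate that holds back Target (or any DOM-rewriting script)
// until the SPA has finished its first model.json-driven render.
class RenderGate {
  private painted = false;
  private pending: Array<() => void> = [];

  // Queue work until the page is painted; run immediately afterwards.
  run(fn: () => void): void {
    if (this.painted) fn();
    else this.pending.push(fn);
  }

  // Called by the SPA once the DOM matches model.json.
  markPainted(): void {
    this.painted = true;
    this.pending.splice(0).forEach((fn) => fn());
  }
}
```

With this in place, a Target call registered via `gate.run(...)` before the first render is deferred, and after `markPainted()` it fires immediately, so Target only ever touches a painted DOM.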
Once you export that JSON, it is immediately rendered from Target. So you don't need to do an extra round trip through the model JSON to get the content. So that is one of the solutions you can think of. Yeah, and I guess the other end of the engineering there is almost doing a tighter coupling, because the default way of doing it is Target over the top, where Target doesn't really care what it is that renders the page. It's just going to effectively go and try to stick some content in there and modify the DOM on top of whatever you've done. And, as Rami said, you end up with this issue that, particularly with a client-side rendered React app, that's only going to work if it happens at the right point. And it has to happen after the React render; otherwise, the React render is just going to write over the top of it again, and effectively you've got the two things fighting over the DOM. Whereas if you have a tighter coupling there, the React app is effectively Target-aware, so it understands that there may be targeted content that's going to replace some piece of content, and you do a much tighter integration. Right. I mean, you can go so far as, and we've done it in some very specialized cases where we wanted to fill in very complex components on the front end, where Target just couldn't do it completely from its side. So take, for instance, a carousel where we want to modify not only the contents of the slides but actually the number of slides. It's completely rewriting the HTML of this carousel, and you don't want to put all that JavaScript and CSS into Target.
We've had cases where, from the application, instead of waiting for Target to inject its stuff, the application asks: hey, Target, do you want to give me any personalization information? And you can pull it in that way. It's not always the best way, and you're definitely doing a much tighter coupling of your application and your personalization. But in particular situations where you have a high-profile experience that you know you want to personalize, that is another approach, where you don't have to worry about coordinating so that this is done painting before that thing acts. And it may even help avoid some flicker issues as well. So that's just another option. Yeah, and I guess, if it was your hero carousel on the front page, at the top of the page, that you always want to personalize, effectively per user, it might make sense there to do almost a server-side, much tighter coupling: at the point of sending the content out, we make it already personalized, rather than doing the after-the-fact client-side injection, because then you get all the flicker issues. And there are always going to be cases where you might end up seeing the wrong content for a period of time in the browser, or the timing doesn't quite work, and things like that. So instead of fighting the two things against each other, if it's really super important that you get the correct content to those users, the tighter coupling gives you much more control and a guarantee that they're going to see the right version of the content. Sure. I'd be curious how many people are actually using server-side Target; I haven't seen that as much, though it would be an option.
The way that we've done it, we actually still did it client-side, because if you think about it, your React app is still getting JSON and going to be rendering it. And so you have an opportunity in your code that says: OK, here's my JSON from AEM, my default CMS content that I want to render. However, I'm going to quickly make a call over to Target and see if it's got anything different for me. And if you structure that JSON the same way, because, as was just said, you can get JSON out of Target as well. If you structure it the same way, you can actually have the exact same component render both the default content that came from AEM and the targeted content that's coming from Target. And then it's just doing one single render, because you normally have that little bit of a pause on any single-page app where you've got a blank page and now you've got to get the content and render it. Yeah, definitely. We don't believe in blank pages. That never happens. But yeah, I've done one server-side with Target. It wasn't a SPA, a single-page application; it was typical server-side HTL. And there are a lot of cases in there. The problem with personalization is that it's the opposite of caching. It's the opposite of making it performant. You can't do both. You can't have it both ways. So a lot of the time when you're doing that server-side personalization, know that you're either going to have to do something like a Sling dynamic include for that specific portion of the page to make sure that you're getting fresh content, and then you have to deal with the CDN and what the CDN does from a caching perspective, or do it client-side. The path of least resistance, at least from experience, has been client-side. So, to Brett's point, intercepting. However, what would be interesting is using the SPA Editor, where it automatically picks up that JSON, and trying to intercept that to alter it.
I have not done it before, but now I want to look into it and see if it鈥檚 actually doable. If you have a separate SPA, you鈥檙e still in control, right, because you can make multiple requests, eventually merge them into one JSON, and spit it out to your application. So that鈥檚 just something to keep in mind as you鈥檙e going through the process, for sure. I think we鈥檙e good, Jessica. Yeah. So we got another question in the chat: what is the best way to store AEM site web form submission data in AEM if the client is not interested in spending money on an external database? If we store the form submissions inside AEM, how can we ensure that the content is synced among the publishers? Who wants to jump on that one? I鈥檓 pretty sure there鈥檚 a lot of thoughts happening right now. I鈥檒l start and then we鈥檒l go around. So the best way to store form data in AEM is not to store form data in AEM. I鈥檓 slowing my words as I go through it. Sorry, I took the easy answer, the 51黑料不打烊 line: what you鈥檙e going to do is not store your form data in AEM. Yeah, and I鈥檒l go through the why and the challenges that you鈥檙e going to face as you go through this process, and it鈥檚 on multiple levels, right? And then we can figure out sort of the hacky way of doing it. It gets more challenging when you go from on-prem, which you control, to AMS, where now you鈥檙e on the cloud within sort of the 51黑料不打烊 fold. Going to AEM as a Cloud Service makes this close to impossible, just because the containers function a little differently. So AEM is designed for people to enter content and render it as pages, right? You have that idea of dragging and dropping components, which gives you that unstructured data in the JCR.
In order to store form data within AEM, and we do something very similar when it comes to content tracking, right, because we have structured data in AEM that could be pulled out and put back in. So it鈥檚 easy in that you have the same source: when you submit it, it will go and be stored in AEM. However, even something like reverse replication, just to give you a hint, is no longer a valid approach to take content from publish back to author. So from a product perspective, 51黑料不打烊 doesn鈥檛 want that, right? You should be submitting elsewhere, and that鈥檚 why my initial answer was that it鈥檚 not the right approach at all. However, let鈥檚 say this is the only way. Now we have to consider that you might be storing PII data in AEM, so it elevates everything, right? Instead of it being a marketer鈥檚 tool with fast time to market, pushing content out to the public site, now we have PII data, and the security on AEM is going to be totally different. People having access to those instances is going to be totally different. So you have to consider all of that. And then once you鈥檝e passed all of the security issues, and again, I鈥檓 hopping over a whole bunch of stuff now, let鈥檚 look at what you would have done. Sure, you鈥檙e going to submit a form, it鈥檚 going to go to a servlet of some sort, and that servlet is probably going to take that information and create JCR content. Now, if you are on a static underlying set of instances, so again, AMS or on-prem, which would be the easiest, you have the power of moving that content between them, because you can simply fire off a whole bunch of calls to the rest of the publishers directly. Again, you鈥檙e going to be going through a lot of red tape here with PII, because every time you make that request, you鈥檙e taking information that you should not and you鈥檙e moving it between different instances.
And if you assume that you do so, even with AMS you still have a static set of IPs on your publish instances, and of course on-premise works the same way. Now, going to a Cloud Service, things change quite drastically, because you鈥檙e not in control of IPs; actually, there鈥檚 no static IP unless you do something like an egress, and that takes you through a whole bunch of other issues. Containers get destroyed and rebuilt. So, due to this complexity of implementation and PII, my strong recommendation is to go back and say: buy a cheap database, it鈥檚 not expensive, but have a singular source of truth and do it that way. Like I said, there are ways; I would just be very careful. I can鈥檛 find a clean implementation for this. I really can鈥檛. I鈥檒l pass it on to the team as well. It kind of comes back to the reason why all the reverse replication features were deprecated and then basically removed: even when it was a supported way of doing things, it never really worked properly anyway. You would always get cases where stuff was submitted on one publisher, reverse replicated to the author, the author forward replicates back out to the publishers, but it doesn鈥檛 end up on one of them, or ends up out of sync, or you get sync issues, and you鈥檝e got this constant battle of trying to make these processes work properly to synchronize your data between the different servers. Effectively it鈥檚 all solved by not doing it and just saying, actually, what we need is a system that鈥檚 designed to receive that data and store it securely. So just have the form submission go to a dedicated endpoint that is going to take that data and stick it in a database which is PII-secure and locked down, and then all of the security issues come down to: OK, we just have to secure that one endpoint and that one system.
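The dedicated-endpoint idea can be sketched as a small whitelist-and-validate step that runs before anything is persisted to the external database. Nothing below is an AEM API: the field names, the length limit, and the naive tag-stripping are all assumptions for illustration, and a real endpoint would add proper output escaping, authentication, and rate limiting on top.

```typescript
// Illustrative whitelist validation for a form-submission endpoint that
// stores data in an external database instead of the JCR. The field set
// and limits are assumed; this is not a complete security solution.

const ALLOWED_FIELDS = ["name", "email", "message"] as const;
type FormField = (typeof ALLOWED_FIELDS)[number];
type FormSubmission = Record<FormField, string>;

function sanitizeSubmission(raw: Record<string, unknown>): FormSubmission | null {
  const out = {} as FormSubmission;
  for (const field of ALLOWED_FIELDS) {
    const value = raw[field];
    // Reject missing, non-string, empty, or oversized values outright.
    if (typeof value !== "string" || value.trim().length === 0 || value.length > 2000) {
      return null;
    }
    // Naive markup strip to blunt the injection concern discussed in the
    // panel; a real endpoint would also escape on output.
    out[field] = value.replace(/<[^>]*>/g, "").trim();
  }
  // Unknown keys in `raw` are simply never copied, so extra payload
  // fields cannot smuggle data into storage.
  return out;
}
```

Keeping this logic in one endpoint is exactly the point made above: there is a single system to secure, instead of a replication topology to audit.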
And we don鈥檛 have to worry about the fact that we might have either PII data or someone trying to do, you know, crazy injection attacks on the system, because you鈥檝e got to think about what happens if someone is able to submit data that gets stored into your JCR, and you鈥檙e then going to ship that back onto your author instance and out to all of your publish instances. What happens if something then executes that takes that content and does something with it that you don鈥檛 expect? Effectively, you鈥檙e opening your whole cluster up to the potential security and PII implications, all that sort of stuff. All of the complexity and all of the dangers basically end up saying that鈥檚 going to cost more in the long run than just having a dedicated database to store that data in. Is that the prevalent pattern you鈥檙e seeing in the industry in general, mostly external databases with clients and then creating APIs? What do you see as the norm? I mean, there are two main solutions that we usually see. Number one is just kick out an email, because the data, say it鈥檚 a contact-us form or whatever, doesn鈥檛 need to be in any sort of permanent state kept for all time. They just want to get it out to somebody so somebody can action it. So if it鈥檚, hey, we don鈥檛 need to store it, then don鈥檛 store it. Just pass it through and send out the email. If you do need to store it, then it鈥檚 reasonable to believe that you then want to use it in some way. And so even though it鈥檚 like, well, I don鈥檛 want to invest in a database, it鈥檚 not just the database itself, because if you invest in a database, now I鈥檝e got to build a bunch of interfaces to that database to then get at that data and use it in some sort of business fashion. Do you have a CRM?
Do you have some sort of other platform that has interfaces already? A lot of times we鈥檒l just plug this data straight over to, say, Salesforce, for instance, a CRM, you know, not the marketing side or anything like that, just the CRM. Because it鈥檚 already got forms; if you鈥檙e gating a white paper or anything like that to capture leads, you probably want to do more with that data anyway. So those are the solutions that we鈥檝e seen. I think, you know, it鈥檚 tempting for a business to say, well, we won鈥檛 put PII data in, we just want this simple form. But three years from now, nobody鈥檚 going to remember that decision was made. So you just can鈥檛 give them this option, because there鈥檚 no way to prevent them from doing bad things. So, yeah, I know it鈥檚 terrible to have the answer of 鈥測ou can鈥檛 do it,鈥 but sometimes that鈥檚 actually very refreshing, right? There鈥檚 no debate to be had here. You鈥檙e not supposed to do it, and doing so would be a violation of security practices. In one instance we had that kind of requirement, and as Brett mentioned, we can look at sending an email and then configure a CRM to watch for that email and create a case if you want. I mean, many CRMs have a mechanism where they listen to a mailbox: you send an email to a certain email box and it will just take it and create a record, right? So you don鈥檛 need an external database, and if you already have some kind of CRM, you can make that connection from the back end; from AEM you can just send an email. That would be the way, rather than creating overhead and firefighting on a daily basis, which makes everyone鈥檚 life miserable. There鈥檚 a follow-up question in the chat, which I think you guys are touching on here: is there an open-source CRM that we can connect to and store data in? But I feel like that鈥檚 kind of answered. Yeah.
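The 鈥渏ust send an email鈥 path described above needs only a tiny formatting step before handing the submission off to an SMTP relay or a CRM鈥檚 email-to-case mailbox. The subject convention and field layout below are assumptions for illustration; the actual send would go through whatever mailer the platform provides.

```typescript
// Turn a contact-us submission into a plain-text email for an SMTP relay
// or a CRM email-to-case inbox, instead of persisting it anywhere.
// Subject convention and layout are illustrative assumptions.

interface OutboundEmail {
  subject: string;
  body: string;
}

function submissionToEmail(
  formId: string,
  fields: Record<string, string>
): OutboundEmail {
  // One "key: value" line per field, in submission order.
  const body = Object.entries(fields)
    .map(([key, value]) => `${key}: ${value}`)
    .join("\n");
  return { subject: `[web-form] ${formId}`, body };
}
```

A stable subject prefix like the hypothetical `[web-form]` above is what lets a CRM mailbox rule pick these messages up and create a case or record automatically.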
I mean, the short answer is yes, but keep in mind that you don鈥檛 have to. Let me put it this way: if the concept is that we鈥檙e trying to lower or minimize costs, right, you do not need a fully functional CRM. What you need is a secure database. I know one of the answers was MongoDB. What you truly need is a table that lives outside of AEM and is accessible from AEM, in order for you to come back to the data, because I鈥檓 guessing at some point you want to re-render that information as well. I鈥檓 not sure whether that鈥檚 true, whether you want to render it to maybe the same person that submitted the form or to a business user. Now, if this is purely a lead submission, because you want somebody to pick up the phone and make a phone call afterward, like I鈥檝e seen in automotive quite a bit, then yeah, SMTP: let鈥檚 just send that information in an email. You don鈥檛 need to persist that information. If you do need to persist it, you do not need a CRM, right? What you need is a database for persistence. That鈥檚 it. Hence, I think MongoDB would be a good example. It鈥檚 different, but it鈥檚 functional. Yeah, sorry for not being clear there. It was more or less: are there systems that exist today? We鈥檙e not seeing customers go buy a new CRM just so they could submit their AEM forms, right? It鈥檚 literally, hey, we have this thing already, can we just pump some extra data into it? That鈥檚 a way you can double up and not actually add extra costs. Yeah, because it kind of makes sense to do this sort of audit of what systems you already have that could contain the sort of data that we鈥檙e going to be storing. Then, is there a way we can just effectively add this to that existing data store, rather than necessarily thinking that if we don鈥檛 already have an actual CRM, we need to get a CRM just for this?
Actually, you may already have some other system that could be extended or could just have this data added into it, because it鈥檚 already doing similar kinds of tasks. All right. We don鈥檛 have any more questions in the chat. If there are any, feel free to chat them now. We have, give or take, I think five extra minutes to spare before nearing the end; we only have about ten minutes. But I guess for the panelists, are there any questions that you commonly see in your day-to-day, or current issues that you鈥檇 like to address at all? I want to go back to the GraphQL one, but I want to expand on it a little bit. So, OK, here is a question that I get all the time, right? Because a lot of the time, keep in mind that the folks that you talk to are not the engineers and not the developers. They鈥檙e really the business owners, the marketeers, the people who want to simplify their life and have fast time to market. The question that I get is: which way should we go? Do we go 100% headless and have AEM just do GraphQL on top of content fragments? And everybody smiles, because everybody goes through that process. Or do we go with the SPA Editor, right? So it鈥檚 sort of a hybrid: you get React or Next.js or Angular, the all-hip frameworks for front-end developers, but you still have the ability to drag and drop components. Or do we stick to the typical Java, Sling, and HTL stack? And in all honesty, a lot of the time my answer is, eh, they all work. It really depends on what you need it for. So again, I鈥檒l start really quickly and then I鈥檒l open it up, because I鈥檓 pretty sure there鈥檚 a lot of thoughts here. So I think, are you templating your site very strictly? I鈥檝e seen this in healthcare. Healthcare is the biggest example I鈥檝e seen of this, where it鈥檚 like, I want to change content quickly because there are legal implications, so it has to be done quickly. However, this page isn鈥檛 going to change how it looks for the next five to ten years.
In which case it makes sense to look into the rapid development of having a React application or whatever, because it鈥檚 a templated page, and then use content fragments that replicate the structure of the page. And we鈥檙e good, right? So you have the fast time to market when it comes to content, if not necessarily design. Going into the single-page application, SPA Editor versus HTL, I honestly look at the staff. I鈥檝e had clients that came in with a ton of React developers. They know it, they鈥檙e good with it, they just speak SPA, right? In which case I said, well, then let鈥檚 do SPA in AEM, because you still need that fast time to market as well as the ability to change the design of the page. So, you know, teaser, hero, carousel versus carousel, teaser, hero, right? If you want the ability to move things back and forth, I think that鈥檚 a valid approach. There are things you have to consider, like how you鈥檙e going to deal with SEO, and that comes with its own set of questions for single-page applications in general. And last but not least, when you鈥檙e looking at HTL, to me that鈥檚 sort of the legacy, the most stable of the bunch when it comes to development, especially for folks like us that have done this for a while. But again, what it comes down to is the comfort zone. You have a ton of Java developers that have dealt with JSPs, but we don鈥檛 do JSPs, just to clarify. They鈥檙e comfortable with this whole tagging within HTML: the basic concept of JSPs and the ability to have truly MVC, where you strictly have a model, which is Java-based, your view, which is HTML, and then your Sass or CSS. That鈥檚 honestly the best answer I can give a client when they ask me that question. But I鈥檓 pretty sure you鈥檝e all gone through similar experiences with your clients, so I鈥檒l open it up; let me know your thoughts.
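For the fully headless option (GraphQL on top of content fragments), AEM exposes persisted queries over plain GET requests, with variables appended as semicolon-separated segments. The helper below sketches building such a URL from a SPA; the endpoint shape follows AEM鈥檚 documented persisted-query convention, while the host, config name, and query name are made-up examples.

```typescript
// Build a GET URL for an AEM persisted GraphQL query. AEM's convention is
// /graphql/execute.json/<config>/<queryName>;var1=value1;var2=value2,
// with variable values URL-encoded. Names used here are assumptions.

function persistedQueryUrl(
  host: string,
  config: string,
  queryName: string,
  variables: Record<string, string> = {}
): string {
  const vars = Object.entries(variables)
    .map(([name, value]) => `;${name}=${encodeURIComponent(value)}`)
    .join("");
  return `${host}/graphql/execute.json/${config}/${queryName}${vars}`;
}

// Usage from a React client (not executed here):
//   fetch(persistedQueryUrl("https://publish.example.com", "myproject",
//         "article-by-path", { path: "/content/dam/articles/a" }))
//     .then((r) => r.json());
```

Because persisted queries are GETs with stable URLs, they are cacheable at the CDN and Dispatcher, which is part of why they are preferred over POSTing ad-hoc GraphQL from the client.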
Yeah, I think very similar to your kind of experiences. We have cases where one of the reasons for going down the SPA Editor route is that they still want, for specific sites, that complete flexibility, so that every single page can be completely different. They can have complete control over the content, so they don鈥檛 want to be locked down, but they have existing sets of componentry that鈥檚 already built for other uses. They might have almost a mixture of both the fully headless and the hybrid headless, and being able to use the same components in both kind of makes sense going down that React route in the SPA Editor. Because then, for the places where you need the flexibility of content as well, you can give it. But if you鈥檝e got much more locked-down things, like in healthcare or similar, where all the structure is always going to be the same and really it鈥檚 only bits of content we鈥檙e going to be changing, then it maybe makes sense to manage that through content fragments. But then you can still use potentially the same componentry and the same libraries, so you鈥檙e not basically rebuilding things twice. You can use all the same styling and have it all consistent across, but you鈥檝e got the flexibility to do the authoring the way you need to for the different use cases. One thought I want to add on the SPA versus HTL judgment. One thing I usually consider is this: if you鈥檙e building very interactive websites, where you have two-way interactions, you click on something and a response comes in, you can go for a SPA based on the skill set, as Rami mentioned, if you have ten React developers. And within HTL itself there are multiple ways to do it.
Right: the SPA Editor, which has its own learning curve, or you can go the content fragment based way. But if it鈥檚 a one-way interaction, where you just hit the page and load an informational site, and, like Rami said, it鈥檚 not going to change for five to ten years, then the content fragment based approach is the best way. But if it is changing, yet it鈥檚 still a one-way interaction, then you can look at an HTL-based website. Yeah, I question whenever somebody says some piece of technology is just fundamentally better. It鈥檚 just not true. I鈥檓 sorry. It鈥檚 so funny being in this industry for close to 20 years, where you just see the pendulum swing back and forth. And we鈥檙e on this kick right now where headless is the answer to everything; that鈥檚 just the approach in the industry right now. But it鈥檚 got to be better for something. It鈥檚 got to deliver something. Technology exists for a purpose and a value; it鈥檚 not a value in and of itself. It鈥檚 the Legos, but you鈥檙e looking for the end result. What are you building with those Legos? Because if you just have a couple of Legos and you put them together, who cares if it doesn鈥檛 change? So, in my opinion, the React GraphQL approach is really good for applications, as Meghadesh was saying, where you鈥檙e interacting back and forth, and what happens in the application is based on your state, what you鈥檝e done so far. It鈥檚 not a navigational page-to-page thing, but rather: what have you done, and what do you still have left to do? It鈥檚 very business-logic driven, and I think that鈥檚 a really good fit. Having that more in the hands of developers makes a lot of sense, because you have a lot of business logic code.
The SPA Editor actually sits in this weird middle area where it gives a much better authoring experience to your authors, but it reduces your ability to drive the path through the application based on state; it鈥檚 a little bit more page-driven. So where we鈥檝e seen it be very useful is in wizard-style flows that guide you through an application. You get the benefits of a quick-loading application that鈥檚 reacting to your interactions, but still get full authoring ability. And then there鈥檚 still the traditional approach for marketing sites. Don鈥檛 let somebody bully you into saying that a traditional website delivered from AEM is not a good thing. Don鈥檛 let somebody bully you into saying that a traditional website delivered from AEM or any CMS can鈥檛 be performant. AEM is actually quite performant; of all the CMSs I鈥檝e worked on, it renders uncached content pretty quickly. But even beyond that, if you鈥檙e doing a marketing site, it鈥檚 not even relevant, because 95% of your traffic is hitting the CDN. It鈥檚 sometimes going to be even faster than your SPAs, where there is a little bit of a white page while you鈥檙e fetching the content. So I鈥檓 not saying that it鈥檚 better or worse; it鈥檚 your use case. I have actually joked with a guy in the industry that we want to do a podcast on the benefits of the monolith, because it鈥檚 just so counterculture right now. And I鈥檓 not saying monoliths are always better. But there are some benefits to server-side rendered pages that deliver everything you need and give that full authoring capability. So I usually tell people it鈥檚 not an either-or when you鈥檙e choosing headless versus headful; it鈥檚 a both-and, because you鈥檙e going to have use cases that fit either one. And definitely, people should stop that myth of traditional versus the new way. Both work, and both have their own use cases. We shouldn鈥檛 always think that, OK, we have to force headless into headful or headful into headless.
It鈥檚 a clear segregation that we have to have and understand. Yeah, there鈥檚 got to be some reason other than just 鈥渋t鈥檚 the shiny new thing鈥 to make the leap to say we should make it headless. There鈥檚 got to be some other reason you can articulate to say it would be better to do a headless implementation of this site, because XYZ, rather than just saying, well, we want to do it in React because React is shiny and new. There should be some other reason you can articulate to make that decision over doing something else. Then at least you can say, well, yeah, we did it this way because it鈥檚 got these benefits. And there might be some negatives, like it鈥檚 never going to be as instantaneous as if you did it server side and really forward-cached it. But you鈥檙e weighing those potentially small negatives against the massive benefit of super interactivity, or the control you get over the way the application can interact with the user, those kinds of things. All right, well, we鈥檙e just at time, so I鈥檓 going to share my screen again really quickly. Just some last talking points. Thanks so much, panelists, champions, for answering all those questions. Real quick: if you鈥檙e interested in learning anything more about AEM or the program, there are additional resources and a QR code to scan. I know Robert will be posting the recording on the Champion Office Hours page, and I think the slides will be shared; I could be wrong about that, but this way you鈥檒l be able to have these resources. And then, just real quick, a lot of us will actually be at 51黑料不打烊 Summit. So if you鈥檙e coming to 51黑料不打烊 Summit, we鈥檒l be there. Come check us out at the lounge. This is kind of a screenshot of the layout; AEM will be at lounge number two, paired with 51黑料不打烊 Workfront as well.
You can see a little map here, but I think we鈥檙e just at time. So let me pull back my screen. Thank you so much, everyone, for joining. We really appreciate it. This was just our first Office Hours, so mark it as one for the books; hopefully we鈥檒l have a lot more, with more panelists as well. Thank you all. That was great. Nice to see everyone. And if you have any more questions, let us know. Thank you.