In this session, the Solace PubSub+ Cloud engineering team demonstrates how they’ve leveraged EDA to solve real-world problems while building Solace PubSub+ Cloud. Some of the problems they have helped solve are:
– Proxying REST APIs across microservices
– Implementing the CQRS pattern with MySQL and Elasticsearch
– Building highly available microservices with both leader election and horizontal scaling implementations
– Building a highly distributed control plane


As Solace’s vice president of Cloud and DevX, Ali Pourshahid leads Solace Cloud’s product and engineering teams. Throughout his career, Ali has been leading the development of large-scale on-premises and cloud-native products, and has built several engineering teams from the ground up. Ali is passionate about building high quality cloud products to help customers unlock their business opportunities.

Julian Setiawan leads the architecture team for Solace Cloud. He has worked on Solace Cloud since its inception, first as a developer, then as an architect, and now guides its overall technical direction. Julian is a craftsperson at heart and always has his hands on code or his nose in the latest blogs and developments.

James Ellwood is a Principal Engineer in Solace Cloud’s event mesh group. Calling upon his experience in building complex software systems, he is passionate about building cloud-native solutions to orchestrate and manage the PubSub+ Event Broker for Solace Cloud customers.

Kevin Lidstone is the Software Development Director of Solace's PubSub+ Event Portal product. He is a full-stack developer who loves taking on the whole spectrum of the software stack, from building React-based front-ends, to creating event-driven architectures for microservices, to designing high-performing DB schemas and access layers.

Alex Pulbere is the Software Development Director of Solace Cloud's event mesh group. His experience building analytics and dashboarding SaaS applications gave him a deep appreciation for data collection and exploration. He's passionate about building development teams, cloud-native SaaS applications, and system design and architecture.

Speakers

Alex Pulbere

Software Development Director

Solace

Julian Setiawan

Architect, Team Lead

Solace

Alireza Pourshahid

VP Cloud and DevX

Solace

James Ellwood

Principal Engineer

Solace

Kevin Lidstone

Software Development Director

Solace

Transcript

Ali: Hi everyone, this is Ali Pourshahid. Happy to have you here, and happy to be hosting this panel today with a great group of senior technical leaders here at Solace Cloud. I'm VP of Cloud and Developer Experience, responsible for product, engineering, and operations, as well as the user experience of Solace Cloud's line of products.

Let me do a round of introductions before we start. Julian, go ahead.

Julian: Hi, my name is Julian Setiawan. I'm the Lead Architect for Solace Cloud. I've been with this product since the beginning, first as a dev and now as an architect. I'll be talking a bit about how we use EDA with our microservices.

Ali: Alex.

Alex: Hi, my name is Alex Pulbere. I'm the Development Director for the Event Mesh group, and we're responsible for the Mission Control product and PubSub+ Insights. Today, I'm going to be talking about a use case for notification dispatching that uses our broker.

Ali: Kevin.

Kevin: Hi, I'm Kevin Lidstone, Software Development Director of the Event Portal in Solace Cloud, but I've worked on all the different areas of Solace Cloud over the years. Today I'll be talking about EDA patterns.

Ali: James.

James: Hi, my name is James Ellwood. I'm a Principal Engineer at Solace Cloud, and I focus on how we orchestrate the PubSub+ event broker, and how we do that in Kubernetes and other environments. I will be talking about how we make that orchestration performant and resilient.

Ali: Awesome. Thanks, everyone, for joining me. Let's roll. First, I'll kick things off by showing you a helicopter view of our architecture, the Solace Cloud architecture. The message I really want to pass along is that EDA is a strategic choice when it comes to architecture and building cloud-native applications. You'll see a lot of different patterns and how we solved the problems we've run into along the way, but at the core of it all are the choices we made early on when we started building this product.

And if you look at our high-level architecture, it really has two big elements that I'd like to call out. One is everything related to our cloud console. This is our UI, the console for the control plane. The whole idea behind it is that, as a SaaS user, you can log in to this console and use the different services and applications we provide for you.

The whole architecture is microservices-based: for each of our services or applications, we have a set of microservices. It follows domain-driven thinking, as it should for a microservices architecture. But in addition to that, another big tenet of our architecture is that it is event-driven.

The other element of this architecture is what we call event broker regions. This is where we deploy the services that we provide to our customers. And the beauty of these regions is that they can be deployed anywhere, whether that's a customer's on-premises data centre or any cloud account they want to deploy to.

And these two pieces communicate through events. Again, the beauty of this architecture and the way we've designed it is that these satellite regions, the regions through which we deliver services to customers, are very malleable; they can adhere to our customers' requirements. If you look at the architecture, you can see that we have an event broker sitting between all of our microservices: on the left side, you have all these microservices communicating through these event broker services. This principle, this way of thinking, has helped us solve different problems throughout the years, and it has had several benefits for us. When it comes to microservices and EDA in general, loosely coupled microservices and independent thinking are among the most important.

Also, because we have these brokers sitting in the middle, it has created a runway for us from a scalability point of view, because the broker acts as a shock absorber between our microservices. So we have a bigger runway when it comes to bursts of events, and that translates directly to savings in infrastructure cost. And from a scalability point of view, our architecture is always ready to be scaled and taken to the next level.

So, with that high-level introduction to our architecture, we're going to hand it to Julian to talk about EDA microservices and some of the problems we've solved over the years. Julian.

Julian: Thanks, Ali. One of the first things I want to talk about is what we call our REST Proxy. It's an API gateway that uses the Solace PubSub+ Event Broker as the backbone for proxying REST requests to the different microservices.

So, why did we build this? We used to use Zuul and Eureka for REST requests, and we always used the broker for inter-service communication. As we were developing, we realized that developing internal APIs was pretty easy, because it's decoupled by nature: just connect your dev machine to the broker, and you start getting events.

But for REST APIs, it was a bit of a slog. First we had to add new endpoint mappings, then we had to expose our laptop to the gateway, and then we had to register with service discovery, just to test some REST endpoints. And we thought: can we just use the broker for both? So the way we built it was with libraries around Spring Boot REST controllers and request mappings.

When your microservice starts up, it looks at all the different request mappings and advertises them to the REST Proxy. The REST Proxy keeps track of which REST paths map to which topics it should send events to. When a request comes in over HTTP, the proxy wraps it up in an event and sends it through the broker; any microservice subscribed to that topic will pick it up and do its work. When it responds, the response is intercepted by the REST Proxy libraries, wrapped again in an event, and sent back to the REST Proxy, which looks up a correlation ID to find out which client actually made the request and returns the response over HTTP.

And the nice thing about extending REST controllers and request mappings is that developers just develop their endpoints like any other REST controller. They can use cURL while developing, and all the usual REST tooling to try things out. And when they deploy into their cluster, it just works seamlessly with this setup.
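
To make that developer experience concrete, here is a minimal sketch of what a service-side endpoint might look like under this setup, assuming the proxy libraries hook into Spring's standard request-mapping machinery as described; UserController and UserDto are hypothetical names, not Solace's actual classes:

```java
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/v1/users")
public class UserController {

    // Advertised to the REST Proxy at startup; at runtime the request arrives
    // wrapped in an event over the broker, but the handler neither knows nor cares.
    @GetMapping("/{id}")
    public UserDto getUser(@PathVariable String id) {
        return new UserDto(id, "example");
    }

    @PostMapping
    public UserDto createUser(@RequestBody UserDto user) {
        return user;
    }

    // Hypothetical payload type, for illustration only.
    record UserDto(String id, String name) {}
}
```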

Some of the pros of this solution are that we got a lot of stuff for free. First, there was the hybrid cloud, like we talked about at the start. It lets us deploy our microservices anywhere, really; we just need a connection to the broker. There are no complex network diagrams, just a single TCP connection back out.

And this really helped in development, because developers could just connect their laptops to these different topics and start acting as another instance in the cluster. So in staging, if we had a misbehaving microservice, we could take it offline, connect our laptops, and debug locally.

Service discovery also comes for free. Just by subscribing to a topic, you're implicitly discovered by the broker, and the broker will start sending events to you. And, finally, there's load balancing. We use a feature called shared subscriptions on the broker, which tells the broker to distribute the events across all the different instances.

So you automatically get round-robin delivery by connecting this way. One drawback of the solution is that it is homegrown, which means we have to build everything from scratch. So we are missing a few common API gateway features, like circuit breakers. But the nice thing is that the broker gives us opportunities to expand this solution in really novel ways.

For example, a REST path looks a lot like a topic hierarchy in Solace. Right now, all of the REST paths for a certain cluster go to a single topic, due to some historical limitations of our implementation. But you could imagine creating a topic hierarchy that looks exactly like the REST path.

So, if you care about all the users, you could subscribe to api/v1/users/>. But if you only care about one specific user, you could just subscribe to api/v1/users/{id}/>.
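
A rough sketch of what a consuming instance could look like with both of those ideas combined, using Solace's JCSMP Java API; the host, VPN, group name, and topic string are illustrative. The #share/<group>/ prefix is Solace's shared-subscription syntax, which is what makes the broker round-robin matching events across every instance in the group:

```java
import com.solacesystems.jcsmp.*;

public class UserEventsConsumer {
    public static void main(String[] args) throws JCSMPException {
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, "tcp://localhost:55555"); // placeholder
        props.setProperty(JCSMPProperties.VPN_NAME, "default");
        props.setProperty(JCSMPProperties.USERNAME, "default");
        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();

        XMLMessageConsumer consumer = session.getMessageConsumer(new XMLMessageListener() {
            @Override public void onReceive(BytesXMLMessage msg) {
                System.out.println("Got event on " + msg.getDestination());
            }
            @Override public void onException(JCSMPException e) {
                e.printStackTrace();
            }
        });

        // ">" matches one or more trailing topic levels, so this sees every user
        // event; the broker load-balances across the "rest-proxy" group.
        session.addSubscription(JCSMPFactory.onlyInstance()
                .createTopic("#share/rest-proxy/api/v1/users/>"));
        consumer.start();
    }
}
```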

Ali: Great, Julian. Thank you. I really love the idea of making developers' lives simpler, as well as getting rid of service discovery and solving that problem without adding more operational burden.

But maybe for those curious architects who might be thinking right now: well, when you have a hammer, everything looks like a nail. Why use the broker? Tell me a little bit more. Why not an API gateway? Would it make sense, at some point, to switch this architecture to use an API gateway?

Julian: Well, we have looked at other solutions like that, and really, the problems that were hardest for us were the ones the broker solved.

And it also opens us up to new features, like what I mentioned about topic hierarchies, that you could probably wrangle together with API gateways, but it does get quite complex. In the end, though, we could imagine ourselves eventually offering async APIs instead of REST APIs. Right now, a lot of users still expect REST and a lot of tooling still expects REST, so we continue to provide them. But we envision a future where you don't need to poll for any of these responses; you just subscribe to a topic and we'll give them to you.

Ali: That's really cool. So, with one set of infrastructure and the same architecture, you can offer both REST APIs and, eventually, async APIs, which I believe Kevin will touch on at the end of the panel. So, why don't we move to the next topic, leader election? Tell me more about that.

Julian: Yeah, so one of the nice things about having the Solace broker at the heart of our architecture is that it's a very mature product with a ton of features built to solve really complex use cases. So we can simplify some of our architecture by just using the broker that we already have and are all connected to.

One issue we ran into when we were clustering our microservices was that we had a bunch of scheduled tasks. These are methods that just run on a schedule, and when you're clustered, you're going to get a lot of duplication, because the instances aren't aware of the other instances also running these scheduled tasks.

So we solved this using an Exclusive Queue on the Solace broker, but probably not in the way you're imagining. As background, an Exclusive Queue is a type of queue where all the consumers are sorted into active and standby. You only have one active consumer, and the rest are on standby. This is mostly used in cases where ordering is very important.

And the nice thing is that the broker actually offers an API for knowing who is active and who is on standby. So what we did is connect all the instances to the same Exclusive Queue, and only the active one actually executes any of the scheduled code.

All the other ones just sit around until they become active, which happens when the active one eventually disconnects. What we liked about this was that it was a very simple implementation; there were no new tools or frameworks, we reused everything we already had. The downside is that we have the overhead of a queue.

We have to create this queue that isn't really intended for any messages; we're using it almost for a side effect, not for a queue's intended purpose, and that can get messy. And, finally, it's not really distributed. This is a leader-follower setup. But that was a trade-off we were okay with, because when we started with single instances, one instance was already doing all of the scheduled work.

So we figured that when we do clustering, it should still be okay. The nice thing about this setup is that it's easily extensible into a distributed scheduler: if you switch the queue to a non-exclusive type, which distributes messages across all consumers, and you start publishing scheduled events through that non-exclusive queue, you end up getting the distribution that is a con in the current solution.
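
Here is a minimal sketch of the trick Julian describes, using JCSMP's active flow indication against a pre-provisioned exclusive queue; the queue name and class are illustrative:

```java
import com.solacesystems.jcsmp.*;

public class LeaderElectedScheduler {
    private volatile boolean leader = false;

    public void start(JCSMPSession session) throws JCSMPException {
        ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
        flowProps.setEndpoint(JCSMPFactory.onlyInstance().createQueue("scheduler-leader"));
        // Ask the broker to tell us when this flow becomes the active (exclusive) consumer.
        flowProps.setActiveFlowIndication(true);

        FlowReceiver flow = session.createFlow(
                null,      // no message listener: the queue is never expected to carry messages
                flowProps,
                null,      // default endpoint properties
                (source, event) ->  // fires on FLOW_ACTIVE / FLOW_INACTIVE
                        leader = (event.getEvent() == FlowEvent.FLOW_ACTIVE));
        flow.start();
    }

    // Guard inside every scheduled task: only the active consumer does the work.
    public void runScheduledTask() {
        if (!leader) return;
        // ... do the scheduled work ...
    }
}
```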

Ali: This is an interesting one, right? Being pragmatic, using the tools you have in your toolbox to solve the problem you have. It's a classic example of architectural trade-offs and choosing the right solution. Why don't we go to Kevin next? He's going to talk about a couple of EDA patterns. Kevin.

Kevin: Thanks, Ali. So, the two patterns I'm talking about today are CQRS and the outbox pattern. CQRS stands for Command Query Responsibility Segregation, and it's a pattern often used when you want your write operations, or commands, to be backed by a different infrastructure than your queries. Maybe your reads need to horizontally scale much faster than the services that write, or vice versa. In our specific case, we chose CQRS because for our command service we want to use a MySQL database: the commands are the basic operations, and we want a relational database to store this data, which is very relational in nature. But for searching, we want a Google-like search, which isn't something MySQL is good at.

I can't just type in a word and search across all fields in the database, but I can do that using Elasticsearch. So this is where our query services are backed by an Elasticsearch database. But then the challenge becomes: how do we keep the data in sync between the MySQL database and the Elasticsearch database?

We're going to do it in an eventually consistent manner, but we need the right pattern in place to guarantee that, and that's where the outbox pattern comes in. If we go to the next slide, here's our first, somewhat naive, implementation. We get a REST request; in this example, the REST resource is a widget.

A REST request comes in to create a widget, and we commit that widget to the relational database, MySQL. Then we publish an event to the event broker saying: hey, we've created a new widget. The query service has subscribed to receive that event, so it consumes it after it gets published to the broker and stores it in the Elasticsearch database.

And this works perfectly fine in the positive case. But in the negative case, let's say the commit happened in MySQL, but then our command service crashed for some reason; it ran out of memory, or someone killed the process. If that happened at the exact moment after the commit and before we sent the message to the broker, our two databases are going to be out of sync; there is no way Elasticsearch is ever going to be notified that this new widget was created. So if we go to the next slide: okay, what if we try this? What if we publish the event first, to make sure Elasticsearch is going to have it, and then commit to the database? But, of course, we have exactly the opposite problem.

In this case, the event gets published, the command service crashes, and we never committed to MySQL; but the event did make its way to Elasticsearch, so again MySQL and Elasticsearch are out of sync. So, in the final option, this is where we decided to introduce the outbox pattern, to guarantee the two services stay in sync.

The way we use the outbox pattern is this: we commit the widget to the database, but we also have a table in MySQL that we call the outbox table. Whenever we create, update, or delete something, we create an outbox event in this table as part of the same transaction. Then we have a separate thread running a scheduled task that polls the outbox table to see if there are any new records, and if there are, we publish the event. And, of course, the Elasticsearch half, the query service, does the same thing as before: it consumes the event and stores it in Elasticsearch. You can see that if we were to crash right after committing the event to the database, we would be okay, because when we restart the command service, that scheduled task will still run on its poll interval, find the outbox event, and publish it to the event broker.

If we crashed before we committed to the database, we'd still be in sync, because we wouldn't have an event in the outbox. So by tying the message we need to publish and the commit of the original REST resource to the same database transaction, that's really what solves the problem for us. If we look at the pros and cons of this solution: we've given ourselves guaranteed eventual consistency (we'll talk a bit more about the cons in a moment). And when we write our REST controllers, it's very simple; we don't have to do anything complex like trying to double-write to Elasticsearch and MySQL.
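
A compressed sketch of both halves, assuming a Spring/JdbcTemplate stack; EventPublisher is an assumed wrapper around the broker connection, and the table and column names are illustrative:

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Assumed abstraction over the broker connection; not a real Solace class.
interface EventPublisher {
    void publish(String eventType, String payload);
}

@Service
class WidgetCommandService {
    private final JdbcTemplate jdbc;

    WidgetCommandService(JdbcTemplate jdbc) { this.jdbc = jdbc; }

    // The widget row and the outbox row commit or roll back together,
    // which is the whole point of the pattern.
    @Transactional
    public void createWidget(String id, String name) {
        jdbc.update("INSERT INTO widget (id, name) VALUES (?, ?)", id, name);
        jdbc.update("INSERT INTO outbox (event_type, payload, published) VALUES (?, ?, 0)",
                "WidgetCreated", "{\"id\":\"" + id + "\",\"name\":\"" + name + "\"}");
    }
}

@Component
class OutboxRelay {
    private final JdbcTemplate jdbc;
    private final EventPublisher publisher;

    OutboxRelay(JdbcTemplate jdbc, EventPublisher publisher) {
        this.jdbc = jdbc;
        this.publisher = publisher;
    }

    // Runs only on the leader instance (see the leader election discussion).
    @Scheduled(fixedDelay = 5000)
    public void drainOutbox() {
        jdbc.query("SELECT id, event_type, payload FROM outbox WHERE published = 0 ORDER BY id",
                rs -> {
                    publisher.publish(rs.getString("event_type"), rs.getString("payload"));
                    // Mark as sent only after a successful publish; a crash in
                    // between just means the event is re-sent on restart.
                    jdbc.update("UPDATE outbox SET published = 1 WHERE id = ?", rs.getLong("id"));
                });
    }
}
```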

So it's very elegant for our developers to write REST controllers. And we get all the benefits of an event-driven solution, an event-driven architecture: decoupled services, and services that can be highly available. Now the cons. The events technically aren't real-time in this case, because, and I'm using kind of a bad word for an event-driven architecture here, we're polling the database. So, let's say we're doing that every five seconds. That means there's a five-second window, in the worst-case scenario, before your event gets triggered in this eventually consistent manner. In an ideal situation, maybe we would use a database that gives us an event-driven architecture itself.

So that when we write to the outbox table, an event would be fired back to our microservice, and it could notify the broker in real time that there's a new record. But that's not something MySQL is able to do, so we went with the polling solution for now. Another con is that we don't want multiple instances of the command service sending messages to the query service, because they might send the same message more than once.

So this is where we take advantage of the leader election that Julian talked about, and we use a leader-follower architecture to send these events, because the scale of the writes isn't at the point where we need horizontal scalability. This is a con we're willing to live with at this time.

And then, of course, with any sort of event-driven solution: is it guaranteed that the message is going to be handled by the query service and written to Elasticsearch? It's not truly guaranteed, of course. We can have issues where we just can't connect to Elasticsearch for a long period of time due to networking, or we can have a bug in the code

and we're just unable to handle that message. This is where, in an EDA solution, you want mitigation plans, like dead-message queues or the ability to do message replay. And, of course, you want idempotent message handlers, because you can get a message more than once, which in our solution can definitely happen, since our message handlers are at-least-once.

You want to be able to get that second message and handle it again safely. The way we chose to solve the problem in our solution is a pseudo message replay, where we can re-stream the events from the outbox table back to Elasticsearch, on demand, from a REST request that we wrote for this purpose.
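
A hedged sketch of that on-demand re-stream endpoint, reusing the hypothetical EventPublisher from the outbox sketch above; because the query side's handlers are at-least-once and idempotent, re-sending everything is safe:

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ReplayController {
    private final JdbcTemplate jdbc;
    private final EventPublisher publisher;

    ReplayController(JdbcTemplate jdbc, EventPublisher publisher) {
        this.jdbc = jdbc;
        this.publisher = publisher;
    }

    // Re-publish everything recorded in the outbox so the query side can
    // rebuild its Elasticsearch index.
    @PostMapping("/internal/outbox/replay")
    public void replay() {
        jdbc.query("SELECT event_type, payload FROM outbox ORDER BY id",
                rs -> publisher.publish(rs.getString("event_type"), rs.getString("payload")));
    }
}
```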

So that is how we use the outbox pattern to solve a CQRS implementation between our microservices.

Ali: Very cool. Very cool. Kevin, let me go back to the previous slide and ask you a question. Given that our event broker lets multiple targets subscribe to the same events, is there an option here where you reduce the reliance on MySQL and solely use the broker to solve this problem?

Kevin: Yeah, that's another way we could have built in the guaranteed delivery of the event from MySQL to Elastic. If, on the original REST request, instead of writing to the database immediately, we went to the broker first and published to a queue, then we would receive that message from the queue back in the command service.

We could write to the database, then publish the next message to the query service. If there were a crash, we would never have acknowledged that we received the message off the queue, so when we restart, we'd get the message from the queue again. So we could build the guaranteed delivery in that fashion.

What that would have required, though, is idempotent message handling on the command service side. And this is one of the pros I mentioned: we have simple handling in our REST resources because we don't need idempotency. Doing things in an idempotent manner with MySQL is a little more complicated than with Elasticsearch. And especially because our architecture originally didn't have idempotency built into it, we would have had to rewrite all of our REST resources to support it. So that's why we felt this implementation, using the outbox pattern, was the right solution for us from a more pragmatic point of view.

Ali: Right. As they say, architecture is all about trade-offs, and you've got to pick and choose which one wins in your context. So why don't we go to the next one, the distributed control plane, and James is going to take us through it. James, go ahead.

James: Yeah, thanks, Ali. So, one of the main features of Solace Cloud is the creation and management of PubSub+ event brokers. This operation is performed in many different environments, or what we call regions: in Solace public cloud accounts, in our customers' public cloud accounts, as well as in our customers' private data centres.

This model introduces a whole bunch of interesting problems. These environments are all slightly different, and there are a lot of them, which means performance and scalability are definitely a concern. Security is always top of mind in the public cloud, but it's especially important when running in our customers' accounts, because, given the nature of a lot of these environments, we have limited or no direct access to them.

Here we have an example of an event flow that starts with MCA deployment, then progresses to the creation of a PubSub+ event broker. First, the MCA is deployed in the environment, for example into a customer's OpenShift in their private data centre, and phones home with a heartbeat to indicate its presence.

The event that was sent is attracted to a queue that processes all requests. This is done so that we don't lose requests, but also so that our request-processing microservice can be easily scaled if the volume of requests grows beyond a single instance.

The event is received and processed, and an event is sent to the MCA with a topic structure that includes the MCA's unique ID. Because the MCA is stateless, the request event contains everything it needs to know to perform the operation. The MCA then performs the operation, in this case the creation of a PubSub+ event broker.

If the MCA crashes halfway through, on restart the event is retransmitted to the MCA and the operation is restarted; all MCA operations are idempotent. As the MCA performs the operation, it sends status events back to Solace Cloud so that we can display progress to the customer on the Solace Cloud console. As well, once the creation is complete, we send a response back to Solace Cloud with all the details of what was created, which can be used not only to show details to the customer, but also for future operations. This pattern is used for all MCA operations, including updating the MCA itself when new MCA software is available.

Instead of asking our customers to update it by hand, we actually send a message, an event, to the MCA to do a rolling update of itself, which makes the management of the software much, much simpler. Next slide, please.
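
A minimal sketch of how client acknowledgements give the MCA this "crash, restart, resume" behaviour, again with JCSMP; the queue name, mcaId, and performOperation are illustrative, not the actual implementation:

```java
import com.solacesystems.jcsmp.*;

public class McaRequestConsumer {
    public void listen(JCSMPSession session, String mcaId) throws JCSMPException {
        ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
        flowProps.setEndpoint(JCSMPFactory.onlyInstance()
                .createQueue("mca/" + mcaId + "/requests"));
        // Client acks: a message only leaves the queue once we explicitly ack it.
        flowProps.setAckMode(JCSMPProperties.SUPPORTED_MESSAGE_ACK_CLIENT);

        FlowReceiver flow = session.createFlow(new XMLMessageListener() {
            @Override
            public void onReceive(BytesXMLMessage request) {
                performOperation(request); // must be idempotent: it may re-run after a crash
                request.ackMessage();      // ack only on success; unacked messages are redelivered
            }

            @Override
            public void onException(JCSMPException e) {
                e.printStackTrace(); // log and keep the flow alive
            }
        }, flowProps);
        flow.start();
    }

    private void performOperation(BytesXMLMessage request) {
        // e.g. create or update a PubSub+ event broker deployment
    }
}
```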

So, there are some pros and cons to the solution. It's great because the transmission of requests is ensured by the transactional creation of the database record and the event at the same time, and requests to the MCA are idempotent and fault tolerant. So, if the MCA dies, it can come back up and just continue what it was doing.

Self-upgrade makes it much, much simpler to manage the MCA software, since we don't have to ask the customer to update it. And status messages for each request ensure detailed progress is provided to the user. But there are some issues with the solution. It's fairly complex, and we find that developers take some time to figure it out and to be able to trace requests across the estate. And it isn't so easy to fail fast: because we're fault tolerant, because things can be retransmitted, things happen slowly, and that can be frustrating to the user. But all in all, we think it's a pretty great solution to the problems that we have.

Ali: Great. Thank you. You touched on what happens if the MCA dies, and that is such a critical part of making this operationally successful, especially because we operate these agents in our customers' environments, where they can be somewhat out of our hands. What happens to them if, for example, the customer's IT team runs a maintenance event, which happens all the time as they maintain their Kubernetes environments? But one question I wanted to ask you, coming back to customers' environments.

You know, we have a lot of enterprise customers, and security obviously is very important, and they all want to lock down their firewalls. So, how do we answer that question? How do we have this agent, which you could almost describe as a Trojan horse, in their environment, and still have them comfortable with it? What is it that makes them comfortable?

James: Yeah, two things. One is we keep the footprint of what we need in terms of communication as small as possible. So, we have our Solace PubSub+ event broker that we talk to, as well as our monitoring systems, but also, all communication originates from inside their cluster. We don't need anything to come from the outside into their cluster, and that is much more palatable to our customers.

Ali: Yeah, definitely. Very important point. Thank you. So, why don't we jump to what's next for us and some of the ideas you're exploring, Kevin?

Kevin: Yeah, so, Julian kind of already touched on the idea that not only do we want to offer synchronous APIs for our customers to integrate with Solace Cloud, but maybe we need async APIs too.

So, if we have third-party developers who want to build tooling on top of Solace Cloud, maybe they want to trigger some CI/CD pipeline any time a messaging service is created. To be notified of that programmatically with a synchronous API, they would have to use polling: they would have to build a tool that polls and asks, are there any new services created? If there are, it can trigger the pipeline; if not, it just asks again. How could we solve this problem more elegantly, with a more EDA-like solution? If we go to the next slide, we can see the introduction of event API products, which is something we're building in the Event Portal.

We want to drink our own champagne, so to speak, and provide event API products to our customers. So, instead of giving them a synchronous REST API that they can use to find out whether their messaging service was created, they can have an asynchronous event API: they connect to a PubSub+ event broker, get notified in real time any time a messaging service is created, and have their pipeline act on that event.

So, this is something we're looking to implement as a first-class citizen across Solace Cloud for all of the third-party integrations that we want to have with our customers.

Ali: Thanks, Kevin. And, shameless plug here: if you haven't tried Event Portal, you can go ahead and sign up for a trial. You'll have plenty of days to play around and experiment with event API products.

It's a very cool product for architects and developers and really helps you design and articulate your EDA.

Let's go to Alex next. Walk us through the notification dispatcher.

Alex: So, this use case is specifically about applications that provide webhook integrations. Right now, there are quite a few cloud applications that do that, allowing customers to receive certain notifications through webhooks, which are usually REST-based. For this specific example, let's consider Datadog, which allows you to send alerts as webhook REST requests. In this case, we're talking about how we can use this kind of integration without having to expose our application directly.

Because there is an opportunity for lost or missed requests if the incoming rate is too high, the event broker is going to act as a guaranteed-notification-delivery medium, and as a backup as well. In this case, we are literally going to receive all the webhook notifications as events on the broker, and then, using the capabilities of the broker, store them in a durable queue to ensure guaranteed delivery. The guaranteed delivery is done through a feature of the broker called the REST Delivery Point, which is able to deliver events as REST requests. So, on this slide, we can look at the diagram of the architecture that allows us to handle notification dispatching for the use case I mentioned before.

We can see Datadog, which has alerting monitors and can be configured to use a webhook integration; that webhook integration can be set up to make REST requests to a destination URL that we provide. In this case, that would be the REST client endpoint on the broker. The broker then receives all of these webhook requests and publishes them to the alert topic. An alert queue is subscribed to this alert topic and acts as the durable queue that stores all the events. And then, as part of the broker's features, we have a REST Delivery Point, which is configured to consume from the alert queue and subsequently make a request to the target destination. In this case, that's either a private application or, as an example, some cloud application's public APIs, like Sappi, MailChimp, or Twilio.
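
For a sense of what wiring this up might involve, here is a hedged sketch that provisions the queue, the RDP, and its REST consumer through the broker's SEMP v2 management API; the host, credentials, names, and target URLs are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RdpSetup {
    static final String SEMP = "https://broker.example.com:1943/SEMP/v2/config/msgVpns/default";

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. Durable queue that stores alert events until they are delivered.
        post(http, SEMP + "/queues",
                "{\"queueName\":\"alerts\",\"accessType\":\"exclusive\","
                + "\"ingressEnabled\":true,\"egressEnabled\":true,\"permission\":\"consume\"}");
        post(http, SEMP + "/queues/alerts/subscriptions",
                "{\"subscriptionTopic\":\"alerts/>\"}");

        // 2. REST Delivery Point that consumes from the queue...
        post(http, SEMP + "/restDeliveryPoints",
                "{\"restDeliveryPointName\":\"alert-rdp\",\"enabled\":true}");
        post(http, SEMP + "/restDeliveryPoints/alert-rdp/queueBindings",
                "{\"queueBindingName\":\"alerts\",\"postRequestTarget\":\"/hooks/alerts\"}");

        // 3. ...and POSTs each event to the downstream REST endpoint.
        post(http, SEMP + "/restDeliveryPoints/alert-rdp/restConsumers",
                "{\"restConsumerName\":\"target\",\"remoteHost\":\"hooks.example.com\","
                + "\"remotePort\":443,\"tlsEnabled\":true,\"enabled\":true}");
    }

    static void post(HttpClient http, String url, String json) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic YWRtaW46YWRtaW4=") // admin:admin, placeholder
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        http.send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```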

The benefit of this system is that it ensures events aren't lost, by storing them on a durable queue and making sure everything is delivered before it's removed from the queue. There is also an opportunity to pre-process some of these events before sending them to the destination, and it allows us to manage API restrictions on some of these cloud APIs, or on our own private application if it has limited capacity for processing. The cons would be that the capacity of the broker needs to be calculated to ensure it can handle the volume of incoming webhook requests, and that the queue needs to be sized properly to fit the maximum number of events in the case of a burst. And that is the notification dispatcher pattern.

Ali: That's pretty cool, Alex. I really like the RDP feature of the broker; it really helps you bring this real-time, event-driven world and the RESTful world together and use the best of both, especially if you're dealing with services that don't necessarily expose async APIs. Maybe tell me a little bit about the private application and how this pattern alleviates potential security concerns.

Alex: Good question. So, in this case, you're right: most customers would probably prefer to have their application remain in their private domain, not accessible from the public internet. In this case, the broker can actually help, acting as a gateway to the public.

So, they would only have to expose the REST client endpoint that receives these public requests from the webhook integrations. The broker can be set up in a private region or in a Solace-controlled region, in such a way that the rest of the RDP delivery happens over private networks.

So, this way the customers wouldn't have to expose any of their software to the public; everything would go through the broker and private subnets.

Ali: Got it. That makes a lot of sense. Okay, why don't we wrap up with one thought that I want to leave with you all: EDA is a strategic choice, as you saw throughout many examples today. When you make that choice, it really impacts your architecture and how you solve certain problems, and it helps you have a modern, scalable architecture that works really well with microservices. Thanks, everyone.