Back in October 2022, I wrote a short blog post explaining how I automated our Datadog Marketplace sales cycle using a few AWS services and my first-ever Golang program. That basic event-driven system saved our sales team several hours a week by replacing a manual process with something far more efficient.
Even though the original setup worked well and ran reliably for a couple of years, it still required ongoing maintenance—like upgrading Go versions, fixing minor issues from those upgrades, and updating the HubSpot SDK I built when their APIs changed. It wasn’t broken, but it was becoming a bit of a time sink. With Datadog Workflows becoming more robust and more widely available, I figured it was time for a refresh. Why not see what it could do?
The original flow followed a pretty typical event-driven architecture pattern: event producers, a router, and a consumer.
While this system did its job, it was fairly code-heavy and required occasional tweaks, especially around regex parsing and SDK maintenance.
To recreate the same logic in Datadog, I first needed a way to ingest emails. Luckily, Datadog has a built-in (and often overlooked) feature called the Email Events API. This lets you generate a unique email address that pipes incoming messages directly into your Datadog instance as events. Between that and the customer triggering a trial via the Marketplace, I had my event producers.
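To illustrate the mechanism (the field values here are hypothetical, not our actual trial emails): an email sent to the generated address shows up in Datadog as an event, with the subject as the title and the body as the text, roughly like:

```json
{
  "title": "New Marketplace trial: Acme Corp",
  "text": "Customer: jane@acme.example\nCompany: Acme Corp\nPlan: Pro",
  "source_type_name": "email"
}
```

From there, the event is just another piece of data in Datadog, ready to be parsed and routed.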
Next came parsing those events. This was a huge improvement over the AWS version. Instead of hand-rolling the regex for every single attribute, I used Datadog’s Grok Parser and built-in pipeline processors to normalize the events. It made things more maintainable and much easier to evolve. A few additional processors helped clean up and standardize some attributes for downstream use.
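As a sketch of what this looks like (the email format and attribute names here are illustrative, not our real ones), a single Grok rule can replace what used to be several hand-written regexes:

```
# Hypothetical rule: pull customer attributes out of a trial email body.
# Attribute names (customer_email, company_name, plan) are placeholders.
trial_rule Customer: %{notSpace:customer_email} Company: %{data:company_name} Plan: %{word:plan}
```

Because the matchers (`notSpace`, `data`, `word`) are declarative, adding or renaming an attribute means editing one rule in the UI rather than redeploying code.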
You could argue that this event pipeline acts as both a consumer (processing the raw event) and a producer (outputting a normalized version of the event for downstream systems). Either way, it simplified a lot of the logic I previously had to write in code.
At the time of writing, Datadog Workflows can’t be triggered directly from an ingested event. To work around this, I set up a monitor that watches for events from a specific service whose @title matches a pattern, and ignores trials that originate from Datadog Partner instances.
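A hedged sketch of such an event monitor query (the service name, title pattern, and partner attribute are placeholders, not our production values):

```
events("service:marketplace-email @title:(Marketplace AND trial*) -@partner:true").rollup("count").last("5m") > 0
```

Any event matching the query trips the monitor, which in turn triggers the Workflow.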
This monitor acts as the router, forwarding qualified events to a Datadog Workflow.
Just like the old Lambda function, the Workflow is now the consumer. It takes the normalized event attributes and performs a sequence of steps to create contacts, deals, products, and line items in HubSpot. If the process succeeds, it sends a message to Slack with a link to the new deal in HubSpot. If anything fails, it also notifies the Slack channel with error details.
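The Workflow steps themselves are configured in the UI, but the payloads they send map onto HubSpot’s CRM v3 create endpoints (`POST /crm/v3/objects/contacts`, `POST /crm/v3/objects/deals`, and so on). In the spirit of the original Lambda, here is a minimal Go sketch of that payload shape; the property values and helper names are illustrative assumptions, not our exact configuration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hubspotCreate mirrors the body of a HubSpot CRM v3 "create object"
// request: a single "properties" map of string fields.
type hubspotCreate struct {
	Properties map[string]string `json:"properties"`
}

// buildContact maps normalized event attributes to a contact payload.
// Property names ("email", "company") follow HubSpot's default contact
// properties; any custom properties would be added here.
func buildContact(email, company string) hubspotCreate {
	return hubspotCreate{Properties: map[string]string{
		"email":   email,
		"company": company,
	}}
}

// buildDeal maps the trial to a deal payload. The stage value is a
// placeholder; real pipelines use their own stage identifiers.
func buildDeal(dealName, stage string) hubspotCreate {
	return hubspotCreate{Properties: map[string]string{
		"dealname":  dealName,
		"dealstage": stage,
	}}
}

func main() {
	contact, _ := json.Marshal(buildContact("jane@acme.example", "Acme"))
	deal, _ := json.Marshal(buildDeal("Acme - Marketplace Trial", "appointmentscheduled"))
	fmt.Println(string(contact))
	fmt.Println(string(deal))
}
```

In the Workflow, each of these payloads becomes one step, with the Slack notification (success or failure) as the final branch.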
What started as an experiment turned into a functional and surprisingly low-maintenance solution. By using Datadog Workflows, I gained a simpler parsing mechanism and a flow that’s easier for our sales ops team to understand and update themselves. Since it’s a low-code solution, they can tweak it without touching infrastructure or digging into code.
Of course, there are trade-offs. Datadog Workflows aren’t meant to replace full-fledged serverless architectures, especially when it comes to high traffic or strict latency requirements. But for a system like ours, with moderate volume and no SLA constraints, it works just fine and was fun to build.
Curious how Datadog Workflows can replace code-heavy processes with low-maintenance automation? Contact us to help you modernize your internal tooling and streamline operations.