How Unify uses Temporal

What is Unify?
Unify is a system of action for generating revenue. We are building a platform that empowers sales, marketing, and growth teams to run repeatable, scalable, and observable plays to generate new business.
To do this, we have to solve some difficult technical problems. How do we stay reliable when third-party systems are not? How do we scale complex, user-defined automations seamlessly and efficiently? How do we deal with spiky load in a multi-tenant system? Our tool of choice for many of these challenges is Temporal.
What is Temporal?
Temporal is an open-source framework for writing and running reliable distributed applications. It provides execution guarantees, durable state, automatic retries, and failure recovery out of the box, letting engineers orchestrate complex systems as simple, fault-tolerant functions.
In our last blog post, we broke down the architecture that allows Unify to achieve high leverage as a lean engineering team. That architecture consists primarily of three types of components within each service: HTTP servers, Apache Pulsar consumers, and Temporal workers.
Temporal is our Swiss army knife that serves a long tail of use cases such as long-running processes initiated in the UI, scheduled jobs, ad hoc tasks, and more.
This blog post will break down our top four Temporal usage patterns along with the technical challenges they have solved and examples we currently have running in production.
Directed acyclic graphs (DAGs)
Unify’s flagship feature, Plays, allows users to automate sales and marketing processes with workflows that leverage real-time signals, AI Agents, and other Unify features to engage with buyers at scale.
Under the hood, a Unify Play is represented as a DAG whose nodes represent actions and whose edges encode transition criteria and dependencies. No two Plays are identical; some users’ Plays comprise dozens of actions with complex dependencies, so the underlying implementation must be fully dynamic.

At any given time, there may be hundreds of thousands of node executions in progress across all of our customers. One of the challenges this poses is failure handling. If errors are left unresolved, Play runs will be stuck—a nightmare to resolve at scale. Temporal's durable execution semantics guarantee the eventual consistency of Play runs even when individual nodes fail.
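To make this concrete, here is a minimal sketch of how a dynamic DAG can be driven from a single Temporal workflow with the TypeScript SDK. The node shape, the `executeNode` activity, and the `runPlay` name are illustrative assumptions, not our production code:

```typescript
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

// Illustrative node shape; real Plays carry richer transition criteria.
interface PlayNode {
  id: string;
  action: string;
  dependsOn: string[];
}

// Hypothetical activity that performs a single node's action.
const { executeNode } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10 minutes',
  retry: { maximumAttempts: 5 },
});

// Walk the DAG in waves: run every node whose dependencies have completed.
// Temporal durably records each activity result, so a crashed worker resumes
// the run exactly where it left off instead of leaving it stuck.
export async function runPlay(nodes: PlayNode[]): Promise<void> {
  const done = new Set<string>();
  while (done.size < nodes.length) {
    const ready = nodes.filter(
      (n) => !done.has(n.id) && n.dependsOn.every((d) => done.has(d)),
    );
    if (ready.length === 0) {
      throw new Error('Play DAG contains a cycle or unsatisfiable dependency');
    }
    // Independent nodes execute in parallel.
    await Promise.all(ready.map((n) => executeNode(n.id, n.action)));
    ready.forEach((n) => done.add(n.id));
  }
}
```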
Plays are the core of Unify, so it is imperative that they run to completion without a hitch. The world is not a perfect place, so when things go awry, we must have the ability to quickly understand and fix failures.
Temporal has a built-in UI that allows us to easily navigate these DAGs to trace execution flows and track down errors. It’s also easy to cross-reference the visualized workflow with our own codebase, because every Temporal workflow and activity in the UI corresponds directly to a TypeScript or Python function.

Temporal automatically distributes the execution workload across our Kubernetes workers efficiently. Furthermore, the task queues mechanism has allowed us to combat noisy neighbor problems by creating dedicated priority queues and balancing customers' Plays across them. Scaling the Temporal infrastructure that powers Plays has been straightforward, allowing us to optimize engineers’ time for shipping new product.
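As an illustration of the routing idea, a sketch using hypothetical queue names and a placeholder tiering check (the real setup balances tenants across more queues based on load):

```typescript
import { Client } from '@temporalio/client';

// Hypothetical tenant tiering; in practice this would be driven by load data.
const HIGH_VOLUME_CUSTOMERS = new Set<string>(['acme-corp']);

// Route each customer's Plays to a dedicated task queue so one noisy tenant
// cannot starve the others. Queue names here are illustrative.
function taskQueueFor(customerId: string): string {
  return HIGH_VOLUME_CUSTOMERS.has(customerId)
    ? 'plays-high-volume'
    : 'plays-default';
}

async function startPlayRun(
  client: Client,
  customerId: string,
  playId: string,
): Promise<void> {
  await client.workflow.start('runPlay', {
    workflowId: `play-run-${playId}`,
    taskQueue: taskQueueFor(customerId),
    args: [playId],
  });
}
```

A worker deployment subscribes to each queue, so capacity for each tier can be scaled independently.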
Durable and efficient task processing
Third-party API downtime, network timeouts, and other failures are inevitable when building external integrations. Temporal makes it easy to build durably in the face of these failures.
One of Unify’s hallmark integrations is a data enrichment waterfall, which abstracts away the process of querying a cascade of data providers to enrich business information for users. This seems simple, but it becomes a challenging problem at scale: enrichment providers often experience significant downtime and enforce strict rate limits, so failures are abundant when hitting the APIs of multiple downstream providers en masse.
This is where the fine-grained control Temporal offers over error-handling behavior shines. It is trivial to configure retry mechanisms with backoff to handle these API failures, and we can tailor the behavior to each specific use case in code.
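As a minimal sketch of what this looks like with Temporal’s TypeScript SDK (the activity name, error type, and retry values are illustrative, not our production configuration):

```typescript
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

// Hypothetical enrichment activity; the retry values are illustrative.
const { enrichContact } = proxyActivities<typeof activities>({
  startToCloseTimeout: '2 minutes',
  retry: {
    initialInterval: '5 seconds',
    backoffCoefficient: 2, // exponential backoff between attempts
    maximumInterval: '5 minutes',
    maximumAttempts: 10,
    // Errors that will never succeed should fail fast instead of retrying.
    nonRetryableErrorTypes: ['InvalidContactError'],
  },
});

export async function enrichContactWorkflow(contactId: string): Promise<string> {
  // Temporal applies the retry policy automatically on provider failures.
  return enrichContact(contactId);
}
```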
When dealing with more aggressive rate limits and higher volumes of requests, we use the signaling mechanism to channel requests through a single workflow that dispatches them at a fixed rate. In this approach, an arbitrary number of producers send requests to a central workflow and then sleep until they receive a response. This allows us to match external rate limits without the need for polling or approximations.
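A simplified sketch of such a dispatcher workflow follows; the request shape, the `dispatchEnrichment` activity, and the interval are illustrative, and the real version also signals each producer back with its result:

```typescript
import {
  continueAsNew,
  defineSignal,
  proxyActivities,
  setHandler,
  sleep,
} from '@temporalio/workflow';
import type * as activities from './activities';

// Hypothetical request payload sent by producer workflows.
interface EnrichmentRequest {
  requestId: string;
  contactId: string;
}

// Hypothetical activity that calls the provider and responds to the producer.
const { dispatchEnrichment } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

export const enqueueRequest = defineSignal<[EnrichmentRequest]>('enqueueRequest');

// Central dispatcher: many producers signal requests in; we release them at a
// fixed rate so the provider's rate limit is never exceeded.
export async function rateLimitedDispatcher(
  pending: EnrichmentRequest[] = [],
): Promise<void> {
  const queue = [...pending];
  setHandler(enqueueRequest, (req) => {
    queue.push(req);
  });

  // Bound the loop and continue-as-new so the workflow history stays small.
  for (let i = 0; i < 1000; i++) {
    const req = queue.shift();
    if (req !== undefined) {
      // Awaiting serially means at most one dispatch per interval.
      await dispatchEnrichment(req);
    }
    await sleep('1 second'); // one request per second, for illustration
  }
  await continueAsNew<typeof rateLimitedDispatcher>(queue);
}
```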

Temporal provides configuration options at both the workflow and activity levels which allow us to more easily navigate the complexities of rate limiting, downtime, and sudden spikes in usage.
Scheduled jobs
Many scheduled tasks run in the background to power Unify: we maintain bidirectional integrations with CRMs, email providers, customer data warehouses, and data providers.
One key goal we set for our architecture is to avoid heavy, monolithic jobs in favor of smaller task-oriented jobs. Smaller jobs that run independently and focus on doing only one thing are easier to retry, balance across compute resources, and debug. While there are many existing tools for running jobs on a schedule, Temporal schedules provide a lightweight framework for scheduling many thousands of workflows with full control over scheduling policies and the ability to pause and unpause on demand.
Many of the jobs that run on a schedule can also be started in response to user actions or events, and for many of them, duplicate concurrent runs must be avoided at all costs. For example, two identical jobs writing data to the same customer’s CRM could accidentally create duplicate records. Temporal workflow IDs form a natural locking mechanism here, preventing more than one workflow with the same identifier from executing simultaneously. In addition, Temporal’s scheduling mechanism provides granular control over exactly what happens when scheduled triggers overlap.
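Both mechanisms take only a few lines with the TypeScript client. The sketch below uses hypothetical names, a made-up cadence, and an illustrative workflow type:

```typescript
import { Client, ScheduleOverlapPolicy } from '@temporalio/client';

// Sketch: a per-customer schedule. Names and cadence are illustrative.
async function createCrmSyncSchedule(client: Client, customerId: string) {
  await client.schedule.create({
    scheduleId: `crm-sync-${customerId}`,
    spec: { intervals: [{ every: '15 minutes' }] },
    action: {
      type: 'startWorkflow',
      workflowType: 'syncCrm', // hypothetical workflow
      args: [customerId],
      taskQueue: 'scheduled-jobs',
    },
    policies: {
      // If the previous run is still going, skip this one entirely.
      overlap: ScheduleOverlapPolicy.SKIP,
    },
  });
}

// Sketch: the same job triggered by a user action. The deterministic workflow
// ID acts as a lock: starting a second run with the same ID while one is
// executing throws WorkflowExecutionAlreadyStartedError.
async function triggerCrmSync(client: Client, customerId: string) {
  await client.workflow.start('syncCrm', {
    workflowId: `crm-sync-${customerId}`,
    taskQueue: 'scheduled-jobs',
    args: [customerId],
  });
}
```

In practice, a caller like `triggerCrmSync` would catch the already-started error and treat it as a no-op, since the desired sync is already underway.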
Temporal schedules underpin many of Unify’s key components, and their simplicity and flexibility have given our team more leverage to ship safely and quickly.
Distributed transactions
Unify’s backend consists of a microservices architecture. While this can provide cleaner interfaces and the benefits of independently scalable systems, it also brings the challenges that come from working with distributed systems. Features often traverse multiple services and require perfect coordination to function correctly. Solutions like queues can be difficult to orchestrate and reason about, and other workflow frameworks are often boilerplate-heavy and tedious to work with.
Temporal provides a way to coordinate distributed components by writing native TypeScript or Python functions. One powerful pattern this unlocks is the saga pattern for distributed transactions. When building multi-step processes that call APIs or execute code across service boundaries, it is difficult to handle failures that require propagating the error back up the "stack".
An example of this is coordinating and sending emails in Unify Sequences. Like Plays, Sequences are dynamic, multi-step automations; they run multi-touch outbound campaigns with AI features and event handling. When a Sequence sends an email, several operations happen in a specific order across several services:
- Begin the step in response to certain conditions being met
- Wait to send the message in accordance with the mailbox’s sending capacity and schedule
- Send the email via the correct email provider integration
If anything goes wrong along the way, we need to gracefully resolve the situation by rolling back or cleaning up any partial progress.

Implementing these fallback strategies can be done directly in code using Temporal's workflow abstractions. Each operation is implemented as a workflow, and Temporal guarantees that inter-workflow communication is never lost. If a workflow fails, we have full control over how the error is handled. Without Temporal, code complexity would increase significantly, and with it the risk of a failure being mishandled.
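A pared-down saga sketch for the email flow above, where the activity names are illustrative placeholders for the real cross-service calls:

```typescript
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

// Hypothetical cross-service activities for the Sequence email flow.
const { reserveSendSlot, releaseSendSlot, sendEmail, markStepFailed } =
  proxyActivities<typeof activities>({ startToCloseTimeout: '1 minute' });

// Saga pattern: after each completed step, register a compensation; if a later
// step fails, run the compensations in reverse order to undo partial progress.
export async function sendSequenceEmail(stepId: string): Promise<void> {
  const compensations: Array<() => Promise<void>> = [];
  try {
    await reserveSendSlot(stepId); // respect mailbox capacity and schedule
    compensations.push(() => releaseSendSlot(stepId));

    await sendEmail(stepId); // via the correct provider integration
  } catch (err) {
    for (const compensate of compensations.reverse()) {
      await compensate();
    }
    await markStepFailed(stepId);
    throw err;
  }
}
```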
Conclusion
Temporal has become a foundational part of our engineering toolkit at Unify. By integrating it deeply into our product architecture, we’ve built systems that are resilient, scalable, and easy to operate as we grow. This robust architectural foundation has enabled our lean engineering team to scale our product and customer base at an exponential pace.
We're always on the lookout for talented engineers to help us scale even more quickly. Reach out if these technical challenges excite you!