Modern software development has embraced distributed systems. Microservices, serverless functions, and event-driven architectures allow teams to build, deploy, and scale application components independently. But this distributed world has an often-overlooked challenge: where do you run the complex, asynchronous, or long-running tasks that don’t neatly fit into a single request-response cycle?
Too often, the answer is messy. We overload existing microservices with background jobs, turning them into monoliths in disguise. We spin up fragile cron servers that become a single point of failure. We wrestle with complex message queues and worker fleets, spending more time on plumbing than on business logic.
This is a sign of a missing architectural layer. Your stack has a web layer, a service layer, and a data layer. It’s time to introduce the processing layer—a dedicated service for executing complex data and workflow logic on-demand.
When you try to shoehorn heavy processing into your existing services, you introduce architectural "cracks" that compromise stability and scalability.
Imagine a dedicated engine in your cloud infrastructure, built specifically to handle these challenges. A service that allows you to transform complex data pipelines and business workflows into simple, scalable API calls.
This is the principle behind processing.services.do, a platform designed to be the intelligent processing backbone of your architecture. Instead of building your own distributed job queue and worker system, you define your logic and offload the execution.
It’s an architectural pattern shift:
Instead of asking: "Which of my microservices should run this job?"
You ask: "What logic do I need to execute?" and hand it to a specialized service.
The power of a dedicated processing service lies in its simplicity and abstraction. With processing.services.do, you don't manage servers, queues, or auto-scaling groups. You focus on two things: your business logic and the API call to trigger it.
1. Define Your Logic as Containerized Agents:
You package your custom logic—a data transformation script, a machine learning model, a sequence of API calls—into a container. This is your "agent." It’s a self-contained, reusable unit of work. If you can code it and put it in a container, the platform can process it.
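For a concrete (if simplified) picture, here is a minimal sketch of what an agent's entrypoint might look like in TypeScript. The payload contract used here, JSON on stdin and a JSON result on stdout, is an assumption for illustration; the actual agent interface is defined by the platform.

// enrich-user-profile agent: a minimal sketch, not the platform's actual contract.
// Assumes the payload arrives as JSON on stdin and the result is written to stdout.
import { stdin, stdout } from 'node:process';

async function readPayload(): Promise<{ userId: string; sources: string[] }> {
  let data = '';
  for await (const chunk of stdin) data += chunk;
  return JSON.parse(data);
}

async function main() {
  const { userId, sources } = await readPayload();
  // Hypothetical enrichment: call each configured source and merge the results.
  const profile = { userId, enrichedFrom: sources, enrichedAt: new Date().toISOString() };
  stdout.write(JSON.stringify(profile));
}

main();

Package this script and its dependencies into a container image; that image is the reusable "agent" a workflow name refers to.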
2. Execute Anything with a Simple API Call:
Once your agentic workflow is defined, triggering it is a single, asynchronous API call. You provide the name of the workflow and the specific data (payload) it needs to operate on.
Here’s how easy it is to launch a data enrichment job that pulls information from multiple sources:
import { Do } from '@do-sdk/core';

const processing = new Do('processing.services.do', {
  apiKey: 'your-api-key',
});

// Define and run a data enrichment workflow
const job = await processing.run({
  workflow: 'enrich-user-profile',
  payload: {
    userId: 'usr_12345',
    sources: ['clearbit', 'linkedin', 'internal_db'],
  },
  config: {
    priority: 'high',
    onComplete: 'https://myservice.com/webhook/job-done',
  },
});

console.log(`Job started with ID: ${job.id}`);
3. The Platform Handles the Rest:
Behind this simple API call, the platform orchestrates the entire execution: it invokes your containerized agent with the payload, manages execution state and error handling, scales compute to match the workload, and notifies your webhook when the job completes.
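From the caller's side, a common pattern is to fire the job and check on it later. The status lookup below is hypothetical, the SDK's real method names may differ, but it illustrates the asynchronous model:

// Later, check on the job from anywhere in your system.
// NOTE: processing.status() is a hypothetical method used for illustration;
// consult the SDK for the actual status-lookup call.
const status = await processing.status(job.id);

if (status.state === 'completed') {
  console.log('Result:', status.result);
} else {
  console.log(`Job ${job.id} is still ${status.state}`);
}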
By externalizing complex jobs to processing.services.do, you fix the cracks in your architecture.
Stop letting background jobs and data workflows complicate your architecture. It's time to recognize processing as a first-class concern. A dedicated processing layer is the missing piece that enables your services to be truly micro, letting you build more resilient, scalable, and maintainable systems.
What kind of processing can I perform with processing.services.do?
You can run virtually any custom logic. Common use cases include data transformation (ETL), batch processing, image/video rendering, financial calculations, and orchestrating sequences of microservice calls. If you can code it, we can process it.
How do I define my processing logic?
You define your business logic as containerized agents. processing.services.do acts as the orchestrator, invoking your agents with the provided payload and managing the execution state, scalability, and error handling for you.
Is the processing service scalable?
Yes. Our platform is engineered for high-throughput, parallel processing. It automatically scales compute resources based on your workload, ensuring your jobs are completed efficiently, whether you're running one task or a million.
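Because each run() call starts an independent job, fanning out a large batch is ordinary client code. The sketch below reuses the processing client and workflow from the example above and assumes the config block is optional:

// Fan out one job per user; the platform parallelizes the actual execution.
const userIds = ['usr_001', 'usr_002', 'usr_003'];

const jobs = await Promise.all(
  userIds.map((userId) =>
    processing.run({
      workflow: 'enrich-user-profile',
      payload: { userId, sources: ['clearbit', 'internal_db'] },
    }),
  ),
);

console.log(`Started ${jobs.length} jobs`);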
Can I run long-running tasks?
Absolutely. The platform is designed for both synchronous (quick) and asynchronous (long-running) jobs. For long jobs, you can provide a webhook URL to be notified upon completion, allowing you to build robust, event-driven systems.
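A completion webhook receiver can be as small as the sketch below. The shape of the callback body (jobId, status, result) is an assumption for illustration, not a documented contract:

// Minimal HTTP endpoint to receive job-completion callbacks (Node.js).
import { createServer } from 'node:http';

const server = createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/webhook/job-done') {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      // Assumed payload shape: { jobId, status, result }
      const event = JSON.parse(body);
      console.log(`Job ${event.jobId} finished with status ${event.status}`);
      res.writeHead(200);
      res.end('ok');
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000);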