Say your tasks call a third-party GPU inferencing API that can run 10 concurrent jobs. It would be helpful to limit the concurrency of calls to this API so you never send more than 10 requests simultaneously.
I envision a “Resource” entity in Trigger.dev. You could declare a resource in the trigger configuration with a specified concurrency, like this:
export const gpuResource = resource({
  name: "gpu-resource",
  concurrency: 10,
});
Next, you might add a resources argument to the task builder to define which resources the task will use.
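As a purely illustrative sketch, a task opting into the resource might look like this (the `resources` option and this exact shape are part of the proposal, not an existing Trigger.dev API):

```
export const inferenceTask = task({
  id: "run-inference",
  resources: [gpuResource], // proposed: declare which resources this task uses
  run: async (payload: { prompt: string }) => {
    // task body, see below
  },
});
```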
Inside the task, you could then do the following:
const result = await wait.forResource(gpuResource, {
  timeout: "10m",
  autoReleaseAfter: "60m",
});

if (result.ok) {
  console.log("Resource acquired"); // gpuResource's available concurrency has decreased by 1

  // Use the resource here (this could be a simple fetch call or a third-party SDK call).
  // Once we receive a result, we release the resource explicitly (or it is released
  // automatically after 60 minutes, as defined by autoReleaseAfter), which increases
  // the available concurrency by 1.
  await wait.releaseResource(gpuResource);
} else {
  console.log("Timeout"); // waited 10 minutes but could not acquire the resource
}

This approach feels like a combination of queues and wait tokens, and it seems feasible to implement.
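To make the acquire/release semantics concrete, here is a minimal in-process sketch of the behavior as a counting semaphore. The `ResourceSemaphore` class and its method names are my own illustration, not a Trigger.dev API, and a real implementation would of course need to coordinate across workers rather than live in one process:

```typescript
// Illustrative only: models the proposed acquire-with-timeout / release
// semantics as an in-process counting semaphore.
class ResourceSemaphore {
  private available: number;
  private waiters: Array<(ok: boolean) => void> = [];

  constructor(concurrency: number) {
    this.available = concurrency;
  }

  // Resolves true when a slot is acquired, false if the timeout elapses first.
  async acquire(timeoutMs: number): Promise<boolean> {
    if (this.available > 0) {
      this.available--;
      return true;
    }
    return new Promise<boolean>((resolve) => {
      const waiter = (ok: boolean) => resolve(ok);
      this.waiters.push(waiter);
      setTimeout(() => {
        const i = this.waiters.indexOf(waiter);
        if (i !== -1) {
          this.waiters.splice(i, 1); // gave up waiting
          resolve(false);
        }
      }, timeoutMs);
    });
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) {
      next(true); // hand the freed slot directly to a waiter
    } else {
      this.available++;
    }
  }
}

async function main() {
  const gpu = new ResourceSemaphore(2);
  const ok1 = await gpu.acquire(100);
  const ok2 = await gpu.acquire(100);
  const ok3 = await gpu.acquire(100); // both slots taken, so this times out
  console.log(ok1, ok2, ok3); // → true true false
  gpu.release();
  const ok4 = await gpu.acquire(100);
  console.log(ok4); // → true
}
main();
```

The missing piece in a self-hosted version of this is the waiting itself: a sleeping task would burn a concurrency slot, which is exactly why pairing it with wait tokens (which suspend the run without consuming concurrency) is appealing.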
In Review
💡 Feature Request
7 months ago

Emiliano Parizzi