Planned
Vercel integration
This could do two things: sync environment variables from Vercel to Trigger.dev, and create a preview deployment on Trigger.dev for each preview deployment on Vercel. We need to figure out the details of how preview deployments would work with API keys, environment variables, and filtering.
Linear 9 months ago
In Progress
Europe workers
I want to be clear about what this feature is and isn't. It would mean having worker machines in a datacenter in the EU, which would allow you to access APIs that are geo-IP restricted. It means the compute would happen in the EU. It does NOT mean that the data for those runs would reside in the EU: the operational and log data for all of Trigger.dev cloud is located in US-EAST-1. To have European data we would need multiple databases and log stores. This isn't trivial; most companies that do this ask you to choose between the EU and the US when you create your account. If you do want EU data residency, please create a separate feature request.
mattaitken 9 months ago
Self-hosted workers
You can fully self-host Trigger.dev. This is a really good option in lots of situations if you're experienced with setting up and managing infrastructure. Another option would be for us to offer self-hosted workers: you would host them inside your own cloud account (or on-prem) and they would connect to the Trigger.dev cloud.
mattaitken 8 months ago
Planned
Rate Limiting / throttling
We currently support setting a concurrencyLimit on a task, or on a queue that you create. This can also be used with a concurrencyKey to have per-user queues: https://trigger.dev/docs/queue-concurrency Sometimes, instead of limiting concurrency, you want to limit by rate, e.g. 10/minute or 1/second. This is especially useful when using APIs that have rate limits. We should support this with a per-tenant key, like we do for concurrency.
An Anonymous User 9 months ago
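The per-tenant rate limit described above doesn't exist in the SDK yet, but its semantics can be sketched as a token bucket keyed by tenant. Everything below (the class, its options, the injectable clock) is a hypothetical illustration of the model, not Trigger.dev API:

```typescript
// Hypothetical sketch of per-tenant rate limiting semantics (e.g. "10/minute").
// None of this is Trigger.dev API; it only illustrates the token-bucket model.
type BucketState = { tokens: number; lastRefill: number };

class TenantRateLimiter {
  private buckets = new Map<string, BucketState>();

  constructor(
    private limit: number, // tokens per window, e.g. 10
    private windowMs: number, // window length, e.g. 60_000 for "per minute"
    private now: () => number = Date.now // injectable clock, for testing
  ) {}

  // Returns true if this tenant may proceed now, false if it is rate limited.
  tryAcquire(tenantKey: string): boolean {
    const t = this.now();
    const bucket =
      this.buckets.get(tenantKey) ?? { tokens: this.limit, lastRefill: t };
    // Refill continuously at limit/windowMs tokens per millisecond, capped.
    const refill = ((t - bucket.lastRefill) / this.windowMs) * this.limit;
    bucket.tokens = Math.min(this.limit, bucket.tokens + refill);
    bucket.lastRefill = t;
    const allowed = bucket.tokens >= 1;
    if (allowed) bucket.tokens -= 1;
    this.buckets.set(tenantKey, bucket);
    return allowed;
  }
}
```

Each tenant key (the analogue of a concurrencyKey) gets its own bucket, so one tenant exhausting its 10/minute doesn't block the others.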
Planned
Incoming webhook trigger
In the dashboard, each task would have a URL for each of your environments. You could then use this URL in 3rd party products. Any data in the body of a request to that URL would come through to the task as the payload.
mattaitken 8 months ago
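As a rough sketch of the proposal, a caller would only need to POST JSON to the per-task URL. The URL itself and the helper below are invented for illustration; the only described behavior is that the request body becomes the run's payload:

```typescript
// Hypothetical sketch of calling a task's incoming-webhook URL. The URL shape
// and this helper are invented; the only behavior described in the request is
// that the POST body becomes the task's payload.
type WebhookRequest = {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
};

// Build the request a 3rd party product (or your own code) would send.
function buildWebhookRequest(webhookUrl: string, payload: unknown): WebhookRequest {
  return {
    url: webhookUrl,
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(payload), // this body would arrive as the run's payload
  };
}

// Usage (not executed here):
//   const req = buildWebhookRequest(urlFromDashboard, { userId: "123" });
//   await fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
```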
Static IP addresses
To access some databases and other services, you need to provide them with an IP whitelist. For this to work well we need static IPs, potentially a per-customer IP address.
Linear 12 months ago
Event triggers
Please reintroduce the events feature in V3, similar to what was available in V2: trigger with an event name and payload; a task can subscribe to a named event with a filter (it will only create a run if the filter matches); a single event can trigger many tasks if it matches more than one subscription.
An Anonymous User 9 months ago
In Progress
Make cold starts faster
When you trigger a prod or staging run in the Trigger.dev cloud, it takes 3 seconds on average for the machine to start up and your code to start executing. The same slowness happens when a run resumes, like after using wait.for, if the delay is above the threshold at which we shut the machine down. Starts take this long because we're using Kubernetes for our cluster and a new pod takes a while to come up. We're switching to MicroVMs for the cloud machines. Our target is to get the p95 for starts and resumes to under 500ms.
Linear 11 months ago
Planned
More environments
The ability to create additional environments that you name, e.g. “Sandbox”. They’d get assigned API keys, you could filter by them, etc. You could clone them from another environment, and delete them.
mattaitken 9 months ago
E2E and unit testing utilities
It would be cool to have unit testing (a way to unit test specific tasks and mock the other tasks used within the tested task) and e2e testing (a way to spin up an instance locally for e2e tests, programmatically for Jest/Bun testing, and tear it down after the tests have run).
An Anonymous User 9 months ago
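There's no official testing utility yet, but one pattern that works today is dependency injection: keep the task's logic in a plain function that receives its sub-task trigger as a parameter, so a unit test can swap in a mock. The names below (`notifyUsers`, the email sub-task) are invented for illustration; nothing here is Trigger.dev testing API:

```typescript
// Hypothetical sketch: make a task's sub-task calls injectable so unit tests
// can mock them. `triggerSendEmail` stands in for a real sub-task's trigger;
// nothing here is official Trigger.dev testing API.
type TriggerFn = (payload: { to: string }) => Promise<{ id: string }>;

// The "business logic" of a task, with the sub-task trigger injected.
async function notifyUsers(
  userEmails: string[],
  triggerSendEmail: TriggerFn
): Promise<string[]> {
  const runIds: string[] = [];
  for (const to of userEmails) {
    const handle = await triggerSendEmail({ to });
    runIds.push(handle.id);
  }
  return runIds;
}

// In production you would pass the real sub-task's trigger; in a unit test you
// pass a mock that records calls instead of creating runs.
function makeMockTrigger() {
  const calls: { to: string }[] = [];
  const fn: TriggerFn = async (payload) => {
    calls.push(payload);
    return { id: `mock-run-${calls.length}` };
  };
  return { fn, calls };
}
```

The same shape works with Jest/Vitest/Bun mock functions; the point is only that the sub-task dependency is a parameter rather than a hard-coded import.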
Planned
Support access to internal services and databases
Currently, if your database or some of your services are not accessible from the public internet, you can't access them inside your Trigger.dev tasks (unless you self-host the entire platform). There are many ways we could solve this. One obvious solution is for us to provide a Bridge Connector that you run inside your cluster, which would only allow access from your Trigger.dev tasks. We use a tool like this to access our restricted database from outside of AWS. It would be very useful if you could share the kinds of private resources you need to use in Trigger.dev and what solutions you would be happy with!
mattaitken 9 months ago
Support log drains
Support for logs to be sent from Trigger.dev to one or more other destinations. We would probably start with an HTTP POST request with a JSON body, similar to how Supabase does this.
mattaitken 7 months ago
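The wire format hasn't been decided, so the sketch below just shows what a Supabase-style JSON POST delivery might look like. The payload shape, field names, and batching are all assumptions, not a committed Trigger.dev format:

```typescript
// Hypothetical sketch of a log-drain delivery. The payload shape and field
// names are assumptions -- nothing here is a committed Trigger.dev format.
type DrainLogEvent = {
  timestamp: string; // ISO 8601
  level: "debug" | "info" | "warn" | "error";
  message: string;
  runId?: string;
  taskIdentifier?: string;
};

// Batch events into the JSON body a drain endpoint would receive.
function buildDrainBody(events: DrainLogEvent[]): string {
  return JSON.stringify({ source: "trigger.dev", events });
}

// Delivery would then be a plain POST (shown here, but not executed):
async function postToDrain(url: string, events: DrainLogEvent[]): Promise<void> {
  await fetch(url, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: buildDrainBody(events),
  });
}
```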
CPU and Memory graphs on the run page
It's currently difficult to diagnose performance issues. It would be useful to have graphs on the run page; this would also make it possible to find memory leaks. Currently you have to use Node.js functions to log out the memory yourself: const memory = process.memoryUsage(); logger.log("Memory usage", memory);
mattaitken 6 months ago
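Until graphs exist, the workaround above can be extended to periodic sampling so memory growth shows up over a run's lifetime. This is a plain Node.js sketch (with console.log standing in for Trigger.dev's logger); the helper names are invented:

```typescript
// Plain Node.js sketch of the workaround described above: periodically sample
// process.memoryUsage() so memory growth is visible in a run's logs. In a real
// task you would call Trigger.dev's logger instead of console.log.
function snapshotMemoryMb(): Record<string, number> {
  const usage = process.memoryUsage();
  const toMb = (bytes: number) => Math.round((bytes / 1024 / 1024) * 100) / 100;
  return {
    rss: toMb(usage.rss),
    heapTotal: toMb(usage.heapTotal),
    heapUsed: toMb(usage.heapUsed),
    external: toMb(usage.external),
  };
}

// Start sampling every `intervalMs`; returns a stop function, intended to be
// called in a try/finally wrapped around the task body.
function startMemorySampling(intervalMs = 5_000): () => void {
  const timer = setInterval(() => {
    console.log("Memory usage (MB)", snapshotMemoryMb());
  }, intervalMs);
  return () => clearInterval(timer);
}
```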
Run on complete/failed webhook
Currently it is already possible to set up a webhook in the runner itself, but that leaves room for error at the implementation level. As part of this feature request, I am looking for a webhook manager, where a URL is called whenever a task completes or fails, similar to how Stripe or Clerk work (screenshot attached). In case of an error, it would be easily trackable in the dashboard.
Rostislav Dascal 3 months ago
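Stripe-style delivery implies each payload is signed with a shared secret so the receiver can verify it before trusting the event. The header name, secret format, and scheme below are assumptions modeled on Stripe's approach, not announced Trigger.dev behavior:

```typescript
// Hypothetical sketch of verifying a signed "run completed/failed" webhook,
// modeled on Stripe-style HMAC signatures. The scheme and secret format are
// assumptions, not an announced Trigger.dev format.
import { createHmac, timingSafeEqual } from "node:crypto";

function sign(rawBody: string, secret: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

// Returns true only if the signature matches this exact body and secret.
// Uses a constant-time comparison to avoid leaking timing information.
function verifySignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = Buffer.from(sign(rawBody, secret), "hex");
  const received = Buffer.from(signature, "hex");
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

A receiver would read the raw request body, verify before parsing, and respond 2xx so the sender's dashboard can mark the delivery as successful.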
Role Based Access Controls (user permissions)
At the moment all users in your team are admins, which means they can all view the API keys, deploy, add/remove team members, and create/edit/view/delete environment variables. We need to add a permissions system so this can be configured by admins.
mattaitken 5 months ago
Allow Reschedule Run to update payload
It would be awesome if we could also update the payload when rescheduling a run (runs with "DELAYED" status), by doing something like: const handle = await runs.reschedule("run_1234", { delay: new Date("2024-06-29T20:45:56.340Z"), payload: { foo: "bar" } }); API endpoint: https://trigger.dev/docs/management/runs/reschedule
JA Castro 5 months ago