AI: Use your own API Key (BYOK)
under review
weskmerrill
Narek Zograbian Honestly that sounds like the best use to me! It also helps with client keys if we're building on behalf of clients. :)
Narek Zograbian
weskmerrill Noted. Thanks for confirming!
Narek Zograbian
I've modified the title to make this a clearer ticket. If we are to support other models and such, it would probably be through BYOK.
Originally, this ticket was only for OpenAI models, but I think we'll end up giving the option to support other models (Claude, Gemini, etc.) using your own API key (BYOK).
Narek Zograbian
Merged in a post:
Own API integration.
gabriel.ross
When selecting the LLM model, it would be amazing to have the option to add our own OpenAI / DeepSeek / Gemini API keys.
That way we could leverage the requests-per-minute speed of our own APIs instead of relying on Taskade's GPT-4o only.
I'm at a point where I need agents to work on 4K, 100K, 700K tasks in projects, and I can't get things moving because of RPM limits, or for whatever reason I was getting errors. I ended up using my own API within Excel Labs to do it. Not ideal... Copilot can't handle the task, nor can web gpt-o1. Only the API is able to do it.
I do believe this would be a great tool and would offload a lot of Taskade's API requests, for those seeking API-tier RPM speed. Please consider this.
I'm open to a conversation/call if needed. I'm starting to use Taskade as the core of my AI operations, along with N8N and Power Automate.
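To make the rate-limit point concrete, here's a rough sketch of what a direct call against OpenAI's Chat Completions endpoint with your own key looks like. This is just an illustration, not Taskade's implementation; the model name and prompt are placeholders:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a Chat Completions request authenticated with the caller's own key."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            # The BYOK part: your key, so requests count against YOUR RPM tier.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    # Requires a real key in OPENAI_API_KEY; the network call happens only here.
    req = build_chat_request(os.environ["OPENAI_API_KEY"], "gpt-4o", "Summarize these tasks")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the Authorization header carries your own key, the rate limits that apply are those of your own API tier, which is exactly the appeal being described above.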
jack-s
jack-s
It would be nice to have the option to use other AI APIs as well.
For example, Google AI:
ryantaskade
Merged in a post:
Bring Our Own API Keys
jbellsolutions
Would love to be able to use our own models. Overall, the agents inside of Taskade are way better than the market average, coming from someone who plays with every platform and tool that comes out. With that being said, some models are better than others at different tasks. On top of that, open-source models like Llama 3.1 and Mistral can really help ramp up production without exponentially rising costs. Using models like GPT-4o mini can really help with lowering costs as well. Personally, I use 3.5 Sonnet for most of my tasks and would love to bring that over here too.
This would also really help boost growth for Taskade, because you'd attract way more people who don't want the exponentially rising cost of paying a premium for LLM tokens. It would increase usage by a whole lot as well. For example, I have to find ways to send webhooks to Make.com so I can use open-source models to do most of the workflows. It would be so much easier and better if I could just keep that in Taskade, but it would be too expensive having to buy new licenses to get more AI usage; I'd be spending $200 to $300 a month on stuff that would only cost me $20 or $30.
Let me know if we can make this happen.
jack-s
With a Chrome extension, you could use a local LLM (e.g. via Ollama).
Inspiration:
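For anyone curious what the local route looks like: Ollama exposes a plain HTTP API on localhost, so a sketch like the following keeps everything on your own machine (the model name is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_local_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a locally running Ollama server."""
    body = json.dumps({
        "model": model,   # e.g. "llama3.1", pulled beforehand with `ollama pull llama3.1`
        "prompt": prompt,
        "stream": False,  # ask for one JSON response instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Data never leaves the machine: this hits localhost only.
    with urllib.request.urlopen(build_local_request("llama3.1", "Hello!")) as resp:
        print(json.loads(resp.read())["response"])
```

Since the request only ever goes to localhost, nothing is sent to a third-party provider, which addresses the data-privacy point raised below.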
jack-s
🤔
This could be important for those who want to process data locally, so that the AI is not trained on the data and it does not leave the company.
Narek Zograbian
jack-s: We don't train on your data regardless.
ryantaskade
Merged in a post:
Can we use different LLMs?
weskmerrill
Is there any way we can use different models for our agents or agent teams? As we are all aware, some LLMs are better at some tasks than others. As an example, I would REALLY like to be able to use Claude for my writing tasks! Or even a pricing change on your end where we can get charged for token usage instead of a monthly fee?
🤷
Regardless of those comments, I would really like to create hyper-specialized agents using a specific model in the future. As models become more specialized for different activities, it would make sense to allow the user to select the appropriate model.
Thank you SO much for building an incredible product, and I'm a super happy user! Just some feedback for the future.