Choosing LLM models for AI tasks
planned
caitlinrway
Would love to see an update on the timeline for this when possible! Really looking forward to it.
Narek Zograbian
caitlinrway: Would you be okay with this option if it was BYOK?
caitlinrway
Narek Zograbian Yes I think so!
Narek Zograbian
caitlinrway Noted. Thanks for confirming!
azedgetech
Great request. Let's not forget Grok.
rossstokes
azedgetech agree Grok is important to add
Narek Zograbian
planned
brucerfleck
If we are able to BYOK, I would like to suggest the following priority of models:
GPT 4o
Sonnet 3.5
Perplexity
Gemini 1.5
Open source (Llama, Mistral)
I rank Perplexity at 3 because there is no better engine for gathering research with sources.
If Taskade is able to work with multiple models internally and push an output via webhook or API, we will not need to use tools like Zapier or Make. This would make Taskade even more valuable!
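For illustration, the webhook handoff described above might be sketched as below. The endpoint URL and payload shape are hypothetical assumptions for the sketch; Taskade's actual API may differ:

```python
import json
import urllib.request

# Hypothetical webhook endpoint -- not a real Taskade URL.
WEBHOOK_URL = "https://example.com/hooks/agent-output"

def build_webhook_request(model: str, output: str) -> urllib.request.Request:
    """Package a model's output as a JSON POST request for a webhook."""
    payload = json.dumps({"model": model, "output": output}).encode("utf-8")
    return urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_webhook_request("gpt-4o", "Research summary with sources...")
print(req.get_method())  # POST
# urllib.request.urlopen(req)  # would actually deliver the payload
```

With something like this built in, each agent's output could be pushed straight to a downstream tool without routing through Zapier or Make.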
Narek Zograbian
brucerfleck We're looking into this!
artewebnet
brucerfleck If you'd consider a free alternative to Perplexity, try tavily.com. It's not as fancy or pretty looking, but it gets the research job done efficiently. I'm pretty sure you'll have a new item on the list.
rossstokes
brucerfleck Grok, as it has the broadest and most current perspective on what's available to draw from.
Heidi Briones
API keys would be the best way, I think
Narek Zograbian
under review
ryantaskade
Merged in a post:
choose your llm (gpt4o, sonnet 3.5, opus, and others)
floriane.c
like in Merlin or Perplexity
michaelbrooks
I love this idea of adding certain models (Claude, for example). I understand a BYOK situation might not work. I'd prefer less complexity and for Taskade to deliver more impactful features.
I could see a certain LLM being selected within the Agent settings (or per Workspace) to simplify things at first. I would also prefer quality over quantity for LLM integrations (Gemini, Claude, etc. prioritized).
Narek Zograbian
michaelbrooks: We'd definitely be supporting the models that have at least some level of feature parity with GPT-4o, so that definitely includes Claude and Gemini. The main thing is seeing how we'd need to adjust based on the selected model.
Narek Zograbian
The main issue with this is complexity, which risks making the product too technical.
The average user may not know about the nuanced differences between Claude Opus, GPT-4o, and Gemini 1.5 Pro. Providing various options increases the tool's complexity and adds additional steps.
I'm not against this feature at all. We're not married to any specific model, and we do have the option to switch to another one if need be. I only wanted to give some transparency and context from our side! :)
bersus
Narek Zograbian, I don't know your target audience. But without letting users build their own setups, you will definitely miss the tech-savvy part of it. The models perform differently for different types of tasks. For example, ChatGPT is good at reasoning, Claude is great at writing, and Gemini excels at working with large amounts of data. If you aim to create a versatile tool, it might be reasonable to let users switch LLMs.
Also, if I use my own API keys, I'm okay with paying monthly for the tool as a UI for my workflow, and I definitely wouldn't pay for "points," "tokens," etc. On the other hand, if I don't have API keys, it could be reasonable to pay per message (like Poe offers).
Narek Zograbian
bersus We're actually looking into how we can support this. The main aspect is communicating it to the everyday user in an understandable manner.
We also don't support BYOK. We work off requests instead. So, every request you send to an Agent or AI-powered feature counts as one request. We handle the billing, keys, and everything else on the back-end.
brucerfleck
Narek Zograbian, I would really like to be able to choose Sonnet 3.5 or Gemini 1.5 in addition to GPT-4o. As bersus has said, the models each have their strengths. This would be especially helpful when building multi-agent processes, where we could choose, for each agent, the LLM that aligns with its strengths.
I too would be happy to provide my own keys. However, I understand the complexity this may cause on the backend.
samantha007
It would be great to be able to use other models within Taskade, even if it were BYOK via our own API usage. Some models are better at different tasks/content, so this would be a great addition to Taskade, in conjunction with the Agents too.