Chunk long documents/contracts with AI agents workflow
planned
imoprotect
Dear Taskade Team,
I would like to propose an integration to manage and analyze long contracts. Here’s a brief outline of the desired functionality:
Objective:
Enable multiple AI agents to chunk long contracts into manageable sequences, process each chunk with AI-generated prompts, and compile the results into a cohesive final document.
Details:
Chunking Mechanism: The user chooses how to divide a contract into segments/paragraphs.
AI Prompts Application: For each segment, choose an agent and apply AI-generated prompts that are relevant to the specific content of that section (e.g., terms of payment, obligations, rights, etc.).
Processing and Analysis: Utilize the AI to process each chunk to generate summaries, clarifications, or specific contractual insights.
Result Compilation: Aggregate the outputs from each processed chunk into a comprehensive final document that captures the entirety of the doc/contract’s nuances and details.
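The chunk → prompt → compile flow described above can be sketched in plain Python. This is a hypothetical illustration only: `run_agent` is a stub standing in for a real Taskade agent call, and the paragraph-based chunker is one possible splitting strategy, not an existing API.

```python
# Hypothetical sketch of the proposed chunk -> prompt -> compile pipeline.
# `run_agent` is a stub; a real implementation would invoke an AI agent.

def chunk_by_paragraphs(text, max_paragraphs=5):
    """Split a document into chunks of at most max_paragraphs paragraphs."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return [
        "\n\n".join(paragraphs[i:i + max_paragraphs])
        for i in range(0, len(paragraphs), max_paragraphs)
    ]

def run_agent(prompt, chunk):
    # Stub: a real implementation would call an AI agent here with a
    # section-specific prompt (terms of payment, obligations, etc.).
    return f"[{prompt}] summary of: {chunk[:40]}"

def analyze_contract(text, prompts):
    """Apply one prompt per chunk, then compile results into one document."""
    chunks = chunk_by_paragraphs(text)
    results = [run_agent(prompt, chunk) for chunk, prompt in zip(chunks, prompts)]
    return "\n\n".join(results)
```

The compile step here is a simple join; a real version would preserve section headings so the final document mirrors the contract's structure.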
Expected Outcome:
This feature would streamline the review and analysis of lengthy contracts, reduce manual effort, and enhance accuracy in understanding and applying contract terms.
I believe this addition would significantly benefit users dealing with complex document management and look forward to your thoughts on its feasibility.
John at Taskade
Thank you for this detailed proposal — chunking long contracts and documents for multi-agent processing is a genuinely powerful use case and one we think about a lot.
Right now, Taskade agents can already ingest PDFs and long documents as knowledge sources, and you can run multi-step workflows where agents hand off outputs to one another. With Taskade Genesis (https://taskade.com/create), you can build a custom app or agent pipeline where one agent splits a document into sections, another processes each section with a targeted prompt, and a final agent compiles the results.
The native "chunk and sequence" workflow builder you described is something we are actively working toward. In the meantime, the agent team + knowledge base combination gets you surprisingly close today. Give it a try and let us know what gaps remain — your detailed outline is exactly the kind of feedback that shapes what we build next. Check the latest at https://taskade.com/changelog.
John at Taskade
Update: This is on our roadmap!
We've prioritized this based on your votes and feedback. We're planning to include this in an upcoming release. Stay tuned for updates!
In the meantime, check out what's new:
- **Taskade Genesis** — generate AI-powered apps, agents, and workflows instantly
- **Community Hub** — 1000+ free templates and apps to explore
Follow our progress: Changelog | Product Updates
Thanks for voting and helping shape Taskade!
John at Taskade
marked this post as
planned
John at Taskade
marked this post as
complete
Shipped!
This is now live across Taskade AI Agents. We've made major upgrades to our AI Agent platform — multi-model support (GPT-4o, Claude, Gemini), custom agent tools, commands, knowledge base, MCP v2, markdown export, and public agent APIs. Your vote helped prioritize this.
Try it now:
- Create an AI Agent with Taskade Genesis
- Clone pre-built AI agents from our Community
Full details: Changelog | Latest Updates
John at Taskade
marked this post as
planned
Update: This is on our roadmap!
We've been shipping fast — including AI Agents, automations, new views, and hundreds of improvements. This specific enhancement is tracked and planned.
Explore what's available now:
- Build with Taskade Genesis — create AI apps, agents, and workflows instantly
- Browse Community Hub — clone templates and AI agents for free
Stay updated: Changelog | Product Updates
ryantaskade
Hi there, here are a couple of clarifications needed:
- You mentioned an integration, but this suggestion seems like a feature request rather than an integration with another app.
- Could you describe the AI-generated prompts in more detail? Is there a baseline goal the agent is supposed to achieve? For example, what is a relevant prompt related to terms of payment? Is it checking its validity, improving it, rewriting it, etc.?
- How do you define chunks? Is it by pages within a PDF file? Or are you suggesting the text content be within a Taskade project instead?
imoprotect
Hi, you are right. It is more of a feature request.
My actual challenge relates to the difficult task of analyzing a longer text (imported as a .txt file) in a project with a single agent, when more than 15 paragraphs are selected at once for the agent to apply the prompt/command to.
Quality is not at its best when longer passages are assigned to the same agent command in one task. Sometimes the agent fails to produce output because the text is too long. The idea is that, in order to get a relevant final output from a longer document, several agents should be deployed, each based on the topic of a specific section and contextually aware of previous agents' outputs.
The goal is to have a relevant output in the end. I define chunks as "selected paragraphs" from the uploaded project document. For example, I might have a contract with 10 sections, each requiring specific knowledge to analyze.
An agent specialising in a specific job would be employed for section 1, another specialised for section 2, and so on. The same retrieval knowledge can be used by multiple agents as commands to be applied.
Multiple agents can be used to analyze documents. Overall, the goal is to improve output quality: the more specialised the agent, the better the results, with agents working together toward the final goal.
Each deployed agent would be aware of the previous agents' context and the overall document, so its output takes the relevance of previous answers into account. It is more of a workflow acting as a document analyst, step by step.
I see it as an automation workflow with the end result of reviewing a document.
At the moment, I can assign sections to agents, but the overall result is gathered from each agent's output with no relation to the previous outputs of other agents. So the final result is agent 1+2+3...10 outputs that are not contextually aware of one another — they do not act as context-aware reasoning results.
The actual workflow would be: first, select paragraphs and assign an agent to each; second, start all agents at once to work together; third, gather the output results into one piece, organized by the specific sections each agent analyzed. For example, CrewAI, an open-source framework for orchestrating role-playing, autonomous AI agents, empowers agents to work together to tackle complex tasks.
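The three-step, context-aware workflow described here can be sketched minimally in Python. This is a hypothetical illustration, not Taskade or CrewAI code: `call_agent` stands in for a real agent invocation, and the key point is that each agent receives all previous outputs as context.

```python
# Hypothetical sketch: agents run in sequence over their assigned sections,
# each receiving the outputs of the previous agents as context.

def call_agent(agent_name, section, context):
    # Stub: a real implementation would invoke the assigned agent,
    # passing previous agents' outputs as shared context.
    return f"{agent_name}: analysis of {section!r} (aware of {len(context)} prior outputs)"

def run_workflow(assignments):
    """assignments: list of (agent_name, section_text) pairs.

    Step 1 (selecting paragraphs and assigning agents) is the input;
    step 2 runs each agent with accumulated context;
    step 3 compiles the results, ordered by section.
    """
    outputs = []
    for agent_name, section in assignments:
        outputs.append(call_agent(agent_name, section, outputs.copy()))
    return "\n\n".join(outputs)
```

Running the agents strictly in sequence is what makes later outputs context-aware; a parallel run (as in the current per-section setup) would leave each agent blind to the others.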
Thank you!
ryantaskade
imoprotect: Thank you for the clarification; I understand your suggestion better now. I think this could be possible once we complete our multi-agent feature. However, the UI does seem quite challenging, since your example, CrewAI, configures its multi-agent feature mostly through code.