It would be interesting to watch for sure, but I wonder if they might correct each other or collaborate in some way that could be lightly supervised to produce an output.
Absolutely. It is not thinking in the same way we do.
Putting aside the planning orchestrator and focusing just on the LLM.
The agent can do this in stages, trying to decide what the complete set of input tokens should be and at what point to stop requesting more output tokens.
You can use the orchestrator approach to have other models validate the outcome and refine it - but it’s all just prodding the statistical model.
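A minimal sketch of that generate/validate/refine loop, with stub functions standing in for the actual model calls (every name here is hypothetical - in practice `generate`, `validate`, and `refine` would each prod a different model):

```python
# Sketch of a lightly supervised generate -> validate -> refine loop.
# generate(), validate(), and refine() are hypothetical stand-ins
# for calls to different models; here they are simple stubs.

def generate(prompt):
    # Stand-in for the primary model producing a first draft.
    return f"draft answer for: {prompt}"

def validate(output):
    # Stand-in for a second model checking the draft.
    # Stub rule: accept only outputs that have been refined.
    return output.startswith("refined")

def refine(output):
    # Stand-in for a model revising a rejected draft.
    return "refined " + output

def orchestrate(prompt, max_rounds=3):
    draft = generate(prompt)
    for _ in range(max_rounds):
        if validate(draft):
            return draft   # validator accepted; stop prodding the model
        draft = refine(draft)
    return draft           # give up after max_rounds and ship what we have

print(orchestrate("explain X"))
```

The key design point is the `max_rounds` cutoff: since the validator is just another statistical model, the loop needs a hard stop rather than trusting it to converge.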
This talk was interesting. I’m a lot less enthusiastic about the topic than the speaker… but this is closer to how I think the industry can see a net gain from AI - before the slop errors from taking expertise out of the loop hit critical mass.