• Tangent5280@lemmy.world
    English · 8 days ago

    …and then you ask the agent to implement the next logical piece. It comes back with something that contradicts what you instructed three sessions ago. A dependency appears that you explicitly said was forbidden. A boundary gets crossed. A pattern gets abandoned in favor of something the model apparently prefers from its training data. The code is often technically correct, but it is wrong for your project.

    I hit that moment a lot. It annoyed me greatly. And then I started thinking hard about why it kept happening.

    The reason it keeps happening is that OP is an idiot, using a sophisticated Markov chain to try to automate away the only parts of his job that are actually self-actualizing. They will write another article when they eventually lose their job to some VC-backed con artist who shows their boss a presentation of capabilities that OP, too, falsely believes AI has.