I am vibecoding an app now, so I started keeping docs in the repo simply because everything else is in the repo. But now I'm relying on this even more. The reason is that session problems happen - context compaction, running out of tokens mid-implementation and continuing later, etc. So on several occasions, the agent started writing a feature based on my detailed explanations, then finished however it felt like finishing, because those detailed explanations were lost. So now it's always:
Keep my ToDo exactly as I phrased it, IN A FILE
Plan out the implementation, IN A FILE
Now there's zero chance any detail of my request gets lost completely. And even if something happens mid-implementation, the chat has everything needed to trace back and see what was done and what wasn't. We already had a strict TDD policy, so now both tests and docs exist long before any coding actually happens.
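For me that looks something like this - the file and folder names here are just an example of the idea, not a fixed convention:

```shell
# Hypothetical repo layout: persist the ToDo and the plan as files
# so the agent can recover them even after context loss.
mkdir -p docs

# The ToDo, kept verbatim as I phrased it
cat > docs/TODO.md <<'EOF'
# ToDo (verbatim, as I phrased it)
- Add CSV export to the reports page
EOF

# The implementation plan the agent writes out before coding
cat > docs/PLAN.md <<'EOF'
# Implementation plan
1. Write failing tests for the export endpoint
2. Implement the endpoint
3. Wire up the UI button
EOF
```

Since both files are committed, a fresh session can just read them back instead of relying on chat history.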
AI hate on lemmy is strong. Admitting that you vibe code is enough to get an avalanche of downvotes. (I didn't downvote you, I just happen to know how it is around these parts)
Yes, I had the same idea. It's just weird, because the post is about AI, so if someone dislikes it that much, they could just downvote the post without opening it. I hoped someone might have other ideas on the topic. Even though I agree that AI datacentres have a devastating environmental impact and need to be regulated better, LLMs as tools are super useful for some kinds of work - I'm an IT manager, and before AI, if I wanted to build something on my own, I would have needed to hire developers (I can write some simple code, but not at the level of building a good application from scratch), but now I'm a one-man team. "Don't blame the gun…"
Downvoters - care to elaborate?