Why you shouldn’t use AI to write your tests | Swizec Teller
If you derive tests from your implementation, you can’t apply the Beyoncé rule. What if the code is wrong and that wasn’t the programmer’s intent? We’ll never know: the bug now exists in both places.
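A hypothetical sketch of the failure mode: `total_price` and its bug are invented for illustration, but they show how a test generated *from* a buggy implementation simply encodes the buggy output, so both the code and the test pass while the intent is violated.

```python
def total_price(prices, discount):
    # Bug: the discount is applied per item AND again on the total,
    # so it silently compounds. The intent was a single discount.
    return sum(p * (1 - discount) for p in prices) * (1 - discount)

def test_total_price():
    # A test derived from the implementation asserts what the code
    # *does*, not what the programmer *meant* (10.0 for this input).
    assert total_price([10, 10], 0.5) == 5.0

test_total_price()  # passes, yet the behavior is wrong
```

An independently written test would have asserted `10.0` here and caught the compounding discount immediately.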
Read in full here:
This thread was posted by one of our members via one of our news source trackers.
It is the same with production code. If programmers just let AI write their code without understanding it, or even attempting a version of it on their own, then when bugs appear they won’t know how to fix them.
I agree, and I saw this first-hand with a junior programmer at my company. Claude 3.5 and ChatGPT are really useful if you make the effort to understand what the tool generates for you; if you just copy-paste the generated code, it’s a disaster waiting to happen.