Can GPT-4 *Actually* Write Code?

Can GPT-4 Actually Write Code?
I test GPT-4’s code-writing capabilities with some actual real-world problems.

Read in full here:

This thread was posted by one of our members via one of our news source trackers.

It seems that right now, it can handle small/low-code apps. Should programmers be afraid of what future versions of ChatGPT will be able to do?

1 Like

The next step is to develop a system that can generate the code, so that we only have to focus on integrating it.

1 Like

Read this tweet from John Carmack:

1 Like

I think I’d agree with him on that basis - one day we’ll just be able to tell a computer what we want and it’ll output a finished product :upside_down_face:

I’m not entirely convinced that this will be the case in the context of the discussion. In a certain sense it is already true: realistically, that is what software developers do right now. We don’t usually write instructions for the machine; we write specifications that are exact enough to become products, and have a program turn those specifications into actual products. A programming language exists to make it easier to specify the requirements in enough detail for them to actually become a program.

I think that is what is missing from the current “AI will take over everything” line of thinking. To make something work, you end up needing people who understand the business, the situational context, and the people who are asking for a solution, so that they can take some hand-wavy description of what it should be and figure out what it really should be, in the form of a detailed enough specification.

I can see that there will be huge improvements in terms of helping people rough things out, and in making solutions to general problems simpler for anyone to apply to their use case, improving both the barrier to entry and productivity. But I find it hard to believe that the gap between how well people can describe what they want and how well they really need to describe it is going to close that far. A machine that is good at guessing what they might want is unlikely to get far beyond fairly basic cases without someone who knows what they are doing interjecting. The other problem is that answering the question “why doesn’t this do what I want it to?” becomes quite murky when nearly all the decisions about what it actually does are good guesses rather than anything specified anywhere.

In the end, for most cases, building the thing is mostly easy. The vast majority of our job is figuring out, or finding out, the details that no one else is really thinking about. I can see AI changing the tools a lot, but I can’t see it solving, in the near future, the real problem that most software developers are there to solve. In that sense I agree with what Carmack is saying: being a highly skilled person with tool X doesn’t matter much.

1 Like

I think it will be the distant future before computers/AI can build real-world complex systems from messy specifications and requirements. :slight_smile:

1 Like

Do you reckon? :upside_down_face:

I think GPT-4 can write code, but it can’t write code without a programmer. The programmer divides the work into small tasks and gradually assigns them to the chatbot, which step by step helps the programmer write the program. This requires skill from the programmer, not from the chatbot.
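The workflow described above is essentially a loop: the programmer breaks a feature into small, precisely specified tasks, sends each one to the model, and then integrates the answers themselves. A minimal sketch in Python, where `ask_model` is a hypothetical stand-in for a real chatbot call (the function names and the stub are mine, not from the thread; a real version would call an actual model API):

```python
from typing import Callable


def build_program(subtasks: list[str], ask_model: Callable[[str], str]) -> str:
    """Programmer-driven workflow: send each small, well-specified task
    to the model, collect the snippets, and integrate them."""
    snippets = []
    for task in subtasks:
        snippet = ask_model(task)  # the chatbot writes one small piece
        snippets.append(f"# task: {task}\n{snippet}")
    # Integration is still the programmer's job: here it is a naive join,
    # but in reality it means reviewing, testing and wiring pieces together.
    return "\n\n".join(snippets)


# Hypothetical stand-in for a model call, so the sketch is self-contained.
def ask_model(task: str) -> str:
    return f"def step():  # generated for: {task}\n    pass"


program = build_program(["parse the input file", "validate the records"], ask_model)
print(program)
```

The point of the sketch is that the skill sits in choosing the subtasks and doing the integration, not in the `ask_model` call itself.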