This approach is not efficient at all … It’s like monkeys typing random text. Sure, sooner or later they would produce a book, but by the time that happened we would not have many trees left.
Fine, we will have better hardware, but there will also be much more data, and the two scale in very different ways. Growing data hurts speed faster than newer hardware helps it. Just look at how much data we add to the internet every year and how that amount keeps growing. Current chatbots do not return results from YouTube videos, and every day we have more and more videos and other content.
Not only are the numbers big, but we also have more and more formats. Some time ago HEVC was relatively new and AV1 was worse than it. Now AV1 is becoming the next standard codec, while many people still use hardware that supports AVC at most. Parsing and understanding all of that data, in every current and future format, without dedicated algorithms is terribly hard, if possible at all. Assuming a true AI can be created at all, that might even be the easier task.
We should also not forget that many users have reported worse results recently.
We humans have trouble distinguishing facts from fakes, and now we are supposed to be the ones training the algorithms? It’s like trying to create a video game thousands of years ago. We take one step forward and two steps back, and pay billions for it.
For now, the revolution looks more like evolution. Of course we have sped some things up, so some progress has definitely been made, but just as LLMs are not even close to true AIs, the current progress is not even close to deserving the name revolution. That said … I don’t see anything wrong with it - we can still find jobs and we still live.