Something weird is happening with LLMs and chess.
Are they good or bad?
Read in full here:
This thread was posted by one of our members via one of our news source trackers.
Been playing around with LLMs, but it feels like writing the right prompt is a trial-and-error thing.
Using a tool for something it doesn't support makes no sense, so the results aren't worth analysing.
1. e4 e6 2. d3 c5 3. Nf3 Nc6 4. g3 Nf6 5.
With input data like that, it's pointless to show the first 4 moves on the graph. Saying:
Wow, recent LLMs can sort of play chess! They fall apart after the early game (…)
is like saying:
While the input data was good, the results were bad
since, as we can see, the LLMs were losing right around those 4 initial moves.
Since OpenAI is lame and doesn’t support full grammars, for the closed (OpenAI) models I tried generating up to 10 times and if it still couldn’t come up with a legal move, I just chose one randomly.
so …
Because the runner was crippled, his time was randomly chosen from a pool of possible times.
And? How does this add anything to the discussion if the author generated part of the output, or possibly even all of it?
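For concreteness, here is a minimal sketch of the retry-then-random-fallback the quoted passage describes, assuming the python-chess library; this is my reconstruction, not the article's actual code, and `generate_san` is a hypothetical stand-in for the LLM call:

```python
import random
import chess

def pick_move(board: chess.Board, generate_san, max_tries: int = 10) -> chess.Move:
    """generate_san is a hypothetical stand-in for the LLM returning a SAN move."""
    for _ in range(max_tries):
        try:
            # parse_san raises a ValueError subclass on illegal/unparseable moves
            return board.parse_san(generate_san(board))
        except ValueError:
            continue  # ask the model again
    # After max_tries failures, fall back to a uniformly random legal move --
    # exactly the step objected to above, since those moves come from the
    # harness, not from the model.
    return random.choice(list(board.legal_moves))
```

Depending on how often that fallback fires, some share of the recorded moves never came from the model at all, which is the point of the runner analogy.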
You are a chess grandmaster.
(…)
1. e4 e6 2. d3 c5 3. Nf3 Nc6 4. g3 Nf6 5.
It just limits the number of possibilities … The linked article never mentions how the games were rated by a chess engine. There is no information on how many blunders and how many mistakes there were, and no tool was used to estimate whether the moves looked random or whether there was actually some plan behind the game.
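As an illustration of the kind of check that is missing, here is a rough sketch of counting blunders with Stockfish via python-chess; the centipawn threshold, search depth, and engine path are all my assumptions:

```python
import chess
import chess.engine

def count_blunders(moves_san: list[str],
                   engine_path: str = "stockfish",  # assumed to be on PATH
                   blunder_cp: int = 200,
                   depth: int = 12) -> int:
    """Count moves that drop the mover's evaluation by >= blunder_cp centipawns."""
    board = chess.Board()
    blunders = 0
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        for san in moves_san:
            before = engine.analyse(board, chess.engine.Limit(depth=depth))
            board.push_san(san)
            after = engine.analyse(board, chess.engine.Limit(depth=depth))
            # Both scores are relative to the side to move, which flips after
            # the push, so the mover's loss is the sum of the two values.
            loss = (before["score"].relative.score(mate_score=10000)
                    + after["score"].relative.score(mate_score=10000))
            if loss >= blunder_cp:
                blunders += 1
    return blunders
```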
Yet people found that LLMs could play all the way through to the end game, with never-before-seen boards.
Yes, chess engines do the same and better, so what? It's not really hard to write a simple algorithm that filters all moves down to the legal ones, makes one, and checks whether the game is over. It's a small surprise that there was no forced draw, but I guess even the weakest level can avoid that.
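A minimal sketch of such an algorithm, again assuming python-chess rather than anything from the article:

```python
import random
import chess

board = chess.Board()
while not board.is_game_over():
    # board.legal_moves already filters out everything illegal,
    # so any choice from it is playable by construction.
    board.push(random.choice(list(board.legal_moves)))

print(board.result())  # "1-0", "0-1" or "1/2-1/2"
```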
The results were not generated from nowhere; there always has to be a source. While asking descriptive questions often helps, it may drastically decrease the number of possible results. In Google search, for example, if you do not force a specific term, the engine looks for similar ones and the results may not always be the best.
LLMs also prefer mainstream narratives, for example a preference for renewable energy among possible energy sources despite its disadvantages. The most popular LLMs are made by huge companies, and those can support anything, including the worst things and ideologies, as long as it isn't against the companies themselves. Good results were never the highest priority.
At first we may be surprised by gpt-3.5-turbo-instruct, but then we notice that gpt-4o gives better output at the start, so it's not "just better than the others" - it's just different. If it's different (whatever that means), it's not really worth comparing them. It's like comparing 2 LLMs, each based on extremely different sources with an ideological background, and being surprised that they discuss whether the best ideology is Nazism or Stalinism.
And yeah … as always … this is the powerful "AI" that is supposed to take our jobs and destroy humanity. I know chess only for fun and I'm still better than an LLM that possibly contains information about thousands of chess games. The only thing this article has definitely shown is that LLMs are far, far away from becoming an AI.
I agree, but let's see in a couple more years.
This way is not efficient at all … It's like monkeys typing random text. Sooner or later they would surely produce a book, but by the time that happened we would not have many trees left.
Fine, we would have better hardware, but there would also be much more data, and the two scale in very different ways. Growing data impacts speed faster than newer hardware improves it. Simply look at how much data we add to the internet yearly and how that grows every year. Current chat bots do not return results from YouTube videos, and every day we have more and more videos and other content.
Not only are the numbers big, but we also have more and more formats. Some time ago HEVC was relatively new and AV1 was worse than it; now AV1 is starting to become the next standard codec, while people still use hardware that supports at most AVC. Parsing and understanding all of the data in all current and new formats without dedicated algorithms is terribly hard, if possible at all. Assuming a true AI can be created, that might even be the easier task.
We should also not forget that many users have reported worse results recently.
We humans have problems distinguishing facts from fakes, and now we are supposed to be the ones training the algorithms? It's like trying to create a video game thousands of years ago. We take one step forward and 2 steps back, and pay billions for it.
For now the revolution looks like evolution. Of course we are speeding some things up, so some progress has definitely been made, but just as LLMs are not even close to AIs, the current progress is not even close to deserving the name revolution. That said … I don't see anything wrong with it - we can still find a job and we still live.