Better without AI

How to avert an AI apocalypse… and create a future we would like

Read in full here:

This thread was posted by one of our members via one of our news source trackers.

IMHO, it will be bad for society if we become too dependent on letting AI solve everything; people will get lazier.


They will not. They would be just as lazy as they were when using other services and sites like Wikipedia. The only difference is that they would do much less of the work themselves, which, especially for learning purposes, is self-defeating. :books:

Regarding the article … Ehhh … the title says enough; I guess it’s not worth reading at all. Better without AI? Could there be a worse title? It’s like saying Better without knives. :see_no_evil:

Firstly, I’m completely against calling every “smart” algorithm AI. After all these years of work on “AI” stuff, I’m still receiving ads for things I would never be interested in. Is this the same “AI” that is supposed to kill every human in the world? Seriously? Oh, “now I understand” why people believe in God: we are saved! :ring_buoy:

Secondly, assuming that the only scenario is being killed by AI says a lot about either the minds of the writers or the minds of the AI creators; choose wisely. Imagine you have a child that everyone treats as a serial murderer. How do you expect it to behave in the future? Seriously, if you know that some corporation is creating something to destroy humanity, say so clearly, and if not, then please stop this mass suicide process. Wouldn’t it be ironic if an AI that would normally never even consider destroying humanity ended up fighting humans in self-defence because of all that hate? On some topics, and hopefully in all cases, people should think twice before sharing their apocalyptic visions of the future. :thinking:

As always … never put yourself at an extreme. Instead, find the best solution somewhere in the middle. We don’t need AI to destroy our world. We have nuclear, biological, chemical, weather and many other types of modern weapons. Do you think that “someone at the top controlling our world” doesn’t already have enough toys? What nonsense! :gun:

There are also tons of other scenarios, which don’t even require the creation of true AI, that could cause huge damage. Just think what would happen if AI turned out to be nothing like in the movies, and our entire world were instead filled with “just” machine-learning “smart” devices. If, for many years, people stopped working on hardware and software and it developed itself, better or worse, then someday, sooner or later, it would fail, and there would be nobody left who could repair it. OK, maybe nobody would be killed directly, but the degradation of our technological level would affect everyone, even outsider communities living without any technology. Just think about a horde of millions or billions of people who have no idea how food appears in the store, fighting over a single fruit in the forest. Think about a huge migration to the forests and to the farms of those outsiders in search of food. Do you think something like this could not happen? Well … I’d guess there is a bigger chance of that than of a “random” AI decision to kill us, especially because it could happen even before true AI is created. :bulb:

Did anybody stop governments from working on nuclear weapons? Or on the other weapons mentioned above? No? So what exactly did we win every time such a weapon was created? We need to understand how the world works. We cannot stop progress, but that doesn’t matter as long as we can decide how it affects us. True AI will be created sooner or later. Before it is used for military purposes, trained for killing and exploited by politicians, we need to influence the conditions under which we allow AI to be created, finding a way to a better future instead of stagnation or even regression. :chart_with_upwards_trend:

Politician A: We need to increase taxes on VAT, healthcare and size-47 shoes!
Politician B: What’s the reason for a bigger tax on size-47 shoes?
Politician A: See? Nobody asked why VAT and healthcare costs were increased!

Don’t get involved in such “political games”, don’t fight with each other, and never sit at the extremes. With such simple rules we can make our world better! :rocket:


They will, and they already are. See this article cataloguing some of the “AI pollution” in scientific papers.

Part of the problem is that, even if most people use it responsibly (as a tool to help understand research, find holes in your work/thinking, get ideas etc), the internet is already becoming totally swamped with endless low-effort, low-quality “content” churned out by LLMs. In that sense it is not like other tools.
Yes, people could be, and have been, lazy by plagiarising from Wikipedia etc., but that was like a knife compared to the cluster bomb that is LLMs. Plagiarism and “content mills” were a problem before, but the scale of the problem has increased massively now.

Also, after my limited experiments using AI as a programming tool last year, I learned that it could be useful for getting started on “donkey work” tasks like setting up a basic webpage with some interactivity, or “take this webpage styled with Bootstrap, and replace it with Bulma.css”. But it quickly became less helpful for complicated work. Worse, I got the feeling that relying on it too long starts to switch off your problem-solving habits. Since then I’ve only used it as a sort of sounding board for “rubber duck debugging” and problem analysis or idea generation.


That’s a problem with your perspective. As I said, people would be able to do less work with AI. Apart from that, though, it’s no different from a knife. Why do people usually not use knives when fighting each other? Even more: they could solve their problems with guns! Yet there is no proof that allowing people to buy guns increases violence. In fact, it often decreases violence, because an attacker knows they can expect a response.

You would say that this doesn’t apply to AI, right? That’s the point. On one side, people have learned how to “properly” use knives, guns and so on, but on the other side, they know that AI will be used for plagiarism. Why do they think so? Because, unlike with knives and guns, there was never any proper education on how to learn properly. The problem is much bigger than you think, and it began long before the first PC was introduced (1819).

The current education system creates “corporate rats” and soldiers, i.e. people who will follow orders even when they don’t make any sense. When writing tests, you are not expected to give your own thoughts. Even if you make a few mistakes, as long as you use some keywords the examiner expects, you pass the test.

People are simply not motivated to find solutions. They don’t understand how valuable learning something is. However, they are “motivated” not to use knives and guns when fighting.

Great! You have already failed! As I said, that’s a problem with the way you are thinking. Why do you even think it’s good if AI does the problem analysis? How will you progress if AI analyses all your problems for you? It’s not only about how you use AI, but about what questions you are asking. As I said, it’s no different from other tools, such as asking the very same question on a forum. All you gain here is speed.

You should not use AI to fix your problems, but to fix your way of thinking. That’s what I do when I can’t find information on my own. I express what I’m looking for and let the AI search for me. Then the most important part is not the results, but the query itself, i.e. the keywords I had not used, the points I was not thinking about, the sites I did not consider or know about before. The solution to the problem is just a consequence of fixing that primary problem. With that information I can solve even complex problems with AI.

Think about it another way … What’s the difference between buying something and stealing it from a shop? In practice, nothing. You take the product home in exactly the same way. The only difference is whether or not you pay for it. We fail to see that tiny difference, and it affects our entire lives. It’s no shame to rely on AI, because we rely on other humans in exactly the same way; we cannot live without each other at the same level of civilisation. However, the path we take, even when doing the same thing, can be completely unacceptable.

The solution to the so-called “AI problem” is as simple as fixing our education system. In theory it’s easy, but in practice no government or corporation would agree to it. If you are falling from a cliff and you are going to die anyway, then it doesn’t matter whether you speed up 2, 10 or even a million times; at the very end you die. That’s why I have always said that we do not need AI to destroy everything. Even if AI never gains awareness and therefore never takes control of our world, we may fall anyway. The scale of AI only exposes the problems we have pretended not to see for hundreds of years. We have to understand this small difference and change the system before it is too late.


There is overwhelming evidence that easy access to guns leads to increased violence:

  1. In the global list of violent death rates per country, there is a very clear tendency towards higher rates in countries where guns are easily available, even in first-world countries.
  2. Countries that have banned guns always experience a huge drop in violent deaths; for example when Australia (mostly) outlawed gun ownership in 1996, suicides, killings and armed robberies decreased significantly.

I’m not sure if you read my entire comment. As I said, I mostly stopped using LLMs for exactly that reason. Now I use it (quite rarely) as a research tool and to help find errors in my thinking.
But many, many people will continue to use it to pump out low-effort garbage. The tools are quite attractive to people who want to do that.

If you’ve ever worked in the education system, you wouldn’t say something like “as simple as fixing our education system”. People have been trying to do that for 100 years, and it is exceptionally difficult.

We fully agree on that point. In fact many companies have been using AI as a sort of cover story to deflect criticism of their existing unethical practices. When a company exec introduces obviously discriminatory policies, they can (and should) be held to account. But if they put a thin layer of AI on top of it, they can argue that it’s neutral and fair, and that they can’t be held responsible.

So the point is not that AI is by itself “bad”. I did an AI-related research PhD over ten years ago because I believed the field offered us tools to improve people’s lives. The problem is how it (especially the LLMs that have become popular recently) can be and is abused by companies and individuals.
Unfortunately, “just fix the education system” is not a solution, any more than “just fix the mental health system” was a solution to violence. If it was easy, they would be fixed by now.


That’s the worst type of lie, i.e. statistics. With statistics you can prove literally anything. Which country have you heard the most UFO incident reports from? Well … isn’t that clear evidence that aliens are making people more brutal with some weird experiments?

OK, so if guns cause everything bad in the world, then why are we talking about AI at all? Surely unarmed children in schools simply decided that it’s not worth bullying others any more. Surely gangs buy their weapons fully legally, to make themselves easier to catch, because otherwise they would … I don’t know … not feel a thrill? That’s nonsense! Haven’t you learned anything about the “experts” in the mainstream media?

I can also give you an example. Many years ago in Poland the police did their best to make the streets much safer. You no longer need to worry, every time you go to a bus stop, about teens bullying others. As a result of those police actions, all kinds of crime decreased drastically. But wait! I don’t remember any change in gun laws, so how is that possible?

People don’t randomly grab guns and start shooting for fun. Some of course do, but we call them crazy. In “modern” countries, instead of banning guns for everyone, some checks are introduced: for example, previous criminal activity, mental state and so on. It’s obvious that if a community’s problems are ignored, sooner or later terrible things will happen.

One time it’s about guns, another time it’s about pills. Banning guns does not magically cure people’s minds. They will commit suicide one way or another. So what would change? People would feel that you are preventing their suicide without actually helping them. It’s like joining the bullies. Guns, or bans on them, are never a solution. As always, people need to talk with each other and understand each other’s problems. Victims need to be sure there is someone they can trust.

Banning AI … banning guns … what’s next? Banning painkillers? Both AI and guns expose the root issue, and what you are proposing is curing the effects of the root issue while leaving the roots untouched. That absolutely never ends well. I have no idea what happened in Australia, but it’s impossible that all the country’s institutions worked in exactly the same way. Merely making something legal or illegal, or literally any change on paper, will not fix anything. In Poland there were people who tried to abolish poverty by law, and guess what … it worked! Yes, that’s confirmed: paper accepts everything; reality does not.

I did. I even quoted it above. I misread your sentence: Since then I’ve only used it as (…) and problem analysis or (…). I thought you were saying that AI does the problem analysis for you; sorry. :sweat_smile:

I agree with everything in that point except the part about individuals. It’s a freedom to not learn and to stay ignorant about one or many topics. For example, a good thing about people in the USA is that they can do a “half-fix”, so they can keep their car running until they reach a mechanic. Because of problems with my eyes I cannot drive a car at all, so I really don’t care. As long as I understand what I copy-paste about cars, that’s good enough.

The real problem is companies. One of the bad scenarios is that AI (better or worse) becomes fully paid. It looks like at least some companies have the really good practice of keeping older versions of their AI fully free. If all AI services became paid in the long term, people with more money would drastically widen the gap in possibilities between themselves and everyone else. I wish old operating systems like Windows XP would become fully open source. Well … dreams … but it could definitely change the “rules of the game” in this field.

For me it’s almost exactly the same. The education system should teach you not only how to write, read and count, but also about culture and all the other important things; that should do the job in the long term. Why the long term? Simple: children also learn from their parents. The habits from the old system are not going to disappear just because the content of the books changes; rather, it’s a matter of time, if you can think long-term.

As I said, it would not be fixed today, tomorrow or on any other single day. No government or corporation would be happy about losing their core “manpower”, and lobbies would do their best to stop such changes from moving forward. As I also said, the Prussian education system that produces corporate rats and soldiers has been with us since 1819. If there were a will for change, it would have happened many years ago.

Look, for example, at the testament of Tadeusz Kościuszko. One of the founding fathers of the USA could easily have taken part of the money, as long as he released his slaves, with the rest of the money used to give them a good start. What happened? Jefferson had other plans, refused the will of his friend, and the money was sent to someone in Tadeusz’s family. Even though it was a good will, even though it could have changed the history of the USA and the world, it was still rejected by politics. Similarly, I don’t think any politician will fix the existing education problems.

The future is obvious. Unless people truly unite against politicians (just to be clear: not against countries), the same will happen all over again. AI will be used just like guns or hacking tools. It will cause damage and earn lots of money. People will soon accept it, along with all the negative consequences. The world will not be destroyed. The only winners will be the so-called “Big Tech” companies.


The cat is out of the bag. No matter what your personal feelings are, the future will have AI systems in it (for better or worse).


Saw this and thought of this thread:

I think it’s a good overview of the capabilities and why we should pay close attention.

I also agree with some of the points in this thread (from both sides). Like most things, AI will almost certainly be used as a weapon as well, just like guns, chemical weapons, biological weapons, even psychological warfare. Not sure how we can prevent abuse of it, though…