Introduction
Artificial Intelligence: we hear the term everywhere, everyone is talking about it, and every company seems compelled to integrate AI into its work, whether that is useful or not. What do I think about that myself? Below you can read my honest opinion (as of October 2025).
AI = machine learning
AI, or what used to be called “machine learning”, has existed for a long time. I remember that, long ago already, machine learning was applied to reach specific goals. In such a case, you don’t tell the computer “if this happens, then do that”; instead, you feed it a whole series of example situations or applications, so it can use those to calculate what the result should be, without explaining how it came to that conclusion… and therefore without giving us a chance to verify whether it is correct! And exactly that is the danger for me, as I like to verify things, so that I know what I base my ideas and opinions on.
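To make the “feed it examples” idea above concrete, here is a minimal, purely illustrative sketch in Python (assuming scikit-learn is available); the toy weather data, labels and example question are invented for the occasion:

# Supervised machine learning in miniature: no explicit "if this, then
# that" rules, only labelled examples for the model to learn from.
from sklearn.ensemble import RandomForestClassifier

# Invented toy examples: [hours of sun, mm of rain] -> nice day for a walk?
X = [[8, 0], [7, 1], [6, 2], [2, 10], [1, 15], [0, 20]]
y = ["yes", "yes", "yes", "no", "no", "no"]

model = RandomForestClassifier(random_state=0)
model.fit(X, y)

# The model readily produces an answer for a brand-new situation...
print(model.predict([[5, 3]]))

# ...but it offers no explanation of how it reached that conclusion,
# which is exactly the verification problem described above.

The point is not the prediction itself but the opacity: we can check the input examples, yet the reasoning behind the answer stays hidden.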
What for? And what not for?
I do think the application of machine learning or AI for specific goals is still very relevant: given a clearly defined scope and clear, verifiable knowledge as input, the system will simply generate the logical result. Because the input can be verified, I would dare to trust that the output is also pretty good.

But then came our friends at OpenAI. Basing themselves on “everything they could find on the internet”, they built a language model (all of this “knowledge” is still based on words, of which the computer has no clue what they actually mean), and whatever you ask, the system will think up an answer. Think up, meaning that even without the right data it will produce something, or it will paste together things that don’t belong together at all, just to generate an answer: the so-called “hallucinating”. And where this can give funny and original results when creating a poem about a certain subject, it is obviously less ideal for representing facts. As a science adept (where knowledge is based on experience, testing and verification), I really don’t think this is a good evolution.

On top of this, Google was caught off guard by OpenAI, so they had to rush their own model (with the same flaws) into production, with all the consequences that entails. Google is the go-to source of information for the majority of our population, and it shows AI results above regular search results by default, so everyone assumes they are correct. Few will actually verify the answers (even when sources are displayed), because Google will be correct, won’t it? Personally, I often try to verify the answers through the links that come with them, and sadly, in more than half of the cases I have to conclude that the result is not (completely) correct. When I asked which restaurants served vegetarian dishes, I got a promising list of places, but most of them didn’t serve what was claimed, and some of the listed restaurants didn’t even exist: typical hallucination, because linguistically everything seemed fine!
The result is that many people will accept half-truths, and where that is merely annoying when looking for a restaurant, it can be problematic for important subjects. Doctors already struggle with patients who consulted “Dr. Google” and think they know the answer; but if a system starts assigning diseases based on language alone, without a scientific basis (since certain symptoms can be attributed to many diseases), it could become genuinely dangerous. On a weekly basis I hear people say “ChatGPT told me that…”, unfortunately without adding “and I verified it, and it turned out to be correct”. Out of convenience, people ask a question and accept the answer, in ChatGPT as well as in Google. Agentic AI is the new holy grail, because then we don’t even have to do anything with the results anymore; the computer will do it for us. Whether we will all be happy with the hotel that AI booked for us remains to be seen, but at least we can hide behind AI, since we didn’t take that foolish decision: it was the computer…
Sometimes I think back nostalgically to the time when we had to dig through several articles or several websites to find an answer. Much more difficult and time-consuming, I completely agree, but the source of the information was immediately a measure of the trustworthiness of the result!
Where will it end?
At the moment we are still in a “race to the top”: everyone wants to launch as many new AI things as possible and ship new models (whether they work well or not apparently doesn’t matter), the more the better. Personally, I think (or should I say “I hope”) that the many problems, inaccuracies, hallucinations and so on will lead to unhappy users, and that the race for the highest quantity will be replaced by a race for the highest quality. Imagine new systems in which the level of accuracy is a factor, and which might even say: “I don’t know the answer, but please come back soon, as I will probably have more data to answer this question.” Or who knows, maybe there will be an AI system that goes looking for specific, trustworthy sources in order to arrive at an answer with a solid foundation? We can only dream, right?