Quality is not an act, it is a habit


Facts and AI

- A conversation between Rob Arntz and ChatGPT

You said:

Who wrote the book on FCO-IM?

ChatGPT said:
The FCO-IM book (Fully Communication-Oriented Information Modeling) was written by Herman Balsters, Harm van der Lek, and Lex van Vliet. The book describes the FCO-IM methodology, an information modelling technique that focuses strongly on communication and on explicitly capturing knowledge in natural language.

The FCO-IM method is often used to model complex information systems on the basis of structured communication with end users. It stresses fully understanding the meaning of data in a specific context before the data are implemented in a technical system.

Do you have specific questions about the book or the methodology?


You said:
Who is Herman Balsters?

ChatGPT said:
Herman Balsters is a Dutch scientist and expert in the field of information systems, database technology and conceptual modelling. He is particularly known for his contributions to Object Role Modeling (ORM) and related methodologies for information modelling.

Background and contributions
1. Academic Career:
Herman Balsters has worked as a university lecturer and researcher, among other places at the 'Rijksuniversiteit Groningen'. His academic work has focused on subjects such as databases, semantic modelling and the integration of conceptual models in information technology.

2. Object Role Modeling (ORM):
He has actively contributed to the research and development of ORM, a data modelling technique aimed at modelling conceptual schemas in natural language. ORM is used to design information systems in a way that is comprehensible to both technicians and domain experts.

3. Publications:
Herman Balsters has several scientific publications to his credit in international journals and conference proceedings, often focused on ORM, databases and information integration. His work is important for researchers and professionals engaged in data modelling.

If you are looking for more specific information on his work, I can expand on that!


You said:
Who is Lex van Vliet?

ChatGPT said:
Lex van Vliet is a Dutch expert in the field of information modelling and data management. He is a co-developer of the FCO-IM method (Fully Communication-Oriented Information Modeling) and one of the authors of the book about this methodology. Lex has made an important contribution to the field of conceptual modelling through his focus on communication as the basis for data modelling.

He has specialized in methods and techniques for making complex information structures comprehensible and usable to organisations. His work has been of great value in improving communication between domain experts and IT specialists. Besides developing FCO-IM, he has also lectured, given presentations and developed training courses in the area of information modelling.

Would you like to know more about his work or about specific contributions he has made?


You said:
Tell me all you know about him!

ChatGPT said:
My knowledge about Lex van Vliet is limited to his work and contributions within the field of information modelling. Here's an overview of what I know.

Contribution to FCO-IM
• Lex van Vliet is one of the three co-authors of the book about the Fully Communication-Oriented Information Modeling (FCO-IM) method.
• He developed the FCO-IM methodology specifically to base information models on the communication between people. This makes it possible to formulate data models in natural language, which is comprehensible to both domain experts and technicians.

Professional Background
• Lex has mainly been engaged in teaching, training courses and consultancy in the area of information modelling.
• His expertise lies in making complex concepts accessible and improving the quality of data models in organisations.

Publications
• His best-known publication is the book about FCO-IM. This book is widely used by professionals and students who want to delve deeper into communication-oriented information modelling.


A few days ago my training colleague Rob Arntz sent me the following story about his findings with ChatGPT, a transcript of which you find above:

“Recently, I asked ChatGPT the following, seemingly simple question: "Who wrote the book on FCO-IM?" This yielded a very surprising answer: "Herman Balsters, Harm van der Lek, and Lex van Vliet."

Surprising, because not entirely correct! But also not entirely incorrect. Let us start with the good news: ChatGPT names three authors, which is correct, and one of them is Harm van der Lek, also correct. The not-so-good news: Herman Balsters and Lex van Vliet are not co-authors of the FCO-IM book (ed. P.A.: the correct names that are missing are Guido Bakema and Jan Pieter Zwart; see figs. 1 and 2, top right). A little searching on LinkedIn told me that Herman Balsters is "Associate professor in Modelling and Design of Information Systems, Faculty of Computer Science" at the 'Rijksuniversiteit Groningen'. Further investigation showed me that he is, among other things, co-author of the article "Modeling Dynamic Rules in ORM", together with, amongst others, Terry Halpin. This article contains a reference to the FCO-IM book. Hence he is familiar with the method and, given his background, might well have been a co-author.

Lex van Vliet is a name that means nothing to me at all in the context of FCO-IM. Even by questioning ChatGPT further, I could not find out who this Lex might be or what his link with FCO-IM is. ChatGPT persisted in going round in its self-invented circle.

This shows that the answer to just a single question was partly correct, partly plausible but incorrect, and partly implausible. However, you can assign the qualifications 'correct', 'plausible', 'incorrect' and 'implausible' only if you already know the fully correct answer. And herein lies the great danger of ChatGPT and its peers: usually you do not know the right answer to a question. If every question yielded hogwash as an answer, there would be no real problem; you would simply not believe the answers of AI. But precisely because the answers are often correct, and because AI strongly suggests that its answers are correct, you also believe AI when it gives an incorrect answer. And perhaps it is precisely the plausible yet incorrect answers (such as Herman Balsters here) that are the most dangerous: because of their plausibility you will not doubt them, and you will draw wrong conclusions from them.”
- Rob Arntz, July 2025

Indeed, various sources, such as the quarterly figures presented by Google at the end of July, now show that on the one hand the use of AI for searches is increasing, while on the other hand the use of the internet is changing drastically as a result. AI chatbots provide instant answers, and because these answers are trusted, users are less likely to click through to websites that provide reliable information on, for example, news, research and medical topics. Health sites in particular are seeing a significant decline in visitors, putting many users at risk of wandering off into a world of delusions, with all its consequences.

Rob's story instantly reminded me of one of my previous columns, about a quotation that on many websites is attributed to the physician Thomas Fuller, who lived from 1654 to 1734, often with the addition that the quotation can be found in his book Gnomologia from 1732 (see History of columns about BI, Information management and Information quality).

I said in that column that I have read the book Gnomologia from start to finish and have found the statement in question nowhere in it. And weirder still: the cover of the second edition of Gnomologia, published in 1817, is adorned with the portrait of Thomas Fuller, the churchman, historian and writer, who lived from 1608 to 1661. There are even websites on which the quotation in question is attributed to him. But in all the works of both Thomas Fuller, M.D., and Thomas Fuller, the churchman, I have so far not been able to find the quote in question.

These generally accepted claims and representations, too, contain, just like Rob's story, truths, half-truths and untruths. Finally, in that column, more than two hundred years later, I presented the facts just as I had been able to verify them: for instance, that the quotation in question is widely attributed to Thomas Fuller, M.D., author of the book Gnomologia from 1732. And into the portrait of Thomas Fuller, the churchman, I put the face of Thomas Fuller, M.D., on which, purely coincidentally - or so it seemed to me - a slightly ironic smile can be seen.

What can we learn from this? Well, it is on the one hand encouraging, but on the other also somewhat disconcerting, to note that with respect to the reliability of information AI has advanced to the level of humanity about two centuries ago. Obviously it still has a long way to go to bridge the gap to fully truthful information, but hopefully, thanks to its speed of learning, it will manage to do so in less than two hundred years.

- Peter Alons, July 2025

P.S., August 6, 2025: Today several news sources, including NOS.nl, TIME.com and Psychology Today, reported that due to intensive use of ChatGPT and other AI chatbots more and more people are developing psychoses or other psychological disorders.

 
