When I was studying mathematics at university, the lecturers took great pains to demonstrate the proofs behind every new equation or technique we learned. Nothing was acceptable unless it was backed by a valid proof.
In applied mathematics we learned about initial and boundary conditions: how the values given by an equation can depend on the values you start with and on those at the edge of your model. We learned the range of values over which an equation could validly be applied.
While many equations were given without formal proofs in my physics classes, we were still taught that they applied exactly only to theoretical models, that those models made assumptions about the real world, and that the assumptions were critical to understanding where the equations could be applied.
The real world is much messier than classroom theory, but the principles still hold. It is not enough to be told something is true. You need to understand whether the reasoning is valid and under what conditions it holds true.
It may not be possible to do this by formal logic, especially in human affairs. Indeed, what is held to be true may depend on the observer. What we can do is question the observer: consider their position in the event, their biases, their sources. We seek alternative perspectives and supporting evidence. Ideally we form our own opinion, while remaining prepared to change or refine it should contradictory evidence come to light.
Unfortunately, the complexity of life means that the time-consuming search for truth often gives way to shortcuts and a reliance on trusted sources of information. The new generation of artificial intelligence tools is seen as one such potential source.
These AI tools are inference engines, relying on the statistical processing of ingested data to derive relationships from it. They have no actual understanding of the sources, their validity or their context.
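To make that point concrete, here is a minimal sketch in Python of the kind of statistical association such systems are built on. The corpus and names are invented for illustration, and a real system is vastly more sophisticated, but the principle is the same: the model predicts what comes next purely from patterns in its training text, with no notion of whether that text was accurate, biased or nonsense.

```python
import random
from collections import defaultdict, Counter

# A toy corpus standing in for "ingested data". The model has no way to
# tell whether any of these statements is true; it only sees word patterns.
corpus = [
    "the model is trained on text",
    "the model is confident",
    "the text may be wrong",
    "the model repeats the text",
]

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        bigrams[current][following] += 1

def next_word(word):
    """Pick a likely next word purely from co-occurrence statistics."""
    counts = bigrams.get(word)
    if not counts:
        return None
    candidates, weights = zip(*counts.items())
    return random.choices(candidates, weights=weights)[0]

# Generate a short continuation. The output looks fluent, but nothing in
# the process checks sources, validity or context.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```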
When a user searches for information on the Internet, they can use a number of techniques to judge the responses. When they inspect a link they can see whether it is hosted on a reputable website; they may be able to check the author, look for comments questioning the information, or scan a list of alternative sources.
The danger with an AI is that it will just confidently provide a response to a query without exposing how it obtained that response. It will not show its sources or the assumptions and biases implicit in them. Questions, alternatives and doubts may not be included.
The greater dangers are that humans will accept that response without question or doubt, and that decisions will be made without human involvement and without recourse.
Too often leaders are prepared to outsource responsibility for decisions to external parties, be they reports from consultants or computed models. And how often will we accept another's word without question?
Artificial intelligence can be a useful tool but it is not necessarily a trustworthy one.