The spread of fake news has become one of the central threats to democratic societies and public trust.
The dominant response has been technical and regulatory: more fact-checking, more verification, more automated moderation.
While necessary, these measures address only part of the problem.
Classical information systems operate with a reduced concept of truth: a statement counts as information only insofar as it can be verified as fact.
Human communication, however, does not function on facts alone. Meaning, emotion, context, and experience play a decisive role.
When systems recognize only machine-verifiable truth, two effects occur: legitimate meaning and emotion are pushed out of the system, and they re-enter it disguised as factual claims.
This environment is fertile ground for fake news.
Fake news is not primarily a matter of false facts.
It consists of statements in which emotional meaning is presented as objective truth, while the factual component is hidden, distorted, or absent.
The core issue is opacity, not emotionality.
Complex information distinguishes two components in every statement: a factual component (the real part) and a component of meaning and emotion (the imaginary part).
Both components are legitimate, but they must be explicitly separated.
This separation allows facts to be verified as facts and meaning to be expressed as meaning, without either masquerading as the other.
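To make the two-component model concrete, here is a minimal sketch in Python. The ComplexStatement type, the toy KNOWN_FACTS evidence base, and the assess function are hypothetical names introduced purely for illustration, not part of any existing system.

```python
from dataclasses import dataclass

@dataclass
class ComplexStatement:
    """A statement carrying both of its components explicitly."""
    real: str       # factual component: checkable against evidence
    imaginary: str  # meaning component: expressed, not verified

# Toy evidence base standing in for a real fact-checking backend.
KNOWN_FACTS = {
    "unemployment rose by 2% in 2023": True,
}

def assess(stmt: ComplexStatement) -> dict:
    """Verify only the real part; surface the imaginary part unjudged."""
    verdict = KNOWN_FACTS.get(stmt.real.lower())  # None means unverifiable
    label = {True: "supported", False: "contradicted", None: "unverifiable"}[verdict]
    return {
        "factual_claim": stmt.real,
        "verdict": label,
        "expressed_meaning": stmt.imaginary,  # made visible, never fact-checked
    }

post = ComplexStatement(
    real="Unemployment rose by 2% in 2023",
    imaginary="I feel the system has abandoned people like me",
)
print(assess(post))
```

The design choice here is that the imaginary part travels through the pipeline unchanged: it is neither verified nor deleted, only kept visibly distinct from the factual claim.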
By making the imaginary component visible rather than suppressing it, truth is not relativized; it is structurally reinforced.
Information systems that ignore the imaginary component risk driving meaning and emotion underground, where they resurface disguised as fact, and eroding trust in verification itself.
Systems based on complex information enable verification of the factual component, transparent expression of the emotional component, and an honest account of the relation between the two, as sketched below.
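The opacity criterion can also be expressed directly. The following sketch, again with hypothetical names and the ComplexStatement type repeated for self-containment, flags a statement as opaque when its factual slot carries unverifiable content and no meaning component is declared; it is a toy heuristic illustrating the idea, not a fake-news detector.

```python
from dataclasses import dataclass

@dataclass
class ComplexStatement:
    real: str       # factual component
    imaginary: str  # meaning component

def transparency_flag(stmt: ComplexStatement, factually_verifiable: bool) -> str:
    """Toy heuristic for the opacity criterion: unverifiable content in
    the factual slot, with no declared meaning component, suggests
    emotional meaning dressed up as objective fact."""
    if stmt.real and not factually_verifiable and not stmt.imaginary:
        return "opaque"
    return "transparent"

# Pure sentiment occupying the factual slot, nothing declared as meaning: opaque.
print(transparency_flag(
    ComplexStatement(real="The elites are destroying this country", imaginary=""),
    factually_verifiable=False,
))  # -> opaque

# The same sentiment declared as meaning, next to a checkable claim: transparent.
print(transparency_flag(
    ComplexStatement(
        real="Median wages stagnated between 2010 and 2020",
        imaginary="I feel the system has abandoned people like me",
    ),
    factually_verifiable=True,
))  # -> transparent
```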
Fake news does not arise because humans are emotional. It arises because information systems lack a formal place for emotion and meaning.
Protecting truth requires more than verification. It requires an expanded concept of information.