The invention of the light bulb fundamentally changed our biology, just as LLMs will fundamentally change our perception of trust.

Artificial intelligence, specifically the kind built on large language models, is currently in the declining phase of its hype cycle, but it will not simply fade away into the obscure corners of the internet once its trendiness diminishes. Instead, it is becoming increasingly integrated into many aspects of our lives, even where it is not the most suitable solution.

Although AI may one day become so advanced that it dominates or even destroys humanity, in these early days the threat lies in its pervasive presence and the misinformation it produces, constantly introducing errors and distortions into our collective knowledge. The utopian-versus-dystopian discussions surrounding technology serve their intended purpose of diverting attention from its actual impact on our present-day lives. AI has undeniably made a significant impact, especially since the introduction of ChatGPT just over a year ago, but although ChatGPT has surpassed its creators' modest expectations and the use of generative AI continues to expand, it is hardly a breakthrough AI super-intelligence.

However, ChatGPT has undeniably contributed to a digital landscape riddled with easily overlooked factual errors and minor inaccuracies. The agencies tasked with sowing confusion and misinformation, whether for their own ends or to subvert the stability and security of foreign states, are undoubtedly enjoying and facilitating the rise of a misinformation economy built on LLMs.

LLM-based AI models differ in their approach from conventional channels of information dissemination. They provide information casually and constantly, without self-reflection, and with an authoritative but deceptive confidence that users are all too ready to accept, having relied for many years on reasonably reliable search results from Google and other search engines. Those searches have always carried some uncertainty, but the outcome has been relatively benign: results have been comparatively stable and factual because producing misinformation was costly. LLMs change all of that. Whilst we have become accustomed to trusting the information that appears when we type a query into a search box, the automation of LLMs such as ChatGPT allows a significant amount of content to be created instantly according to defined criteria, producing highly plausible but false results. Very few people are mentally equipped to immediately question and investigate these dubious sources, and no current mechanisms exist to properly address the consequences.