Yep, you got it! I am referring to the numerous GPT models out there. Whilst there is enormous value to be created through the application of LLMs (Large Language Models), we are still at a very early stage of both their development and deployment. So, in this article I want to give you a heads-up: can we trust these models? Well, that's not guaranteed, so don't throw out your fact-checking just yet!
AI language models are a form of “generative AI”: models that create new content in response to prompts, based on their training data. That training data can itself include biases, confidential information about individuals, and material covered by existing intellectual property claims and rules. Language models can therefore produce discriminatory or rights-infringing outputs, and can leak confidential information.
AI language models can facilitate and amplify the production and distribution of fake news and other manipulated language-based content that may be impossible to distinguish from factual information, raising risks to democracy, social cohesion, and public trust in institutions. AI “hallucinations” also occur, where models generate incorrect outputs yet articulate them convincingly. Combined with mis- and disinformation, language models can deliver individually tailored deception on a broad scale, something that traditional approaches such as fact-checking, detection tools, and media literacy education cannot readily address.
One particular set of concerns arises from language models that can take actions directly, such as sending emails, making purchases, and posting on social media. But even so-called “passive” question-answering systems can affect the world by influencing human behaviour, for example by changing people's opinions, giving them inappropriate medical advice, or convincing them to take certain actions.
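To make that distinction concrete, here is a minimal, entirely hypothetical Python sketch of the difference between a “passive” question-answering system and an “agentic” one. Nothing here is a real library API: `fake_model_reply` stands in for an actual LLM call, and the email “tool” is a stub that only prints.

```python
# A minimal, hypothetical sketch contrasting "passive" and "agentic" uses of a
# language model. fake_model_reply stands in for a real LLM API call, and the
# email "tool" is a stub that only prints; nothing here is a real library API.

def fake_model_reply(prompt: str) -> str:
    """Placeholder for a model call; returns a canned 'action' for the demo."""
    return 'ACTION send_email alice@example.com "Meeting moved to 3pm"'

# Passive use: the model's text goes straight back to a human, who decides
# what (if anything) to do with it.
def passive_qa(question: str) -> str:
    return fake_model_reply(question)

# Agentic use: the model's text is parsed and executed as an action, so the
# model's output directly causes a side effect in the world.
def send_email(to: str, body: str) -> None:
    print(f"[side effect] email sent to {to}: {body}")

TOOLS = {"send_email": send_email}

def agent_step(goal: str) -> None:
    reply = fake_model_reply(goal)
    if reply.startswith("ACTION"):
        _, tool_name, recipient, body = reply.split(maxsplit=3)
        TOOLS[tool_name](recipient, body.strip('"'))

agent_step("Reschedule my meeting")       # triggers a (stubbed) side effect
print(passive_qa("When is my meeting?"))  # only produces text
```

The same model output is harmless when it is merely text handed to a person, and consequential when a loop executes it; much of the risk lies in that executing loop rather than in the model itself.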
Language use is, after all, the primary means by which human political leaders and dictators affect the world.