Artificial Intelligence (AI) and academic study
There are two major issues with AI generators: bias and hallucinations. Both significantly undermine the reliability and fairness of AI-generated content.
Bias in AI Generators
Bias in AI refers to systematic errors or distortions in the results returned by AI systems. It most often manifests as unfair or prejudiced content that mirrors human biases present in the training data.
Sources of AI Bias
Examples of AI Bias
Impact of Bias
AI bias can perpetuate and amplify existing social inequalities, leading to unfair treatment of individuals from minority groups and reinforcing discriminatory practices.
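To make the idea of systematic error concrete, the sketch below computes a demographic parity gap, one simple fairness measure, on hypothetical model decisions. All data, group labels, and numbers here are invented for illustration; real bias audits use established fairness toolkits and far richer metrics.

```python
# Toy fairness audit: demographic parity gap on hypothetical model decisions.
# All names and numbers below are invented for illustration only.

def positive_rate(decisions):
    """Fraction of cases receiving the positive outcome (e.g., loan approved)."""
    return sum(decisions) / len(decisions)

# Hypothetical binary decisions (1 = approved) from some model, split by group.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3 of 8 approved

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Group A approval rate: {positive_rate(group_a):.0%}")
print(f"Group B approval rate: {positive_rate(group_b):.0%}")
print(f"Demographic parity gap: {gap:.0%}")  # a large gap is one signal of bias
```

A persistent gap like this does not prove discrimination on its own, but it flags where a system's outputs deserve closer scrutiny.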
Hallucinations in AI Generators
AI hallucinations are instances in which an AI generator fabricates information and presents false or misleading content as fact; a common example is an invented but plausible-looking academic citation.
Characteristics of AI Hallucinations:
Causes of Hallucinations:
Types of Hallucinations:
Addressing Bias and Hallucinations
Several methods can be applied to mitigate these effects:
Bias Reduction:
Hallucination Reduction:
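One detection approach from the research cited in the sources below is semantic entropy (Farquhar et al., 2024): sample several answers to the same prompt, group them by meaning, and treat high disagreement as a sign of likely confabulation. The sketch below is only a minimal illustration of that idea; the paper clusters answers by bidirectional entailment using a natural language inference model, whereas this toy version uses normalised string matching as a crude stand-in.

```python
import math
from collections import Counter

# Minimal sketch of semantic-entropy hallucination detection (Farquhar et al., 2024).
# The real method clusters sampled answers by bidirectional entailment with an NLI
# model; normalised string matching below is a crude stand-in for that step.

def semantic_entropy(sampled_answers):
    """Entropy over meaning-clusters of sampled answers; high values suggest confabulation."""
    clusters = Counter(answer.strip().lower() for answer in sampled_answers)
    total = sum(clusters.values())
    return -sum((n / total) * math.log(n / total) for n in clusters.values())

# Hypothetical samples for the same prompt, e.g. "Who wrote Middlemarch?"
consistent = ["George Eliot", "george eliot", "George Eliot", "George Eliot"]
scattered = ["Jane Austen", "George Eliot", "Charlotte Brontë", "Thomas Hardy"]

print(f"Consistent samples: entropy = {semantic_entropy(consistent):.3f}")  # 0.000
print(f"Scattered samples:  entropy = {semantic_entropy(scattered):.3f}")   # ~1.386
```

Answers whose entropy exceeds a tuned threshold can then be withheld or flagged for human review rather than presented to the user as fact.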
Conclusion
Bias and hallucinations are two of the most significant challenges in developing and deploying AI generators. Acknowledging these problems is the first step toward responsible AI. Progress is being made on both fronts, but continued research and vigilance are needed to ensure that AI systems produce fair, accurate, and reliable outputs.
Sources:
Farquhar, S., Kossen, J., Kuhn, L. and Gal, Y. (2024) ‘Detecting hallucinations in large language models using semantic entropy’, Nature, 630, pp. 625–630. Available at: https://doi.org/10.1038/s41586-024-07421-0
MIT Sloan Teaching and Learning Technologies (2024) ‘When AI Gets It Wrong: Addressing AI Hallucinations and Bias’. Available at: https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/ (Accessed: 13 August 2024).
Perplexity AI (2024) Perplexity AI response to Kathy Neville, 13 August.
University of Oxford (2024) Major research into ‘hallucinating’ generative models advances reliability of artificial intelligence. Available at: https://www.ox.ac.uk/news/2024-06-20-major-research-hallucinating-generative-models-advances-reliability-artificial?ref=image (Accessed: 13 August 2024).
Further Reading:
ChatGPT and AI text generators: Should academia adapt or resist?
How can we counteract generative AI’s hallucinations?
‘They’re all so dirty and smelly’: study unlocks ChatGPT’s inner racist.
AI language models are rife with different political biases.
Educause quickpoll results: Adopting and adapting to generative AI in higher ed tech.
Humans are biased. Generative AI is even worse.
Time for class 2023: Bridging student and faculty perspectives on digital learning.
Tackling bias in artificial intelligence (and in humans).
Ageism, sexism, classism, and more: 7 examples of bias in AI-generated images.
When A.I. chatbots hallucinate. The New York Times.