This report was prepared by Alejo Jose G. Sison, Marco Tulio Daza, Roberto Gozalo-Brizuela and Eduardo C. Garrido-Merchan as an input for the President’s Council of Advisors on Science and Technology (PCAST) Working Group on Generative AI. The discussion took place on May 19th, 2023 and covered two topics: 1) AI Enabling Science and 2) AI Impacts on Society.
The report aims to promote the beneficial deployment of generative AI, specifically ChatGPT and similar chatbots, and to mitigate its risks by addressing PCAST’s questions:
- What are the most significant technical and societal challenges and risks posed by generative AI, such as ethical, legal, privacy, security, safety, and trust issues?
- What are the best practices and principles for designing, developing, deploying, and governing generative AI systems in a responsible and trustworthy manner?
The meeting was livestreamed, and a recording is accessible via the PCAST website.
ChatGPT is a chatbot that can generate coherent and diverse texts on almost any topic. Since its rollout in November 2022, a host of ethical issues has arisen from its use: bias, privacy, misinformation, and job displacement, among others. In our latest research paper, “ChatGPT: More than a ‘Weapon of Mass Deception’: Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective” (preprint available here: https://bit.ly/3MYLSGA), we explore some of the ethical issues that arise from ChatGPT and suggest responses based on the Human-Centered Artificial Intelligence (HCAI) framework to enhance human flourishing.
ChatGPT is a powerful text generation model that can produce original and human-like texts from natural language prompts. It is highly user-friendly and can mimic different language styles. Furthermore, it has lowered the marginal cost of producing new, original, human-sounding texts to practically zero (once the costs of construction, training, maintenance, and so forth are covered). However, ChatGPT also poses challenges, such as the risk of automation bias and the lack of verification of its accuracy.
We have identified the greatest risk associated with ChatGPT as its potential for misuse in generating and spreading misinformation and disinformation and in enabling criminal activities that rely on deception. This makes ChatGPT a potential “weapon of mass deception” (WMD). We classify the ethical risks we have found into three categories; our article addresses each of these topics in greater depth.
The above-mentioned misuses of ChatGPT show how distant it is from HCAI’s goals of a reliable, safe, and trustworthy model. This is not the “fault” of the model but of the humans involved in its design, deployment, and use, since only they can harbor malicious intent or fail to exercise due care. That is why it would be erroneous to call ChatGPT racist, sexist, and so forth simply because its outputs include terms of abuse: it merely reproduces words from its training data based on statistical correlations, without intention. Otherwise, all dictionaries would be racist, sexist, and so forth, something most reasonable people would deny. To say so would be yet another example of anthropomorphizing a machine.
The main reason ChatGPT is unreliable is that it hallucinates and produces “bullshit”: its outputs are neither verified nor validated and are often false, despite sounding confident. When prompted, ChatGPT will come up with answers, but users cannot know whether the model is simply “making them up.” Users cannot verify the accuracy of ChatGPT outputs based solely on their interactions. This violates HCAI software engineering practices for reliable systems, which call for verification and validation testing, bias testing to enhance fairness, and explainable user interfaces, among other measures.
ChatGPT is not safe. Some worry that its manipulative responses could induce people to inflict physical harm on themselves or others. Possibilities of psychological harm are not far-fetched, especially for the vulnerable, through the cultivation of unhealthy attachments. And occasions for widespread social harm are evident in ChatGPT-enabled disinformation and criminal scams.
In our paper, we address these ethical challenges related to ChatGPT by employing both technical and non-technical approaches. However, it’s important to acknowledge that no combination of solutions can eliminate the risk of deception and other malicious activities involving ChatGPT. Therefore, we focus on reducing harm.
Engaging with the ethical challenges of ChatGPT
The following list presents our proposals to mitigate the risk of ChatGPT being misused to produce disinformation and deception. Our article addresses each of these topics in greater depth.
- Technical resources:
- Statistical watermarking (see the sketch after this list)
- Identifying AI stylemes
- ChatGPT detectors: GPTZeroX, DetectGPT, OpenAI Classifier
- Fact-checking websites
- Non-technical resources:
- Enforce terms of use, content moderation, safety & overall best practices
- Transparency about what ChatGPT can and cannot (or should not) do
- Educator considerations: age, educational level, and domain-appropriate use under supervision
- “Humans in the loop” (HITL) & knowledge of principles of human-AI interaction
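To make the first technical item above more concrete, the sketch below illustrates the core idea behind statistical watermarking as it is commonly described in the research literature: a generator pseudo-randomly favors a “green list” of tokens seeded by the preceding token, and a detector counts green-list hits and applies a one-proportion z-test. This is a minimal, hypothetical Python illustration under our own assumptions (the function names and hashing scheme are ours); it does not reflect any particular vendor’s implementation.

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Hypothetical helper: pseudo-randomly select a "green" subset of the vocabulary,
    # seeded by the previous token. A watermarking generator would nudge sampling
    # toward these tokens; ordinary human text shows no such preference.
    ranked = sorted(vocab, key=lambda t: hashlib.sha256((prev_token + "|" + t).encode()).hexdigest())
    return set(ranked[: int(len(vocab) * fraction)])

def watermark_z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Count how many tokens fall in the green list seeded by their predecessor,
    # then compare that count with what chance alone would predict (a z-test).
    n = len(tokens) - 1
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev, vocab, fraction))
    expected = fraction * n
    std_dev = (n * fraction * (1 - fraction)) ** 0.5
    return (hits - expected) / std_dev

# A large positive z-score suggests watermarked, machine-generated text;
# values near zero are consistent with unwatermarked (e.g., human-written) text.
```

Note that watermarks must be embedded at generation time; deployed detectors such as DetectGPT or GPTZero instead rely on related statistical signals (for example, perplexity or log-probability curvature) computed over the text itself.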
An HCAI assessment
Additionally, we made an assessment using the Six Human-Centered AI Challenges framework proposed by Ozmen Garibay et al. (2023) for creating AI technologies that are human-centered, that is, ethical, fair, and enhancing of the human condition. In essence, these challenges advocate for a human-centered approach to AI that (1) is centered on human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting human cognitive capacities. Our findings show that ChatGPT’s developers have failed to address four of these challenges.
- First, ChatGPT falls short in responsible design: its developers have failed to determine who is responsible for what and to clarify both legal and ethical liabilities in the user interface.
- Second, ChatGPT lacks respect for privacy, which includes various rights such as being left alone, controlling personal information, and protecting intimacy. ChatGPT’s conversations can unintentionally infringe on these rights, increasing the risk of causing psychological harm.
- Third, it does not follow human-centered design principles, failing to calibrate risks properly and focusing more on maximizing AI objective functions than on human and societal well-being.
- Lastly, ChatGPT has not been subjected to appropriate governance and oversight, and its developers have failed to consider current and prospective regulatory standards and certifications.
To make the best use of ChatGPT, developers should prioritize human development and well-being by moving beyond usability to consider emotions, beliefs, preferences, and physical and psychological responses, effectively augmenting and enhancing the user experience while preserving dignity and agency.
Best uses for ChatGPT
Finally, we have identified the best uses for ChatGPT, including creative writing tasks such as brainstorming, idea generation, and text style transformation. Non-creative writing applications, such as spelling and grammar checks, summarization, copywriting and copy editing, and coding assistance, can also benefit from ChatGPT. Lastly, ChatGPT can be useful in teaching and learning contexts, such as preparing lesson plans and scripts, serving as a critical thinking tool for assessment and evaluation, and acting as a language tutor and writing quality benchmark.
- Creative writing: brainstorming, ideas generator, text style transformer
- Non-creative writing: spelling & grammar check, summarization, copywriting & copy editing, coding assistance
- Teaching & learning: preparing lesson plans & scripts, critical thinking tool (assessment & evaluation), language tutor & writing quality benchmark
We conclude that the greatest threat is that generative AI platforms could be used as a “weapon of mass deception.” ChatGPT cannot do this by itself, but it can be employed by humans for this purpose. ChatGPT can be misinformed, that is, it can provide erroneous data because of limitations in its training set and algorithm, but it cannot intentionally mislead. Only humans are innately interested in the truth and can grasp the truth; only humans suffer from lies and disinformation. For this reason, we clearly distinguish between fiction, which caters to our imagination and provides entertainment by being unmoored from the truth, and non-fiction, from which we draw the scientific knowledge on which we base our social interactions. It is therefore crucial that we use ChatGPT and similar chatbots with care and responsibility and not let them deceive us or others.
Generative AI and chatbots such as ChatGPT are powerful but perilous tools that require careful and responsible use. Users should be aware of the risks and limitations of these technologies, and developers and regulators should establish ethical standards and guidelines for their design and use, ensuring that they respect human dignity, privacy, and security.
We strongly urge the government, legislators, companies, universities, and educational institutions to incorporate Artificial Intelligence Literacy programs at all educational levels that address the challenges and opportunities of this technology. Additionally, we recommend promoting Media Literacy programs that help users critically evaluate the information they consume online. Today, the most immediate risk society faces is misinformation at scale, which can have serious consequences for democracy and social cohesion.
The manuscript "ChatGPT: More than a "Weapon of Mass Deception" Ethical challenges and responses from the Human-Centered Artificial Intelligence (HCAI) perspective" by Alejo Jose G. Sison, Marco Tulio Daza, Roberto Gozalo-Brizuela and Eduardo C. Garrido-Merchan, is under review by the International Journal of Human-Computer Interaction. The preprint is available here: https://bit.ly/3MYLSGA.