The Moat Is Trust, Or Maybe Just Responsible AI

Minimizing the Risks of Generative AI

OpenAI’s ChatGPT* created a frenzy in the generative AI landscape by achieving record-breaking growth, attracting over one million users within its first week. While big tech companies like Google, Microsoft, and Facebook are competing in the large language model (LLM) race, small startups are also making headway. A growing problem is balancing the secrecy needed for competitive advantage with the transparency needed for safety. Unlike some LLM developers, OpenAI has not released its training data or GPT-4* architectural details, which has drawn criticism from some quarters.

A recently leaked Google memo highlighted concerns that neither Google nor OpenAI has a durable competitive advantage, or “moat,” around LLM technology. The memo emphasized that open-source alternatives are smaller, faster, cheaper, and more customizable. (We demonstrate this in Create Your Own Custom Chatbot in this issue of The Parallel Universe.)

LLMs like ChatGPT have demonstrated impressive capabilities, but the rapid emergence of generative AI models raises concerns about potential harmful effects. While the ongoing debates over large vs. small models and open vs. closed systems are important, we must recognize that performance and accuracy are not the only considerations. Factors such as fairness, explainability, sustainability, and privacy must also be weighed. Upholding responsible AI practices will ultimately determine the societal value of AI. Regulations will likely be introduced to require AI applications to comply with ethical best practices. The forthcoming European Union Artificial Intelligence Act, expected to pass later this year, will be the world's first set of regulations governing AI systems. The legislation aims to foster a human-centric and ethical approach to AI by introducing guidelines for transparency and risk management.

Generative AI models learn from vast amounts of data scraped from the internet, data that reflects the biases present in society, and the models can inadvertently perpetuate and amplify those biases. LLMs can also be manipulated to generate or spread misinformation, phishing emails, or social engineering attacks. Malicious actors can intentionally train models on biased or false information, leading to the dissemination of misleading content at scale. Such models can be used to create convincing deepfake video and audio content. For example, a recent AI-generated hoax of an explosion at the Pentagon went viral. Such fake news is emerging as a major threat to the upcoming US elections: “It’s going to be very difficult for voters to distinguish the real from the fake. And you could just imagine how either Trump supporters or Biden supporters could use this technology to make the opponent look bad,” said Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation.

LLMs often “hallucinate” and generate inaccurate information, which can be particularly problematic in industries like healthcare, where models can influence diagnostic and therapeutic decisions and potentially harm patients. Even though AI hallucinations are a known phenomenon, people continue using LLMs and uncritically accepting their pronouncements. In a recent example, a lawyer was found to have used ChatGPT for his legal research when it emerged that none of the decisions or quotations cited in his brief existed. They were made up by ChatGPT.

The need for responsible AI has never been greater. In an interview at the Commonwealth Club, Professor Stuart Russell said about ChatGPT: “...in a sense, we are conducting a huge experiment on the human race with no informed consent whatsoever.” A group of over 1,000 AI experts, including Professor Russell and Elon Musk, has called for a pause in the deployment of LLMs. Lawmakers, industry leaders, and researchers agree that guardrails around AI and strict regulations to ensure its safe deployment are a must. Industry leaders admit that AI technology might be an existential threat to humanity. A statement released by the Center for AI Safety says that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” It was signed by some of the biggest names in AI: Geoffrey Hinton (Emeritus Professor, University of Toronto), Yoshua Bengio (Professor, University of Montreal), Sam Altman (CEO of OpenAI), Demis Hassabis (CEO of Google DeepMind), and Dario Amodei (CEO of Anthropic). To mitigate the risks of AI, responsible development, careful dataset curation, ongoing research, and robust ethical guidelines are essential. It is crucial to ensure transparency, accountability, and regular audits of LLMs to address biases, reduce misinformation, and protect user privacy.

It is evident that companies and individuals working on AI technology need to make sure their software is developed and deployed according to ethical AI principles. The open-source Intel® Explainable AI Tools allow users to run post hoc model distillation and visualization to examine the predictive behavior of both TensorFlow* and PyTorch* models. They are designed to help users detect and mitigate fairness and interpretability issues. For example, our model card generator is an open-source Python* module that allows users to create interactive HTML reports containing model details and a quantitative analysis of performance and fairness metrics for both TensorFlow and PyTorch models. These model cards can be part of a traditional end-to-end platform for deploying ML pipelines for tabular, image, and text data to promote transparency, fairness, and accountability.
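As a rough illustration of the kind of per-group performance and fairness metrics such a model card surfaces, the sketch below trains a toy classifier on synthetic tabular data and reports accuracy and positive-prediction rates broken out by a sensitive attribute. It uses plain scikit-learn and pandas rather than the Intel Explainable AI Tools API, and the feature, label, and “group” column names are invented for the example.

```python
# Illustrative only: the kind of per-group metrics a model card report contains.
# Uses plain scikit-learn/pandas, not the Intel Explainable AI Tools API.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic tabular data with a hypothetical sensitive attribute "group".
df = pd.DataFrame({
    "feature_1": rng.normal(size=n),
    "feature_2": rng.normal(size=n),
    "group": rng.integers(0, 2, size=n),
})
df["label"] = (df["feature_1"] + 0.5 * df["group"]
               + rng.normal(scale=0.5, size=n) > 0).astype(int)

train, test = train_test_split(df, test_size=0.3, random_state=0)
features = ["feature_1", "feature_2"]
model = LogisticRegression().fit(train[features], train["label"])
test = test.assign(pred=model.predict(test[features]))

# Per-group accuracy and positive-prediction rate; a large gap in the latter
# is a demographic-parity issue worth flagging in the model card.
for group, part in test.groupby("group"):
    acc = accuracy_score(part["label"], part["pred"])
    print(f"group={group}  accuracy={acc:.3f}  positive_rate={part['pred'].mean():.3f}")
```

A report like this, regenerated on every retraining run, makes it harder for a fairness regression to slip into production unnoticed.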

LLMs are typically trained on large public datasets and then fine-tuned on potentially sensitive data (e.g., financial and healthcare). Technologies like our Open Federated Learning (OpenFL) incorporate confidential computing so that LLMs can be safely fine-tuned on sensitive data, which in turn improves the generalizability of models while reducing hallucinations and bias.
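The core pattern underneath this is federated averaging: each site fine-tunes a copy of the model on its own private data, and only weight updates travel to an aggregator. The sketch below is a minimal, framework-agnostic illustration of that loop in PyTorch; it is not the OpenFL API, it omits the confidential-computing layer, and the model and data loaders are placeholders you would replace with a real fine-tuning setup.

```python
# Minimal sketch of federated averaging (FedAvg), the pattern that frameworks
# like OpenFL orchestrate across sites. This is NOT the OpenFL API and omits
# secure aggregation/attestation; the model and loaders are placeholders.
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """A site fine-tunes a copy of the global model on its private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()  # only weights leave the site, never raw data

def aggregate(state_dicts):
    """The aggregator averages the floating-point parameters from all sites."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        if avg[key].is_floating_point():
            avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

def federated_finetune(global_model, site_loaders, rounds=5):
    """Run several rounds of local training followed by weight averaging."""
    for _ in range(rounds):
        updates = [local_update(global_model, loader) for loader in site_loaders]
        global_model.load_state_dict(aggregate(updates))
    return global_model
```

A real deployment would rely on the framework's own aggregator and collaborator roles, plus its confidential-computing protections, rather than a hand-rolled loop like this.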

AI has the potential to help in economically disadvantaged areas where there is a shortage of critical expertise. Presently, LLMs require tremendous computing power and are typically executed in the cloud or on expensive on-premises servers with multiple accelerators. We are focused on reducing the computational complexity of LLMs and making LLM-based inference more efficient so that advanced AI techniques will be available in areas with no cloud connectivity and on lower-cost edge computing devices.
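As one concrete example of that direction, post-training quantization shrinks a model's memory footprint and speeds up CPU inference by storing weights as 8-bit integers. The sketch below applies PyTorch's built-in dynamic INT8 quantization to a small open causal LLM; it illustrates the general technique rather than any Intel-specific optimization stack, and the model name is just an example.

```python
# General illustration of post-training dynamic INT8 quantization for cheaper
# CPU inference; not tied to any Intel-specific optimization stack.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # small example model; larger LLMs work similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Store nn.Linear weights as INT8; activations are quantized on the fly at runtime.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

prompt = "Responsible AI means"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = quantized.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Dynamic quantization is only one lever; pruning, distillation, and smaller task-specific architectures are complementary ways to bring inference onto edge devices.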