Just when you thought that business couldn’t change any faster, ChatGPT burst onto the scene. Now it seems generative AI is everywhere. AI is creating images, composing music, writing books, and transforming medicine. And it’s reinventing how companies do business and how employees do their jobs.
Many companies have not yet capitalized on this powerful technology and are scrambling to catch up. The learning curve, however, can be steep, particularly when it comes to the risks AI poses.
Although the technology is new, the list of companies that have been burned or embarrassed by AI-related mishaps is significant and growing. For instance, a court ordered Air Canada to pay damages to a grieving passenger after the airline’s chatbot misled him about bereavement fares. Sports Illustrated allegedly published articles with fabricated bylines and writer profiles, outraging its hardworking (human) journalists. And iTutorGroup settled a lawsuit with the U.S. Equal Employment Opportunity Commission for $365,000 after its software systematically rejected male applicants over 60 and female applicants over 55.
How can companies capitalize on AI without falling into a trap? Here is a list of factors to consider as organizations develop their AI strategies:
Data quality: Is the data used for training AI models accurate and representative? A high-quality data set helps ensure that the model’s output is reliable and on target.
Data bias: Are there biases in the training data that may lead to discriminatory or unfair outcomes? There’s a time-honored concept in computer science known as GIGO: garbage in, garbage out. With AI, it matters more than ever: a model trained on skewed data will faithfully reproduce that skew at scale.
Data privacy: Are there risks of unauthorized access, use, or disclosure of sensitive or personally identifiable information? AI has the potential to infer and disseminate sensitive information, such as an individual’s preferences and habits.
Algorithmic bias: Could AI models produce biased or discriminatory results based on factors such as race, gender, or socioeconomic status? Given AI’s ability to discern patterns in enormous data sets, algorithms can absorb and perpetuate hidden biases that might never raise a red flag for a human analyst.
Model interpretability: Are AI models transparent and interpretable, or do they operate as “black boxes” with unclear decision-making processes? An inability to understand how a model decides makes it difficult to identify or fix problematic output. If a model rejects certain loan or job applicants, for instance, it’s important to know why. For the moment, “explainable AI” is still an emerging field, so companies should take care when using models for high-stakes applications.
Model performance: Is the performance of AI models consistent, accurate, and reliable across different data sets and scenarios? Consistency and reliability signal that models are adaptable and are truly learning underlying trends rather than merely memorizing examples.
Legal compliance: Does AI adoption comply with relevant laws, regulations, and industry standards governing data protection, privacy, and discrimination? Top concerns include litigation over copyright infringement and the EU’s Artificial Intelligence Act, passed in March 2024, which regulates practices such as resume scanning.
Integration challenges: Are there risks associated with integrating AI solutions into existing systems, processes, and workflows? Companies need to weigh a range of concerns, including compatibility, security, and over-reliance on the technology.
Security vulnerabilities: Are AI systems vulnerable to cyber threats, hacking, or unauthorized access that could compromise data integrity or system reliability? In addition to traditional cyberattacks perpetrated via AI infrastructure (such as cloud servers), AI systems are vulnerable to model theft and can be manipulated by adversaries to produce erroneous or damaging responses.
Performance degradation: Could AI systems experience performance degradation, downtime, or disruptions that affect business operations? Models may be vulnerable to a host of technical issues, such as concept drift, data drift, or model decay.
Change management: Is there a risk of employee, stakeholder, or customer resistance? The risk is real and must be managed. In a 2023 CMC/AMA Global survey, 29% of respondents said they didn’t trust managers to use AI fairly and transparently. Another 37% were unsure.
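To make the bias and compliance concerns above concrete: hiring tools are often screened with the EEOC’s “four-fifths rule,” which flags potential adverse impact when one group’s selection rate falls below 80% of the most-selected group’s rate. Here is a minimal sketch of such a check; the group names and numbers are purely illustrative, not drawn from any real case:

```python
# Sketch of a "four-fifths rule" adverse-impact check, a common
# screening heuristic for automated hiring tools.
# All data below is illustrative.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (num_selected, num_applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate falls below `threshold`
    (80% by default) of the highest group's rate, with their ratios."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Illustrative numbers: 60 of 100 younger applicants selected,
# but only 20 of 100 applicants aged 55 and over.
flags = adverse_impact({"under_55": (60, 100), "55_and_over": (20, 100)})
print(flags)  # the older group's ratio is about 0.33, well below 0.8
```

A check like this is only a first-pass screen, not a legal determination, but running it routinely on model output is exactly the kind of concrete safeguard the checklist above calls for.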
This long list of risks may make AI seem too hazardous to handle. But business is rarely, if ever, risk-free. Moving slowly to minimize risk carries its own very real risk: that competitors will gain an advantage by capitalizing on the technology before you do. Educating your team on the risks and assigning capable employees to address them can help you move forward as safely as possible. As the adage goes, a ship in a harbor is safe, but that is not what ships are built for.