The question is no longer “is your team using Artificial Intelligence?” but rather “how is your team using it?”. Whether sanctioned or not, someone, somewhere in your organization is likely already using Artificial Intelligence (AI), as many different systems have become readily available to neophytes and experts alike.
So, how well do you understand your team’s use of AI? The use of this powerful tool comes with ethical and legal implications for your organization. Read on to learn about key areas of concern and how to mitigate them.
Key Areas of Concern
1. Data Privacy and Security: Vast quantities of information are fed into AI models to train them to respond to user queries. Users, in turn, provide data to the AI model with each query, which is of particular concern when that information is subject to privacy and data protection regulations. For example, a retail employee might use an AI system to help a customer identify which size of pants would be the best fit, entering information such as age, weight and height to obtain recommendations. This information could also be assigned to the customer’s client file for future use, as long as their name, address and email address are also provided.

This seemingly harmless use of an AI system to provide efficient customer service is fraught with potential regulatory violations. The data provided by the customer is considered personal and, in some instances, sensitive data. Collecting this data without fully informing customers of how it could be used may constitute a violation of privacy and data protection regulations. Asking for personal identifiers beyond what is necessary to recommend pant sizes, such as a customer’s full name and residential address, could also expose the company to regulatory issues. Moreover, the information collected by employees must be securely stored by the company and retained only as long as required for its anticipated use.
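To make the data-minimization point concrete, here is a minimal Python sketch of the idea: only the fields actually needed for a sizing recommendation are forwarded to an AI system, and identifiers are dropped before the query leaves the company’s systems. The field names are hypothetical, not drawn from any particular product.

```python
# Data-minimization sketch: keep only the fields needed for a size
# recommendation; strip personal identifiers before querying an AI system.
# All field names below are illustrative.

REQUIRED_FIELDS = {"age", "weight_kg", "height_cm"}

def minimize_for_sizing(customer_input: dict) -> dict:
    """Return only the fields necessary for the sizing query."""
    return {k: v for k, v in customer_input.items() if k in REQUIRED_FIELDS}

raw = {
    "full_name": "Jane Doe",        # identifier: must not leave our systems
    "address": "123 Main St",       # identifier: must not leave our systems
    "email": "jane@example.com",    # identifier: must not leave our systems
    "age": 34,
    "weight_kg": 62,
    "height_cm": 168,
}

safe_query = minimize_for_sizing(raw)
# safe_query now contains only age, weight_kg and height_cm
```

The same filter could be applied at the point where employee-entered data is stored, so that retention is limited to what the anticipated use actually requires.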
2. Intellectual Property Rights: Because an AI model learns from the information it has been exposed to, AI-generated output carries a risk of intellectual property infringement. Consider the new logo your intern has proudly submitted, which looks clean and fresh… and strangely like something you can’t quite identify. Could it be AI-generated output that inadvertently mimics designs or styles used as input during the learning process? If so, can the logo be considered an original work? A transformation? Who holds the copyright? Current intellectual property legislation was drafted with only human creators in mind, so the answers to these questions and others remain unclear.
3. Bias and Fairness: AI systems can perpetuate and amplify biases inherent in their training datasets. If an AI system is trained on Eurocentric data that overlooks non-Western cultures, its output can be similarly biased. The bias may manifest in a minor way, such as defaulting to male pronouns in text output (male pronouns being the typical “default” in Western culture). More concerning are cases with significant ramifications, especially those that may lead to legal challenges: for example, an AI model tasked with screening candidates for medical fellowships could systematically reject female applicants due to historical gender biases in the field.
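One way organizations can monitor for the kind of screening bias described above is to compare selection rates across groups. The Python sketch below applies the “four-fifths” rule of thumb (a group is flagged if its selection rate falls below 80% of the highest group’s rate); the decision data and the choice of threshold are purely illustrative.

```python
# Illustrative bias check: compare an AI screening tool's selection rates
# across groups and flag possible disparate impact using the "four-fifths"
# rule of thumb. Sample data and threshold are hypothetical.

from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, accepted) pairs -> selection rate per group."""
    totals, accepted = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            accepted[group] += 1
    return {g: accepted[g] / totals[g] for g in totals}

def flags_disparate_impact(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold * the highest rate."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Hypothetical screening outcomes: (group, accepted?)
decisions = [("F", True), ("F", False), ("F", False), ("F", False),
             ("M", True), ("M", True), ("M", True), ("M", False)]

rates = selection_rates(decisions)       # F: 0.25, M: 0.75
impact = flags_disparate_impact(rates)   # F is flagged (0.25 / 0.75 < 0.8)
```

A check like this does not establish or rule out unlawful discrimination, but routinely running it on an AI system’s decisions makes emerging disparities visible early, before they become a legal problem.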
4. Accountability, Transparency and Institutional Knowledge: As AI improves, it will become more deeply integrated into business processes, and it will become more difficult (and more important) to understand who, or what, is making decisions and on what basis. Was this policy reviewed and edited with human eyes and a human heart, or did an AI system suggest these “improvements”? Did we always do things this way, or has our decision-making gradually shifted based on an algorithm embedded in AI-enabled software? Identifying when such shifts happen is crucial: it allows organizations to maintain a history of their corporate culture and to understand the origins of their expertise and knowledge.
Mitigation Strategies
While fully addressing the above issues requires a multifaceted approach, there is one straightforward way to begin managing the legal and ethical risks of increasing AI adoption in the workplace: develop a comprehensive AI Policy for your company and ensure that all employees are aware of, and trained on, that policy.
The AI Policy should clearly define the company’s position on the acceptable use of AI, taking into account applicable legislation and the company’s professional and ethical values. It should establish clear principles on data management and confidentiality; set out explicit principles on the ethical use of AI, including fairness, transparency and bias monitoring, to mitigate or prevent undesired output; and provide clear guidelines on the creation of AI-generated content in light of intellectual property law. Given the speed at which the technology is evolving, the AI Policy should be reviewed regularly to keep pace with a regulatory landscape that we expect to change quickly.
Should you require assistance in drafting a custom AI Policy for your organization, our specialized team would be happy to help. Please don’t hesitate to reach out!