Nearly every industry now uses artificial intelligence to analyze metrics and automate tasks. Through AI, organizations, teams, and nonprofits can streamline operations, reduce expenditures, optimize processes, and deliver better results for their clients and consumers. But artificial intelligence is not without its challenges, especially ethical ones. As companies continue to adopt AI, it's important that board members understand the need for ethical auditing and analysis.
“Artificial intelligence” has a fairly broad definition, and today many for-profit and not-for-profit entities are using it. AI is used to protect computer systems by identifying potentially malicious behavior. It's used to sort through resumes, flagging the candidates most likely to be a fit. It's even used to enhance facial recognition, making it possible to track potentially dangerous individuals through camera systems.
But there can be a dark side to AI: a program simply automates and produces solutions; it does not question whether those solutions are good ones. In the past, for instance, AI systems have been shown to reproduce racial bias.
Because artificial intelligence is still programmed by people and still fed samples by people, there are potential opportunities for abuse. An AI program may operate based on biased information if it is fed biased information, which can be particularly dangerous for nonprofit and community-based organizations.
But that doesn't mean the advantages of using AI can't still be outstanding. Artificial intelligence can finish routine, day-to-day tasks very quickly. It can be programmed badly or trained on the wrong samples, but it doesn't make “mistakes” the way people do: it is consistent in a way that people can never be.
A modern board faces a few major challenges when it comes to AI:
To fulfill their obligations and responsibilities to an organization, board members must be somewhat versed in AI. At a minimum, a task force within the board must be able to understand how to properly utilize AI. Otherwise, an organization may not be able to utilize the technology to its fullest benefit.
At the same time, auditing has to be done to ensure that the use of artificial intelligence is correct and ethical. Because artificial intelligence can be applied to so many tasks — and because the technology is inherently amoral — it’s very easy for issues to arise.
Consider an AI system that ranks funding applicants by need. Such a system can remove human bias and potential nepotism from the equation. But suppose it begins ranking only women for funding because, in its statistical samples, women showed greater need. The AI is operating exactly as designed, yet the very mechanisms meant to remove bias have produced an unintended consequence.
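The skewed-sample failure mode can be sketched in a few lines of Python. This is a deliberately simplified illustration, not any real funding system: the groups, need scores, and the naive group-average ranking rule are all invented for demonstration.

```python
def train_group_averages(samples):
    """Learn the average historical 'need' score per group."""
    totals, counts = {}, {}
    for group, need in samples:
        totals[group] = totals.get(group, 0) + need
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

# Skewed history: group "A" happens to appear mostly in high-need cases.
history = [("A", 9), ("A", 8), ("A", 9), ("B", 4), ("B", 9)]
model = train_group_averages(history)  # {"A": 8.67, "B": 6.5}

# Applicants: (name, group, actual individual need)
applicants = [("alice", "A", 5), ("bob", "B", 8), ("carol", "A", 4)]

# Naive ranker: score each applicant by their group's learned average,
# ignoring individual circumstances entirely.
ranked = sorted(applicants, key=lambda a: model[a[1]], reverse=True)
top = [name for name, _, _ in ranked]

# Bob has the highest individual need (8) but ranks last,
# because the model learned group membership as a proxy for need.
print(top)
```

The model is "working as designed": it faithfully reflects its training samples. The problem is upstream, in the skewed data it was fed, which is exactly why auditing the inputs matters as much as auditing the algorithm.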
A board will need to address these issues, as it is the board’s responsibility not only to implement AI as necessary for the organization’s progress but also to ensure that the AI is not interfering with the organization’s ultimate goal or mission.
Artificial intelligence is everywhere. Whether it’s “dumb AI” (routine automation) or “smart AI” (machine learning algorithms), it’s unavoidable for companies wanting to improve upon and streamline their processes. At the same time, it has to be implemented responsibly and ethically to avoid issues down the line. By working with technology experts and creating the appropriate teams and task forces, board members can effectively utilize AI while reducing their risks.
See for yourself how Boardable saves time, energy, and resources.