No job in the United States has seen more hiring growth in the last five years than artificial-intelligence specialist, a position dedicated to building AI systems and figuring out where to implement them.
But is that career growth outpacing our ability to address the ethical issues that arise when machines make decisions that affect our lives and potentially invade our privacy?
Maybe so, says Dr. Steven Mintz, author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior.
“Rules of the road are needed to ensure that artificial intelligence systems are designed in an ethical way and operate based on ethical principles,” he says. “There are plenty of questions that need to be addressed. What are the right ways to use AI? How can AI be used to foster fairness, justice and transparency? What are the implications of using AI for productivity and performance evaluation?”
Those who take jobs in this growing field will need to play a pivotal role in working out those ethical issues, he says, and a rough global consensus is already emerging about the principles that should govern AI.
Those principles include:
- Transparency. People affected by the decisions a machine makes should be able to know what goes into that decision-making process.
- Non-maleficence. Never cause foreseeable or unintentional harm using AI, including discrimination, violation of privacy, or bodily harm.
- Fairness. Monitor AI to prevent or reduce bias. How could a machine be biased? A recent National Law Review article gave this hypothetical example: a financially focused AI might decide that people whose names end in vowels are a high credit risk. That could negatively affect people of certain ethnicities, such as those of Italian or Japanese descent.
- Accountability. Those involved in developing AI systems should be held accountable for their work.
- Privacy. An ethical AI system promotes privacy both as a value to uphold and a right to be protected.
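The vowel-name hypothetical above can be made concrete with a few lines of deliberately naive code. This is an illustrative sketch only: the scoring rule and the names are invented for this example, not drawn from any real system.

```python
def naive_risk_score(name: str) -> str:
    """Hypothetical rule from the example above: flag applicants whose
    surnames end in a vowel as 'high' credit risk. Ethnicity is never an
    input, yet the rule acts as a proxy for it."""
    return "high" if name.strip().lower()[-1] in "aeiou" else "low"

# Invented sample surnames for illustration
applicants = ["Rossi", "Tanaka", "Smith", "Moreau"]
for name in applicants:
    print(f"{name}: {naive_risk_score(name)}")
```

Because vowel-ending surnames are more common in some ethnic groups, a rule like this produces disparate outcomes even though no protected attribute appears anywhere in the code, which is why the principle calls for actively monitoring AI systems for bias rather than assuming neutrality.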