26 Posts
Someone I met at the recent ASQ meetings in Dallas followed up with me regarding an interesting topic. The question of ethics where AI is concerned was brought up... What do you think?
The following interesting articles were shared:
https://www.geekwire.com/2018/ethics-ai-robots-will-rise-will-rule-us/
http://www.andrew.cmu.edu/user/ddanks/pubs.html#ethics
https://nlpers.blogspot.com/2016/11/bias-in-ml-and-teaching-ai.html
5 Replies
618 Posts
Ethics should govern human behavior. In the case of AI, the outcomes are not necessarily behavioral but the result of algorithms and programming logic.
Ethics could be programmed into AI solutions as business rules or boundaries, so the burden is placed on the designers and programmers.
If the AI outcomes are not "humane", then a social responsibility concern could arise. For example, if AI regulates the prescriptions to patients currently in palliative care, the algorithm might incorporate the low life expectancy and allocate lower quality pharmaceuticals and treatment options on that basis.
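To make the "business rules or boundaries" idea concrete, here is a minimal sketch in Python of a hard boundary applied after the model, using the palliative-care scenario above; every name, field, and threshold is hypothetical.

```python
# A minimal sketch: an explicit ethical boundary wrapped around a
# model's output, so the rule (not the model) has the final word.
# All names, fields, and thresholds here are hypothetical.

def model_recommended_treatment(patient):
    # Stand-in for any learned model; imagine this tier came from
    # training on historical data (and may carry its biases).
    return {"quality_tier": 1 if patient["life_expectancy_years"] < 1 else 3}

def apply_ethical_boundary(patient, recommendation):
    # Business rule: palliative-care status must never lower the
    # standard of care. Explicit, reviewable, and testable.
    MINIMUM_TIER = 3
    if patient["palliative_care"]:
        recommendation["quality_tier"] = max(
            recommendation["quality_tier"], MINIMUM_TIER)
    return recommendation

patient = {"life_expectancy_years": 0.5, "palliative_care": True}
rec = apply_ethical_boundary(patient, model_recommended_treatment(patient))
print(rec)  # {'quality_tier': 3} -- the boundary overrides the model's 1
```

The point is that the rule lives outside the learned model, where it can be audited; that is where the designers' and programmers' burden becomes visible.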
AI could also predict future behavior based on prior patterns and choices, providing sufficient predictive information that an invasion of personal privacy could occur to the detriment of the people affected.
I recommend reading the books by Michael Lewis, particularly The Fifth Risk, Flash Boys, and The Undoing Project, to gain a perspective on the effects of AI and algorithms on human choices and behaviors.
618 Posts
I am adding Hadassah Mativetsky to this conversation, which combines SR and technologies. Also adding Nicole Radziwill for her Quality 4.0 and Software Quality Engineering perspectives.
87 Posts
Daniel Zrymiak,
You say AI outcomes are the result of algorithms and programming logic. In a strict sense this is true, but in many, if not most, cases the algorithms and logic are not generated by programmers, but by other, higher-level algorithms. These higher-level algorithms infer logic rules (or something resembling logic rules) from examples presented to them. Thus, if there is implicit bias in the training examples, there is at least a moderate probability of bias in the AI's outcomes, even though it was never deliberately programmed into the system.
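To illustrate, here is a minimal sketch of bias arriving through the training examples rather than the code; scikit-learn is my assumption here (any learner shows the same effect), and the data is synthetic.

```python
# Minimal sketch: the programmer never writes a discriminatory rule,
# but biased historical labels teach the model one anyway.
# scikit-learn assumed; data is synthetic and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
qualification = rng.normal(size=n)      # the legitimate signal
group = rng.integers(0, 2, size=n)      # a protected attribute

# Biased historical labels: qualified members of group 1 were
# rejected half the time in the past.
approved = (qualification > 0) & ~((group == 1) & (rng.random(n) < 0.5))

# No discriminatory logic is coded here...
X = np.column_stack([qualification, group])
model = DecisionTreeClassifier(max_depth=3).fit(X, approved)

# ...yet the model infers it from the examples: same qualification,
# different estimated approval probability by group.
probe = np.column_stack([np.full(100, 1.0), np.repeat([0, 1], 50)])
p = model.predict_proba(probe)[:, 1]
print(f"group 0: {p[:50].mean():.2f}, group 1: {p[50:].mean():.2f}")
```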
Some types of AI do generate explicit rules by, for example, building decision trees which can be examined. But other types of AI, such as neural networks, create decision models that cannot really be interpreted as logic. And even decision trees can contain logic that operates on variables that are not themselves inherently discriminatory, but are correlated with other variables in such a way that the result might be discrimination. Zip codes are an easy example.
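Here is a companion sketch of the zip-code point, again assuming scikit-learn and synthetic data: the protected attribute is never given to the model, yet a correlated zip code reproduces the discrimination. export_text shows that a tree's rules are at least inspectable, in a way a neural network's weights are not.

```python
# Minimal sketch: exclude the protected attribute from the features,
# and a correlated proxy (zip code) carries the bias in anyway.
# scikit-learn assumed; data is synthetic and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 2000
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)              # never shown to the model
zipcode = np.where(group == 1,                  # but encoded in geography:
                   rng.integers(3, 5, size=n),  # group 1 lives in zips 3-4
                   rng.integers(0, 3, size=n))  # group 0 lives in zips 0-2
approved = (qualification > 0) & ~((group == 1) & (rng.random(n) < 0.5))

X = np.column_stack([qualification, zipcode])   # no group column at all
model = DecisionTreeClassifier(max_depth=3).fit(X, approved)

# The learned rules never mention "group", only its proxy:
print(export_text(model, feature_names=["qualification", "zipcode"]))

# Same qualification, different zip code, different outcome.
probe = np.column_stack([np.full(2, 1.0), [1.0, 4.0]])
print(model.predict_proba(probe)[:, 1])  # approval probability by zip
```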
This is just to say that we may not be able, strictly speaking, to "program" the ethics in. We will need to be conscious of the possibility of inadvertent bias in the choice of training examples, and also test for biased results that slip through anyway.
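As one illustration of such a test, the sketch below compares positive-prediction rates across groups on audit data; the 0.8 threshold echoes the EEOC's four-fifths rule, and everything else is illustrative.

```python
# Minimal sketch of an outcome audit for "biased results that slip
# through anyway": compare positive-prediction rates across groups.
# Data and threshold are illustrative, not a complete fairness test.
import numpy as np

def disparity_ratio(predictions, groups):
    """Ratio of positive-prediction rates between the two groups."""
    rate0 = predictions[groups == 0].mean()
    rate1 = predictions[groups == 1].mean()
    return min(rate0, rate1) / max(rate0, rate1)

# Hypothetical audit sample: predictions from any model, plus groups.
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
grps  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparity_ratio(preds, grps)
print(f"disparity ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative four-fifths threshold
    print("flag for review: outcome rates differ sharply by group")
```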
Of course, this does not account for the possibility that some individuals may actually try to implement systems with systematic biases for their own nefarious purposes.
257 Posts
Our concern for ethics should go far beyond AI.
That said, it will be interesting to see how we deal with biases that are actually true (e.g., certain groups of people are generally taller/stronger/slower than others, along with other differences that people don't want to be true but actually are). The AI will pick up on those differences (not just about people), and that is real learning. But if people don't like what it learns, they will insist on tampering with the model, which will actually weaken its logical application.
Frankly, I'm not sure we're bright enough as a species to handle the potential. Look at how poorly we do at managing physical assets, which are concrete (no pun intended, really).
257 Posts
And another critical issue is who defines what is and is not ethical? Will there need to be a different AI app for each religion, each country, ...?