There's a technical issue that needs to be thoroughly debugged first.
AI programs exhibit racial and gender biases, research reveals
The Guardian (UK)
Can "not being racist" be implemented algorithmically?...
These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.
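The research behind headlines like this typically measures bias as a gap in learned word associations, e.g. how much closer one group's names sit to "pleasant" words than another's in a word-embedding space. A toy sketch of that kind of measurement, with entirely invented 3-d vectors standing in for what a real model would learn from text:

```python
# Toy illustration only: all vectors are made up. A real study would use
# embeddings learned from a large corpus (e.g. GloVe or word2vec).
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings; the group labels are placeholders.
vectors = {
    "pleasant":   (0.9, 0.1, 0.0),
    "unpleasant": (0.1, 0.9, 0.0),
    "name_a":     (0.8, 0.2, 0.1),  # name typical of one demographic group
    "name_b":     (0.2, 0.8, 0.1),  # name typical of another group
}

def association(word):
    """How much closer `word` sits to 'pleasant' than to 'unpleasant'."""
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

# A nonzero gap means the model has absorbed different associations
# for the two groups -- the kind of effect the study describes.
bias_gap = association("name_a") - association("name_b")
print(f"association gap: {bias_gap:.3f}")
```

Nothing in the training objective asks for this gap; it falls out of the statistics of the text the model was trained on, which is why the article argues it appears unless explicitly corrected for.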
Deciding, in general, whether an arbitrary program is free of such bias seems to me akin to the halting problem.