Fringe FM conversation on AI Ethics

A few weeks ago, I had a lively conversation with Matt Ward for the Fringe FM podcast, where we discussed artificial intelligence, its applications, and the ethical implications thereof. 

During the conversation I mentioned offhand an article I’d read recently, which suggested that a program to identify criminals using face recognition in CCTV footage suffered from a high rate of misidentifications. I couldn’t remember the exact rate at the time and said “50% or 90%”. It turns out I was being too conservative: the actual rate was 98%. Just as with human-based systems, no AI system is perfect; classifications are made when a confidence score exceeds some threshold. Set the threshold too low, as was undoubtedly the case here, and the result is many false positives. It’s always important for AI developers to weigh the impact of false positives and false negatives, and to take particular care when those determinations affect people’s lives.
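To make that trade-off concrete, here’s a minimal sketch in Python using synthetic, illustrative numbers (not data from the actual system): when the people scanned are overwhelmingly not on the watch list, even a fairly high threshold can leave most alerts as false positives.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic confidence scores (illustrative assumption): most faces scanned
# are NOT on the watch list -- 10,000 innocent passers-by vs. 10 true matches.
innocent_scores = rng.beta(2, 5, size=10_000)  # non-matches skew toward low scores
match_scores = rng.beta(5, 2, size=10)         # true matches skew toward high scores

for threshold in (0.3, 0.5, 0.7, 0.9):
    false_positives = int((innocent_scores >= threshold).sum())
    true_positives = int((match_scores >= threshold).sum())
    total_alerts = false_positives + true_positives
    # "Misidentification rate" in the sense reported: the share of alerts
    # that point at the wrong person.
    fp_share = false_positives / total_alerts if total_alerts else 0.0
    print(f"threshold={threshold:.1f}: {total_alerts:5d} alerts, "
          f"{fp_share:6.1%} of them false positives")
```

Because innocent faces outnumber genuine matches by a thousand to one here, even a threshold that screens out 99% of non-matches still produces far more false alerts than true ones, which is how a deployed system can end up with a 98% misidentification rate.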

During the podcast I also mentioned the Microsoft group headed by Kate Crawford which is doing some of the fundamental research into AI ethics, but failed to mention the name of the group: Fairness, Accountability, Transparency, and Ethics in AI (FATE). You can find their publications in the Microsoft Research Catalog — the AI Now Report in particular is worth reading for its recommendations.

You can find the Fringe FM podcast in your favorite podcast app (mine is Pocket Casts), or listen to Episode 46 directly at the link below.

Fringe FM: 46. The Ethics of AI in an Era of Big Data David Smith