Being in China, I’m constantly discussing and covering
news about Artificial Intelligence (AI). And although “Alexa” doesn’t work so
well in China because of the great firewall, I still love the fact that, thanks
to AI, I can just shout and Alexa will pick a playlist that perfectly fits my
mood. Somehow, I feel like my standard of living is improved. In October 2018,
however, a small story broke about Amazon’s use of AI technology in hiring
staff.
Don’t feel bad if you didn’t hear about it; most people
didn’t.
But the story potentially has huge consequences and
should send shivers down the spines of everyone who cares about diversity in
television.
In 2014 a team at Amazon.com started developing a
computer programme to review job applicants’ applications, with the aim of
mechanizing the search for top talent. The AI employed in the programme
would be devoid of human emotions and prejudices and would objectively
evaluate potential candidates.
In many ways you would think this is exactly what people
who campaign for the employment of people from more diverse backgrounds have been
clamoring for: no more bosses simply employing their friends, or hiring in
their own image.
But by 2015 the bosses at Amazon started to realize that
something was going wrong.
The AI programme was weeding out all the women.
The computer programmers had given the AI system over ten
years’ worth of successful employees’ CVs, and so the AI programme had “learnt”
that one of the biggest indicators of whether a candidate would be successful
in the future was whether the candidate was male. Hence CVs from women were
rejected straight away. Far from removing prejudice, the computer programme only
amplified the prejudice of previous employment practices, combined with the
possible subconscious prejudices of the programmers (who happened to be
predominantly men).
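To see how this happens, here is a minimal, hypothetical sketch of a model that learns only the base rates in biased historical hiring data. The data and the scoring rule are invented for illustration; Amazon’s actual system was far more sophisticated, but the underlying mechanism is the same: the model faithfully reproduces the skew in its training data.

```python
from collections import Counter

# Invented "ten years of successful hires", heavily skewed towards men.
historical_hires = ["male"] * 90 + ["female"] * 10

# The toy "model" learns nothing but the frequency of each gender
# among past successful hires.
counts = Counter(historical_hires)
total = sum(counts.values())
learned_score = {gender: n / total for gender, n in counts.items()}

def score_candidate(gender):
    """Score a new candidate by how closely they resemble past hires."""
    return learned_score.get(gender, 0.0)

print(score_candidate("male"))    # 0.9
print(score_candidate("female"))  # 0.1
```

The model has not been told to prefer men; it has simply been rewarded for matching a biased historical pattern, which is why the prejudice is amplified rather than removed.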
For people interested in diversity in television this
case potentially has far wider consequences than just employment practices at
the online retailer.
In May 2018 Broadcast magazine published a piece explaining how AI could
be used in the television industry to analyse scripts and decide which ones get
green-lit or not, or at the very least which ones commissioners get to see from
the mass of scripts which are submitted.
AI could also be used to tweak scripts, changing the
gender or race of a particular character, for example, if its machine learning
has taught it that a woman or a white man is viewed more favourably by an
audience.
In 2017 the BBC announced it was partnering with eight
British universities to use AI to "better understand what audiences want
from the BBC."
The publication Engadget reported that “The BBC hopes machine learning can help it build ‘a more personal BBC’ with tools that could allow employees to make informed editorial and commissioning decisions.”
Looking at the example of Amazon, the potential pitfalls
of using AI to decide who to employ and which programmes to commission are
obvious and scary. But more worryingly, future problems might be far
harder to spot than in the Amazon example.
In America right now, people are so worried about the
potential prejudices being embedded into AI models that
journalists and lawyers are arguing that AI programmes should be open to
review and challenge for prejudice. That means people should be able to
independently analyse the computer code used to teach an AI
programme and make sure there aren’t any assumptions baked into the programme
that might discriminate against certain groups. Just as any human decision can
be challenged in court if sexism or racism, for example, is suspected,
computer programmes should also be subject to the same type of challenge if
prejudice is suspected.
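One simple check a reviewer could run is to compare selection rates between groups. In US employment law, the EEOC’s “four-fifths rule” treats a selection-rate ratio below 0.8 as evidence of possible adverse impact. The sketch below uses invented numbers purely to show the arithmetic; a real audit would be far more thorough.

```python
def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference group's rate."""
    return rate_group / rate_reference

# Hypothetical figures: 15 of 100 women selected vs 40 of 100 men.
women = selection_rate(15, 100)
men = selection_rate(40, 100)

ratio = adverse_impact_ratio(women, men)
print(round(ratio, 3))  # 0.375
print(ratio < 0.8)      # True: below the four-fifths threshold
```

A ratio of 0.375 is well below the 0.8 yardstick, so on these invented numbers the system would be flagged for closer scrutiny.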
I love AI – it improves my life in so many ways and I am not arguing for a return to a world where it doesn’t exist. Nor am I arguing for a return to a world where scripts are green-lit and people are employed just on a nod and a wink. However, I side with the US lawyers and journalists who want to make sure that we subject all decisions to scrutiny, whether those decisions are made by humans or by computers.
Now, after writing this blog post, I need to relax – Alexa, play me some 90s R&B.