5 ways artificial intelligence is appropriating gender
News, 13/11/2017

Calls to action for an immediate discussion about AI regulation

In an article published by Discover Magazine in July 2017, several artificial intelligence experts were asked to respond to a statement by Elon Musk about the threat advances in AI pose to humanity and the need to regulate the technology. Though many of the responses played down his concerns, some agreed that we need an immediate discussion of the ethics of AI, its regulation, and issues of diversity, responsible design, and sustainability. One expert, Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, made a striking point:

“As an AI educator and technologist, my foremost hope is to see much more inclusion and diversity in both the development of AI as well as the dissemination of AI voices and opinions.”

When we speak about inclusion and diversity in non-AI society, we speak about race, age, gender, class, sexual orientation, religion, political freedom, and the discursive space made for these identities. There are more opportunities than ever for members of society to get involved in developing intelligent programmes and chatbots. Companies like SnatchBot offer a free platform for building your own chatbot, along with access to a marketplace of bot templates, and several others enable the proliferation of chatbots across the digital landscape. If the boundaries between our biological lives and the lives of artificially intelligent programmes are soon to blur, then upholding these principles as we pass into a new kind of human-AI society over the coming decades is equally important. As it stands, the majority of voices you will encounter in this technology are the artificially feminine, stereotypically female voices of gendered AI. The design of the bots being developed for our use is skewed in favour of one type of identity: female, young and white.

In her paper ‘“I’d Blush if I Could”: Digital Assistants, Disembodied Cyborgs and the Problem of Gender’, Hilary Bergen examines how the feminine is transposed into our interactive AI technologies, and how gendered power relations dictate the mechanisation and commodification of femininity in the technology we install on our devices and envisage interacting with in real life. She argues that “virtual cyborgs are not only ubiquitously gendered female, but also rely on stereotypical traits of femininity both as a selling point and as a veil for their own commodification”, and that this process “works insidiously to efface not just the body of the cyborg, but the bodies of real women who make up the cyborg’s discursive network”. Looking at the relationship between gender and artificial intelligence, there are some key areas where gender is being appropriated by this technology.
