NEW DELHI

Eight years after his lover died, a bereaved Canadian man used cutting-edge artificial intelligence software to have life-like online "chats" with her. Joshua Barbeau, 33, told the San Francisco Chronicle that he paid $5 to participate in a beta test of GPT-3, artificial intelligence software created by a research group co-founded by Elon Musk.


Barbeau said he used her past text messages and Facebook posts to make the chatbot resemble his late lover's writing voice. He lost his 23-year-old sweetheart, Jessica Pereira, in 2012.


Robots that converse with dead people


Barbeau was able to exchange text messages with an artificial "Jessica" using a deadbot, a type of chatbot. Despite the case's ethically contentious nature, I rarely came across coverage that went beyond the facts and examined it through an explicit normative lens: why would developing a deadbot be ethically right or wrong, desirable or repugnant?

Let's put things in perspective before tackling these questions: Jason Rohrer, a game developer, set up Project December to let people customise chatbots with the personality they wanted to interact with, provided they paid for it.



The project was built on the GPT-3 API. GPT-3 is a text-generating language model developed by OpenAI, an artificial intelligence research firm.
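To illustrate the general idea, the sketch below shows how a persona-conditioned chatbot could be wired up against the legacy OpenAI completions API (the pre-1.0 openai Python client). This is not Project December's actual code: the persona seed text, engine name, sampling parameters, and the chat helper are all illustrative assumptions.

```python
# Minimal sketch of a persona-conditioned chatbot on the legacy OpenAI
# completions API (openai < 1.0). Persona text and parameters are assumptions,
# not Project December's real configuration.
import openai

openai.api_key = "YOUR_API_KEY"  # in practice, load this from an env variable

# A short "seed" describing the personality the bot should imitate,
# typically followed by sample messages in that person's writing voice.
PERSONA_SEED = (
    "The following is a conversation with Jessica. "
    "She is warm, playful, and writes in short, affectionate messages.\n"
)

def chat(history, user_message, engine="davinci"):
    """Append the user's message to the transcript and ask GPT-3 to continue it."""
    prompt = PERSONA_SEED + history + f"User: {user_message}\nJessica:"
    response = openai.Completion.create(
        engine=engine,
        prompt=prompt,
        max_tokens=80,
        temperature=0.8,            # higher temperature -> more varied replies
        stop=["User:", "Jessica:"]  # stop before the model writes the next turn
    )
    reply = response.choices[0].text.strip()
    return reply, history + f"User: {user_message}\nJessica: {reply}\n"

# Example usage
history = ""
reply, history = chat(history, "Hey, it's been a while.")
print(reply)
```

The key design point is that the "personality" is not a separate model but a prompt: the seed text and accumulated transcript are prepended to every request, so the model continues the conversation in the established voice.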

Barbeau's case caused a rift between Rohrer and OpenAI, because the company's guidelines explicitly prohibit GPT-3 from being used for sexual, romantic, self-harm or bullying purposes.


Rohrer shut down the GPT-3 version of Project December, calling OpenAI's approach hyper-moralistic and arguing that people like Barbeau were "consenting adults."

While we may all have opinions on whether developing a machine-learning deadbot is a good idea, spelling out its implications is far from simple. That is why it is important to address the ethical questions raised by the case one by one.
