Significantly, many computer document systems use methods and algorithms to correct spelling and grammar in online text, but corrections for 'inclusive language' are not widely used.
Aiming to curb the use of politically incorrect words online, Google has launched an "inclusive language" function to avoid biases, slang or expressions that discriminate against groups of people on the basis of race, gender or socioeconomic status.
Typing 'politically incorrect' words will prompt a warning message to users; for example, writing 'landlord' will be flagged.
The message apparently reads: "Inclusive warning. Some of these words may not be inclusive to all readers. Consider using different words."
As with 'landlord', users will be warned when typing the word 'mankind', for which the suggested alternative is 'humankind'. The word mankind has been treated as a controversial term because it is not gender-inclusive.
As reported by the Daily Mail, the new Google Docs-style programme will also flag gender-specific terms such as 'policemen' and 'housewife', suggesting alternatives like 'police officers' and 'stay-at-home spouse'.
The feature is being rolled out to what the firm calls enterprise-level users, the Mail report added.
The report mentioned that the system is apparently flawed. For example, a transcribed interview with ex-Ku Klux Klan leader David Duke, in which Duke used offensive racial slurs, prompted no warnings.
However, former US President John F Kennedy's inaugural address was flagged. The warning message suggested that he should say "for all humankind" instead of "for all mankind".
Is it a step too far?
As quoted by Daily Mail, Silkie Carlo, of campaign group Big Brother Watch, told the Sunday Telegraph: "Google’s new word warnings aren't assistive, they're deeply intrusive. This speech-policing is profoundly clumsy, creepy and wrong, often reinforcing bias."
Sam Bowman, of the online magazine Works in Progress, said: "It feels pretty hectoring and adds an unwanted political/cultural slant to what I'd rather was a neutral product [as] a user."
Meanwhile, a Google spokesman said: "Our technology is always improving, and we don’t yet [have] a solution to identifying and mitigating all unwanted word associations and biases."