Meet Your New Colleague: Artificial Intelligence

By Andie Burjek

Nov. 9, 2018

Artificial intelligence is increasingly people’s interviewer, colleague and competition. As it burrows further into the workplace and into different job functions, it can take over certain tasks, learn over time and even hold conversations. Many of us may not realize that the “who” we’re talking to isn’t a “who” at all but a “what.”

In 2017, 61 percent of businesses said they implemented AI, compared to 38 percent in 2016, according to the “Outlook on Artificial Intelligence in the Enterprise 2018” report from Narrative Science, an artificial intelligence company, in collaboration with the National Business Research Institute. In the communication arena, 43 percent of these businesses said they send AI-powered communications to employees.

Many candidates don’t even realize that they’re not speaking to a human, according to Sahil Sahni, co-founder of computer software company AllyO, which uses an AI-enabled chatbot to speak to candidates and answer questions in the recruiting process.

Based on data from AllyO’s applicants, he found that fewer than 30 percent of candidates realize they’re speaking to something that isn’t human. The other 70 percent either did not disclose what they thought or believed there was a person behind the chatbot.

AllyO does not disclose up front that the candidate is not speaking to a human. However, if a candidate asks outright whether they are speaking to a person or an AI-enabled chatbot, the system discloses that information. “The goal is not to goof anyone here. The goal is to have the best candidate experience. Lying about it is not the best candidate experience,” Sahni said.
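AllyO has not published how this works internally, but the disclose-when-asked behavior Sahni describes can be pictured as a simple conditional check. The sketch below is purely illustrative; the trigger phrases and function names are invented for the example.

```python
# Illustrative sketch only: disclose bot identity when the candidate asks.
# The trigger phrases and handlers are hypothetical, not AllyO's real code.
DISCLOSURE_TRIGGERS = ("are you a robot", "are you human", "is this a bot",
                       "am i talking to a person")

def answer_recruiting_question(text: str) -> str:
    """Placeholder for the normal recruiting Q&A path."""
    return "Thanks! Let me look into that for you."

def handle_message(text: str) -> str:
    """Route a candidate message, disclosing bot identity if asked outright."""
    normalized = text.lower()
    if any(trigger in normalized for trigger in DISCLOSURE_TRIGGERS):
        return ("I'm an AI-powered assistant helping with your application. "
                "Anything I can't answer goes to a human recruiter.")
    return answer_recruiting_question(normalized)

print(handle_message("Wait, am I talking to a person?"))
```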

Candidates don’t behave differently when speaking to an AI as opposed to a human, Sahni added.

“When you’re a job seeker, it’s not like you’re calling customer service to complain about something. You’re at your best behavior,” he said. “You tend to be a lot more tolerant, you tend to be a lot more respectful, no matter what the process might be.”

Dennis R. Mortensen, CEO and founder of New York-based technology company X.ai, also has access to conversations between people and machine agents; his team has spent the past four years assembling a data set of more than 10 million emails from these dialogues. They similarly found that people don’t communicate differently just because they’re speaking to a robot.

Giving X.ai’s own personal assistants Amy and Andrew as an example, he said, “It would be very easy to imagine that I will treat them like machines and remove any level of emotion otherwise applied to a traditional conversation with a human, or that the system as a whole would not leave any room for empathy toward the machine. I am happy to say that it is not the case.”

This is not to say that everyone treats a machine with respect. If people tend to be more aggressive or rude with a real person, that same communication style can be seen in how they converse with a machine. The same trend goes with people who are neutral or overly friendly in how they speak to others.

How potential employees actually speak to AI is a different question from how they should speak to AI, he added. That is, it’s unclear whether how a person treats a machine says anything about how that person would treat other people, and whether something like rudeness to a machine agent should affect their job prospects.

“We can certainly agree that we do care if it’s a human recruiting coordinator,” Mortensen said. But machines have no feelings or emotions and cannot be offended, so it would be easy to argue that employers shouldn’t care. Ultimately, “I do think we should care even if it is a machine,” Mortensen said. “I understand why we might care a little bit less, but I don’t think we can just discard that as a signal.”

He cited a report that found this technology could affect how kids learn to communicate, teaching them that speaking harshly or impolitely to people has no consequences.

“In real life there’s a penalty to being an asshole,” Mortensen said.

Limits and Capabilities of AI in the Hiring Process

Machine learning allows AI to gain knowledge over time and learn from its interactions, much like a person would. Even so, the ability to mature and become more humanlike over time doesn’t make it human, and certain questions, such as those about company culture, may still need a person to answer them, according to Sahni.

AI systems can be built to account for this. AllyO, for example, recognizes when a candidate asks a question that a machine cannot answer and brings in a person who can, Sahni said. This way, the candidate has a positive experience and doesn’t feel shortchanged by not speaking to a real person.

“If the process is objective, AI knocks it out of the park. If the process has any subjectivity to it, AI does really well looping in the hiring team,” he said. “A good AI system typically has human support behind it.”
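Sahni doesn’t detail AllyO’s internals, but the objective-versus-subjective split he describes matches a common confidence-threshold handoff pattern. The sketch below is a generic illustration under that assumption; the threshold, reply object and queue are all hypothetical.

```python
# Minimal sketch of a confidence-based human handoff, assuming the bot can
# score how likely its own answer is to be correct. All names hypothetical.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # below this, loop in the hiring team

@dataclass
class BotReply:
    text: str
    confidence: float  # model's self-estimated answer quality, 0 to 1

def respond(question: str, reply: BotReply, human_queue: list) -> str:
    """Answer objective questions directly; escalate subjective ones."""
    if reply.confidence >= CONFIDENCE_THRESHOLD:
        return reply.text
    # Subjective or unfamiliar question (e.g., company culture): hand off.
    human_queue.append(question)
    return ("Great question! I've passed it along to the hiring team, "
            "and someone will follow up with you shortly.")

queue: list = []
reply = BotReply(text="Interviews usually take 45 minutes.", confidence=0.95)
print(respond("How long is the interview?", reply, queue))
```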

Much like people themselves, AI has the potential for bias, according to Eric Shangle, director of people operations at AI platform Figure Eight, based in San Francisco. For example, Wired reported in July 2018 that Amazon’s facial recognition system Rekognition falsely matched black members of Congress with publicly available mugshots, and that facial recognition technology’s difficulty with darker skin tones is a well-established problem.

One reason a tool may be biased is training data bias, Shangle said. On the development side of machine learning, the creator of a tool must feed the algorithm a data set to train on, and if that data set is not diverse, an employer using the tool may run into bias blind spots.
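One way employers can probe for this kind of skew is to compare outcome rates across groups in the training data itself, for example with the “four-fifths” adverse-impact rule of thumb used in U.S. hiring analysis. The sketch below is illustrative; the data set and field names are invented.

```python
# Illustrative training-data bias check using the four-fifths rule: if a
# group's positive-label rate is under 80 percent of the highest group's,
# a model trained on this data is likely to inherit the skew.
from collections import defaultdict

def selection_rates(records):
    """Return the positive-outcome rate per group in the training data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        positives[group] += int(hired)
    return {g: positives[g] / totals[g] for g in totals}

# Invented example: historical hiring labels skewed toward group A.
training_data = ([("A", True)] * 60 + [("A", False)] * 40
                 + [("B", True)] * 30 + [("B", False)] * 70)

rates = selection_rates(training_data)
benchmark = max(rates.values())
for group, rate in rates.items():
    flag = "FLAG" if rate / benchmark < 0.8 else "ok"
    print(f"group {group}: rate={rate:.2f} ratio={rate / benchmark:.2f} {flag}")
```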

“What are the biases of this tool?” is a legitimate question for employers who are looking to purchase a machine learning tool such as facial recognition software, Shangle said. A recruiting tool may, for example, have a bias toward college-educated job seekers.

David Dalka, founder of Chicago-based management consulting company Fearless Revival, agrees that AI has its limits. He takes a more traditional view of recruiting, arguing that companies should invest less in technology and more in human recruiters who stay with the company long-term, know its culture and know what kind of person would best fit the job, rather than hunting for trendy keywords or job titles in résumés.

“I’m not opposed to AI tools if someone built the full data library of all the factors and stopped focusing trivially on things like job titles,” he said.

He suggested that companies more carefully consider the attributes that matter in a candidate (Do they read books? Are they naturally curious? What are their skills and degrees?) and how they would weight those attributes in an AI system. Ultimately, AI is simply a tool that analyzes content.
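Dalka’s argument is that the weighting should be chosen and examined by people rather than left to a black box. As a toy illustration only (the attributes and weights below are invented, not Dalka’s), an explicit scoring model makes every choice visible and debatable.

```python
# Toy illustration of explicit, human-chosen attribute weights for screening.
# Attributes and weights are invented; the point is that they are visible
# and can be questioned, unlike a black-box model.
WEIGHTS = {
    "reads_books": 0.15,      # rough signal of curiosity
    "curiosity_score": 0.35,  # e.g., from a structured interview rubric
    "skills_match": 0.40,
    "degree_relevant": 0.10,
}

def score_candidate(attributes: dict) -> float:
    """Weighted sum of normalized (0 to 1) attribute scores."""
    return sum(WEIGHTS[name] * attributes.get(name, 0.0) for name in WEIGHTS)

candidate = {"reads_books": 1.0, "curiosity_score": 0.8,
             "skills_match": 0.6, "degree_relevant": 1.0}
print(f"candidate score: {score_candidate(candidate):.2f}")  # 0.77
```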

“This idea that some wizard will magically create this black box that will hire the right people without you thinking of these things is a fallacy,” Dalka said.

This article originally appeared in Talent Economy.

Andie Burjek is an associate editor at Workforce.com.
