Google Assistant to become Duplex: What does that mean?

Duplex started testing in four US cities at the end of 2018, and will most likely be rolled out further this year. Google’s AI, available as beta software on Pixel smartphones, promises to revolutionize the services an assistant can provide on cellphones (and other devices), going well beyond what they can currently do. In May last year, Sundar Pichai unveiled this consumer-facing artificial intelligence, presented as a platform for managing everyday tasks such as booking restaurants and hair salon appointments; the challenge is to strike the right ethical balance between human and machine, a balance that will only come under more pressure in the society of the future.

The issue was highly controversial at the time, because the Assistant could deceive interlocutors into believing they were talking to a real person at the other end of the line. Google later confirmed that Duplex would announce itself at the beginning of every call, and more recently added that the system would become more transparent and more flexible.

What is Duplex?

Duplex is a fully automated system that makes calls on behalf of the user, complete with a synthesized voice that sounds very much like a person speaking. The software can also handle complex sentences, fast talking, pauses and interruptions, just as we do in ordinary, natural speech. Activities it can simplify by handling them automatically include diary planning, managing appointments, and requesting holiday leave from companies or sick notes from doctors. For Duplex to work, the person it’s calling must have consented to talking to an AI; this is the essential precondition for communication between people and the AI of the future. Google Duplex centers on a neural network built with the machine learning platform TensorFlow Extended (TFX). This recurrent neural network (RNN), as it’s known, allows the AI to process sequential and contextual information, which is what makes it well suited to language modeling and speech recognition.
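Google hasn’t published Duplex’s internals, but a tiny sketch can show what “recurrent” means in practice. The snippet below is a generic next-token language model built with TensorFlow’s Keras API, not Google’s actual TFX pipeline; the vocabulary size, layer widths and the random training data are placeholder assumptions chosen only to illustrate how an RNN carries context from one step of a sequence to the next.

```python
# Illustrative sketch only -- NOT Duplex. Sizes and data are made-up placeholders.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 8000    # assumed word/subword vocabulary size
EMBED_DIM = 128      # assumed embedding width
HIDDEN_UNITS = 256   # assumed recurrent state size

# An RNN maps a sequence of tokens to a sequence of hidden states, so the
# prediction at each step can depend on everything "heard" so far -- the
# property that makes recurrent models a natural fit for speech and language.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(HIDDEN_UNITS, return_sequences=True),
    tf.keras.layers.Dense(VOCAB_SIZE),  # logits over the next token at each step
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Dummy data just to show the expected shapes: batches of token-id sequences,
# where the target at each position is the token that follows it.
inputs = np.random.randint(0, VOCAB_SIZE, size=(32, 20))
targets = np.roll(inputs, shift=-1, axis=1)
model.fit(inputs, targets, epochs=1, verbose=0)
```

In a production system like the one the article describes, a model of this kind would sit alongside speech recognition and speech synthesis components and be trained on real conversation transcripts rather than random token IDs.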

Will it replace Google Assistant?

At the moment, digital personal assistants can only carry out limited requests, mostly confined to what happens on the cellphone itself. They are not dedicated assistants in the way we normally understand the term. You can ask them to turn on the lights or tune the radio, but they can’t cancel an appointment or do anything that takes more than a handful of bits. We certainly expect Duplex to supplant the AIs currently available in the quality and scope of the support it offers, although this won’t happen in the short term.

The advantages are clear, and could help overcome language barriers and disabilities: people with hearing impairments, tourists in foreign countries, and many others would benefit from software that can do the job on their behalf, and only when they want it to. It also promises to free up your time. When you’re spending too long online or are busy with other things, Duplex could carry out certain activities at given times without consulting you, simply following a command given even several days earlier. Such as? “Duplex, order two margherita pizzas for tomorrow night at 8 o’clock.” Just like that, tomorrow night’s dinner is taken care of. Or: “Duplex, if I’m not home by 12, cancel my appointment at the dentist.” And these are just a few examples; the list could go on.

A distant future? Not at all. New York, Atlanta, Phoenix and San Francisco are already getting a taste of it, and the really smart Assistant is ready to conquer the world.
