Alphabet's developers have given Google a powerful new feature to offer users, but critics warn about the blurring of lines between humans and artificial intelligence. Do we need to know when we are talking to a robot, or not?
Google has added to the creeping paranoia about the accelerating advance of artificial intelligence into our daily lives by announcing an upgrade to its virtual assistant that has sent a chill through some observers and, others say, raises ethical issues.
Androids and robots are becoming increasingly lifelike, but it is still possible to tell the difference. Now Alphabet's developers have given Google's assistant a human-sounding voice, and the new software will allow it to make calls on your behalf.
Based on the study of speech patterns, Google has created six voices for you to choose from...except that those or other voices could be used to phone you, and with human-sounding ums and ahs inserted, you would be none the wiser.
So some people are saying, "Hey Google...hold your horses!"
The demonstration at the annual Google I/O event also showed that the assistant can perform better than some humans when ordering a takeaway meal...in English, anyway.
The development does raise a novel issue. Does it matter whether you speak to a human or a machine, and does it matter if you don't know which you are talking to? The imitation of real people's voices poses obvious security risks that need some sort of regulation. But does it make any difference that your computer sounds human, or that a program rather than a salesperson can cold-call you to sell something? That argument appears less clear-cut.
Google's co-founder Sergey Brin has himself written a letter to Alphabet's shareholders expressing his hopes for the future, but also his reservations about AI and its reach.