On Tuesday, Google unveiled its latest foray into machine learning, which may just come as a welcome respite to those with cluttered inboxes.
Dubbed Smart Reply, the new Inbox app feature suggests three short replies based on an email’s content, gradually customizing its generated text as it learns the user’s voice and style. Users can select and send a suggested response immediately, or edit the text if the reply calls for something a bit more emotional, a register Smart Reply doesn’t yet understand.
While the feature offers a real solution to a problem most working adults face, it also invites a fresh look at the privacy question. As life becomes increasingly digital, the line between privacy and convenience continues to blur.
Google already scans your emails and search history to target ads, so what’s really the difference? Google maintains that in developing Smart Reply it “adhered to the same rigorous user privacy standards we’ve always held – in other words, no humans reading your email.”
But of course, just because humans aren’t reading it doesn’t mean they can’t, as it’s no secret Google has access to all your data.
Google is explicit in its terms and conditions that by using Google services, “you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works, communicate, publish, publicly perform, publicly display and distribute such content.” Though we may not have read it, we all checked the “I agree” box.
So why is this seeming invasion of privacy no longer as concerning as it once was?
In his book “The Googlization of Everything,” author Siva Vaidhyanathan attributes the shift in thought to perceived trust.
“[W]e now allow Google to determine what is important, relevant, and true on the Web and in the world. We trust and believe that Google acts in our best interest.” At the same time, Vaidhyanathan suggests we have also “surrendered control over the values, methods, and processes that make sense of our information ecosystem.”