
Hacking AI for Fun and Profit


Posted: June 9, 2017 | Categories: Natural Language Processing

I knew all along that AI was a component of the World Without Apps. What I didn't expect was how quickly AI-driven actions would become a home-grown option. I subscribe to The MagPi magazine (a publication from the Raspberry Pi Foundation) and even got my first project published in the current issue (https://www.raspberrypi.org/magpi/issues/58/).

In the previous issue, the magazine included the complete Voice Kit from the Google AIY (AI Yourself) project: essentially a build-it-yourself Google Assistant, including an enclosure, speaker, button, Voice HAT, and more. My son and I quickly assembled the project, and now he has an almost-Google Home device in his bedroom. It's remarkable that Google funded shipping thousands of these kits all around the world, but what's more interesting is that the core project is extensible. The project runs on a Raspberry Pi, and you can add commands to the Google Assistant project code. Once you do that, you write the code that responds to your specific voice commands, and you can make this device do anything, absolutely anything.

Google AIY Project

Regular Google Assistant requests go to the cloud for execution (searches, weather reports, and so on), but if you connect specialized hardware to it (not anything specific, just any hardware you can control from the Raspberry Pi) and add your own code to handle it, suddenly your project becomes much more interesting.

From a Maker standpoint, this dramatically expands the types of projects I can build with this kit. I no longer have to deal with ANY of the complexities of voice interaction; the platform (Google AIY) takes care of that for me. All I have to do is connect my hardware, add the command to the recognizer's command list, write some code, and I'm all set.
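To make that concrete, here's a minimal sketch of what a custom voice command looks like, modeled on the cloudspeech demo that ships with the Voice Kit. The trigger phrase and the LED action are just placeholders (swap in whatever hardware you've wired to the Pi), and the aiy module names come from the kit's 2017 Python libraries, so treat the exact calls as assumptions that may shift between kit revisions:

```python
# Minimal sketch of a custom voice command on the AIY Voice Kit,
# modeled on the kit's cloudspeech demo. Module and function names
# assume the 2017 AIY Python libraries.
import aiy.audio
import aiy.cloudspeech
import aiy.voicehat

def main():
    recognizer = aiy.cloudspeech.get_recognizer()
    # Register the custom phrase so the recognizer favors it.
    recognizer.expect_phrase('turn on the light')

    button = aiy.voicehat.get_button()
    led = aiy.voicehat.get_led()
    aiy.audio.get_recorder().start()

    while True:
        print('Press the button, then speak.')
        button.wait_for_press()
        text = recognizer.recognize()
        if text and 'turn on the light' in text:
            # Placeholder action: drive the kit's LED. Swap this for
            # any hardware you can control from the Raspberry Pi.
            led.set_state(aiy.voicehat.LED.ON)

if __name__ == '__main__':
    main()
```

The pattern is exactly what I described above: declare the phrase, wait for input, and branch on the recognized text to drive your own hardware.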

This will ultimately take us to something I'm worried about: AI everywhere. When companies (and hackers) start embedding AI into everything around us, suddenly we have multiple systems all listening for commands and stepping over each other. I got a good taste of this while watching this spring's Google I/O keynote. Every time a presenter said "OK Google" to demonstrate a new capability on stage, my local Google Home device woke up and answered. I had to put my phone in my pocket during the presentation so it couldn't hear and answer as well.

What do you do when your car and phone both have AI capabilities? How does each device know it's the target of the query? Will I then need to preface every query or command with the name of the device I'm targeting? Probably at first, but ultimately we'll get to a single, overarching AI that understands what it can interact with locally. You'll speak to the ceiling, or your thumb, or whatever, and an available, compatible device will perform the action for you, whatever it is.

That’s where this is ultimately going, I’m certain of it. When that happens, we’re in the World Without Apps.

Photo by Glen Carrie on Unsplash
