Building a JavaScript AI Talking Plant 🌱 with Facial Recognition and Real Sensors

Mate Marschalko
9 min read · Aug 18, 2020

I think I covered most of the buzzwords in that title. Maybe I could squeeze in “Smart” and “Lasers” to make it complete!

To be fair, most of the heavy lifting in this project is done by existing AWS services; the actual work I did was making all these services work together.

It’s probably worth having a look at the finished project to put all this in context:

I think the end result is pretty impressive … there’s definitely a lot going on in this project so let’s try and break it down into smaller pieces. That is actually how I like getting started with complex projects: list all the problematic areas of the app and build a quick proof of concept for each of these.

In the case of this project, it meant building a quick example for:

  • Speech recognition and intent detection
  • Face and object recognition
  • Reading physical sensors

Once I had all these working I was ready to put everything together.
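For the sensor side, the proof of concept can be very small. As a hedged sketch (not the project's actual wiring or code): an analog soil-moisture sensor on an Arduino can be read from Node with the johnny-five library. The pin, the polling frequency, and the dry/wet calibration values below are all assumptions for illustration.

```javascript
// Map a raw 10-bit ADC reading (0-1023) onto a 0-100% moisture scale.
// dryValue and wetValue are calibration points: the raw readings you
// measure with the probe in dry air and in a glass of water. These
// defaults are made up for the example.
function moistureToPercent(raw, dryValue = 850, wetValue = 350) {
  const clamped = Math.min(Math.max(raw, wetValue), dryValue);
  return Math.round(((dryValue - clamped) / (dryValue - wetValue)) * 100);
}

// Live reading loop; needs `npm install johnny-five` and a board
// plugged in over USB, so the library is required lazily here.
function watchMoisture(onReading) {
  const five = require('johnny-five');
  const board = new five.Board();
  board.on('ready', () => {
    // Hypothetical wiring: sensor output on analog pin A0, read once a second
    const sensor = new five.Sensor({ pin: 'A0', freq: 1000 });
    sensor.on('data', function () {
      onReading(moistureToPercent(this.value));
    });
  });
}
```

With something like this in place, the rest of the app only ever sees a clean 0–100% number instead of raw analog readings.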

This post is not meant to be a complete, step-by-step tutorial. It’s more like an introduction to help you understand how you could build a complex project like this. There’s definitely too much to cover so here’s the source code:
https://github.com/webondevices/geroge-ii

Let’s now look at how I built this talking plant:

Speech recognition and intent detection

This definitely feels like an overwhelming task to tackle…

Would it not be awesome if we could simply tap into the capabilities of an existing voice assistant like Alexa? That would definitely save us a lot of time!

Well, that’s exactly what we can do with Amazon Lex!

Lex is an AWS service (https://aws.amazon.com/) that lets you use the voice capabilities of Alexa. You can actually…
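To give a feel for what talking to Lex looks like from JavaScript, here is a hedged sketch using the AWS SDK for JavaScript v2 and its `LexRuntime.postText` call. The bot name, alias, user id, and region are placeholders I made up, not values from the project.

```javascript
// Build the parameters for LexRuntime.postText.
function buildLexRequest(inputText) {
  return {
    botName: 'TalkingPlant', // hypothetical bot name
    botAlias: 'prod',        // hypothetical alias
    userId: 'plant-user-1',  // any stable id for the conversation
    inputText,               // the phrase for Lex to interpret
  };
}

// Pull the two fields we care about out of a Lex response.
function parseLexResponse(response) {
  return {
    intent: response.intentName, // the detected intent, e.g. 'AskHowAreYou'
    reply: response.message,     // the bot's configured reply text
  };
}

// The live call; needs `npm install aws-sdk` and AWS credentials,
// so the SDK is required lazily here.
async function detectIntent(phrase) {
  const AWS = require('aws-sdk');
  const lex = new AWS.LexRuntime({ region: 'us-east-1' });
  const response = await lex.postText(buildLexRequest(phrase)).promise();
  return parseLexResponse(response);
}
```

Separating the request-building and response-parsing from the network call keeps the intent-detection logic easy to test without a live bot.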

Written by Mate Marschalko

Senior Creative Developer, Generative AI, Electronics with over 15 years experience | JavaScript, HTML, CSS
