A few months ago, I wrote an article on web speech recognition using TensorFlow.js. Even though it was super interesting to implement, it was hard for many of you to extend. The reason was pretty simple: it required a deep learning model to be retrained if you wanted to detect more words than the fairly basic model I provided.

For those of you who needed a more practical approach, that article wasn’t enough. Following your requests, today I’m writing about how you can bring full speech recognition to your web applications using the Web Speech API.

But before we address the actual implementation, let’s look at some scenarios where this functionality may be helpful:

  • Building an application for situations where it is not possible to use a keyboard or touch devices. For example, people working in the field while wearing protective gloves find it hard to interact with input devices.
  • To support people with disabilities.
  • Because it’s awesome!

What’s the secret to powering web apps with speech recognition?

The secret is the Chrome (or Chromium) Web Speech API. This API, which works with Chromium-based browsers, is amazing and does all the heavy lifting for us, leaving us to care only about building better voice-driven interfaces.
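To give a feel for how little work is left to us, here is a minimal sketch of wiring up a recognizer. The `createRecognizer` helper is a hypothetical name of my own; the `SpeechRecognition` constructor, `lang`, `interimResults`, and the `onresult` event are the API surface Chrome exposes (under a `webkit` prefix).

```javascript
// Hypothetical helper: build a recognizer from a SpeechRecognition
// constructor and a callback that receives each final transcript.
function createRecognizer(SpeechRecognition, onTranscript) {
  const recognition = new SpeechRecognition();
  recognition.lang = 'en-US';         // language to recognize
  recognition.interimResults = false; // deliver only final results
  recognition.onresult = (event) => {
    // Each result holds alternatives; take the top transcript
    // of the most recent result.
    const last = event.results[event.results.length - 1];
    onTranscript(last[0].transcript);
  };
  return recognition;
}
```

In the browser you would use it along these lines (starting the recognizer prompts the user for microphone permission):

```javascript
// const Ctor = window.SpeechRecognition || window.webkitSpeechRecognition;
// const rec = createRecognizer(Ctor, (text) => console.log('Heard:', text));
// rec.start();
```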

However amazing this API is, as of November 2020 it is not widely supported, and that can be an issue depending on your requirements. Here is the current support status. Additionally, it only works online, so you will need a different setup if you are offline.
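Given that patchy support, it is worth feature-detecting the API before showing any voice UI. The helper below is a sketch of my own: it takes a global-like object (so it can be exercised outside the browser) and returns whichever constructor is available, or `null` when neither is.

```javascript
// Hypothetical helper: feature-detect the Web Speech API.
// Chromium-based browsers expose it under the `webkit` prefix.
function getSpeechRecognition(globalObj) {
  return (
    globalObj.SpeechRecognition ||
    globalObj.webkitSpeechRecognition ||
    null
  );
}

// In the browser:
// const Ctor = getSpeechRecognition(window);
// if (!Ctor) { /* hide the voice UI or fall back to text input */ }
```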