This is an experimental technology
Check the Browser compatibility table carefully before using this in production.
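Because support is uneven, it's worth feature-detecting both halves of the API before relying on them. A minimal sketch (the `SpeechRecognition` constructor is vendor-prefixed as `webkitSpeechRecognition` in Chrome):

```javascript
// Feature-detect both halves of the Web Speech API.
// globalThis is the window object in a browser, so this degrades
// quietly in environments without the API.
const SpeechRecognitionCtor =
  globalThis.SpeechRecognition || globalThis.webkitSpeechRecognition;

const hasRecognition = typeof SpeechRecognitionCtor === "function";
const hasSynthesis = typeof globalThis.speechSynthesis !== "undefined";

console.log("Speech recognition supported:", hasRecognition);
console.log("Speech synthesis supported:", hasSynthesis);
```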
The Web Speech API enables you to incorporate voice data into web apps. It has two parts: SpeechSynthesis (text-to-speech) and SpeechRecognition (asynchronous speech recognition).

There are two components to this API:
- The SpeechRecognition interface provides the ability to recognize voice context from an audio input (normally via the device's default speech recognition service) and respond appropriately. Generally you'll use the interface's constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device's microphone. The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognize. Grammar is defined using the JSpeech Grammar Format (JSGF).
- The SpeechSynthesis interface is a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesizer). Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method.

For more details on using these features, see Using the Web Speech API.
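The flow described above can be sketched as follows. This is a browser-oriented sketch, guarded so it runs quietly where the API is missing; the color-picking JSGF grammar is a hypothetical example:

```javascript
// Speech recognition: build a grammar, wire up a result handler, start listening.
const Recognition =
  globalThis.SpeechRecognition || globalThis.webkitSpeechRecognition;
const GrammarList =
  globalThis.SpeechGrammarList || globalThis.webkitSpeechGrammarList;

// A small JSGF grammar: the words the app should recognize.
const grammar = "#JSGF V1.0; grammar colors; public <color> = red | green | blue;";

if (Recognition) {
  const recognition = new Recognition();
  if (GrammarList) {
    const grammars = new GrammarList();
    grammars.addFromString(grammar, 1); // weight 1 = highest importance
    recognition.grammars = grammars;
  }
  recognition.lang = "en-US";
  recognition.onresult = (event) => {
    // The most recent result's top alternative holds the transcript.
    const last = event.results[event.results.length - 1];
    console.log("Heard:", last[0].transcript);
  };
  recognition.start();
}

// Speech synthesis: wrap text in an utterance and pass it to speak().
if (globalThis.speechSynthesis) {
  const utterance = new SpeechSynthesisUtterance("Hello from the Web Speech API");
  utterance.pitch = 1;
  utterance.rate = 1;
  globalThis.speechSynthesis.speak(utterance);
}
```

Note that `recognition.start()` will prompt the user for microphone permission in a real browser session.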
- SpeechRecognition — The controller interface for the recognition service; this also handles the SpeechRecognitionEvent sent from the recognition service.
- SpeechRecognitionAlternative — Represents a single word that has been recognized by the speech recognition service.
- SpeechRecognitionError — Represents error messages from the recognition service.
- SpeechRecognitionEvent — The event object for the result and nomatch events, and contains all the data associated with an interim or final speech recognition result.
- SpeechGrammar — The words or patterns of words that we want the recognition service to recognize.
- SpeechGrammarList — Represents a list of SpeechGrammar objects.
- SpeechRecognitionResult — Represents a single recognition match, which may contain multiple SpeechRecognitionAlternative objects.
- SpeechRecognitionResultList — Represents a list of SpeechRecognitionResult objects, or a single one if results are being captured in continuous mode.
- SpeechSynthesis — The controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and other commands besides.
- SpeechSynthesisErrorEvent — Contains information about any errors that occur while processing SpeechSynthesisUtterance objects in the speech service.
- SpeechSynthesisEvent — Contains information about the current state of SpeechSynthesisUtterance objects that have been processed in the speech service.
- SpeechSynthesisUtterance — Represents a speech request. It contains the content the speech service should read and information about how to read it (e.g. language, pitch and volume).
- SpeechSynthesisVoice — Represents a voice that the system supports. Every SpeechSynthesisVoice has its own relative speech service including information about language, name and URI.
- Window.speechSynthesis — Specced out as part of a [NoInterfaceObject] interface called SpeechSynthesisGetter, and implemented by the Window object, the speechSynthesis property provides access to the SpeechSynthesis controller, and therefore the entry point to speech synthesis functionality.

The Web Speech API repo on GitHub contains demos to illustrate speech recognition and synthesis.
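To illustrate SpeechSynthesisVoice in practice, the sketch below chooses a voice for a given language tag. `pickVoice` is a hypothetical helper, written as a pure function so it works on any array of voice-like objects:

```javascript
// Hypothetical helper: choose a voice whose lang matches the given
// BCP 47 prefix, preferring the platform's default voice when several match.
function pickVoice(voices, lang) {
  const matches = voices.filter((v) => v.lang.startsWith(lang));
  if (matches.length === 0) return null;
  return matches.find((v) => v.default) || matches[0];
}

// In a browser, feed it the real voice list:
if (globalThis.speechSynthesis) {
  const voice = pickVoice(globalThis.speechSynthesis.getVoices(), "en");
  const utterance = new SpeechSynthesisUtterance("Voice test");
  if (voice) utterance.voice = voice;
  globalThis.speechSynthesis.speak(utterance);
}
```

Note that getVoices() may return an empty list before the voices have loaded; listening for the voiceschanged event on speechSynthesis avoids that race.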
| Specification | Status | Comment |
|---|---|---|
| Web Speech API | Draft | Initial definition |
`SpeechRecognition`

| Desktop | Chrome | Edge | Firefox | Internet Explorer | Opera | Safari |
|---|---|---|---|---|---|---|
| Basic support | 33 | ? | No | No | No | No |
| Mobile | Android webview | Chrome for Android | Edge Mobile | Firefox for Android | Opera for Android | iOS Safari | Samsung Internet |
|---|---|---|---|---|---|---|---|
| Basic support | ? | Yes | ? | No | No | No | ? |
`SpeechSynthesis`

| Desktop | Chrome | Edge | Firefox | Internet Explorer | Opera | Safari |
|---|---|---|---|---|---|---|
| Basic support | 33 | Yes | 49 | No | 21 | 7 |
| Mobile | Android webview | Chrome for Android | Edge Mobile | Firefox for Android | Opera for Android | iOS Safari | Samsung Internet |
|---|---|---|---|---|---|---|---|
| Basic support | 4.4.3 | 33 | Yes | 62 | No | 7.1 | ? |
© 2005–2018 Mozilla Developer Network and individual contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API