OK, I may have exaggerated a bit... But let's get straight to the point and then add some details.
In order to make your browser talk, just write this line of code:
speechSynthesis.speak(new SpeechSynthesisUtterance("Hello!"));
How simple is that?! Many years ago it would have been unthinkable to do such a thing, but nowadays modern browsers give you access to the new Web Speech API.
In order to make our browser speak, we use the SpeechSynthesisUtterance interface.
First, we have to create a new SpeechSynthesisUtterance instance; its constructor accepts a single parameter, the text to be spoken:
const utterance = new SpeechSynthesisUtterance("Hello!");
There are a lot of SpeechSynthesisUtterance properties we can manage: lang, voice, rate, pitch, volume and text.
If you want to explore all of them, you can browse the MDN docs.
Let's discover how we can "tune" the browser's voice:
utterance.lang = "en-US"; // A string representing a BCP 47 language tag
utterance.rate = 0.5; // Speed of the speech, from 0.1 to 10 (default is 1)
utterance.pitch = 2; // Pitch of the voice, from 0 to 2 (default is 1)
utterance.volume = 0.5; // Volume, from 0 (mute) to 1 (max)
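The voice property deserves a special mention: you can pick one of the voices installed on the user's system via speechSynthesis.getVoices(). In some browsers the list is available right away, in others it is populated asynchronously, so here is a minimal sketch that covers both cases (it reuses the utterance from above, and the "pick the first English voice" rule is just an example):

const pickEnglishVoice = () => {
  const voices = speechSynthesis.getVoices();
  // Use the first English voice if one is available, otherwise keep the default
  const englishVoice = voices.find((voice) => voice.lang.startsWith("en"));
  if (englishVoice) utterance.voice = englishVoice;
};

pickEnglishVoice();
speechSynthesis.addEventListener("voiceschanged", pickEnglishVoice);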
And now, let the browser speak!
speechSynthesis.speak(utterance);
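Keep in mind that speechSynthesis.speak() doesn't interrupt anything: every utterance is added to a queue and spoken one after the other. If you need to stop everything, you can clear the queue with speechSynthesis.cancel(). A quick sketch:

const first = new SpeechSynthesisUtterance("First sentence");
const second = new SpeechSynthesisUtterance("Second sentence");

// Both utterances are queued and spoken in order
speechSynthesis.speak(first);
speechSynthesis.speak(second);

// Later on: stop the current utterance and empty the queue
speechSynthesis.cancel();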
We can also listen to quite a few events.
For instance, let's print a message to the console when the speech starts and ends:
utterance.onstart = (event) => console.log("Speech has started", event);
utterance.onend = (event) => console.log("Speech has ended", event);
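Two other handy events are error and boundary (the latter fires at word and sentence boundaries while speaking), for example:

utterance.onerror = (event) => console.error("Speech error:", event.error);
// charIndex tells you where in the text the boundary was reached
utterance.onboundary = (event) =>
  console.log(`Reached a ${event.name} boundary at character ${event.charIndex}`);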
Speech synthesis is well supported in all major browsers, whereas speech recognition is only partially supported.
If you want to check whether your browser supports these features, you can browse caniuse.com for more details.
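You can also add a simple feature-detection check before using the API, for example:

if ("speechSynthesis" in window) {
  speechSynthesis.speak(new SpeechSynthesisUtterance("Hello!"));
} else {
  console.warn("Sorry, your browser does not support speech synthesis.");
}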
NB: There are still some cross-browser issues with the onpause and onresume events, so use them carefully.
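If you still want to experiment with them, the global speechSynthesis object exposes pause() and resume(). Here is a minimal sketch (the timings are arbitrary, just for the demo, and your mileage may vary across browsers, especially on mobile):

utterance.onpause = (event) => console.log("Speech has been paused", event);
utterance.onresume = (event) => console.log("Speech has resumed", event);

speechSynthesis.speak(utterance);

// Pause after 2 seconds and resume after 4
setTimeout(() => speechSynthesis.pause(), 2000);
setTimeout(() => speechSynthesis.resume(), 4000);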
Theory is useless without practice, so I created a StackBlitz project with a bunch of SpeechSynthesisUtterance features you can play with.