Voice computing

From Wikipedia, the free encyclopedia
[Image: The Amazon Echo, an example of a voice computer]

Voice computing is the discipline that develops hardware or software to process voice inputs.[1]

It spans many other fields including human-computer interaction, conversational computing, linguistics, natural language processing, automatic speech recognition, speech synthesis, audio engineering, digital signal processing, cloud computing, data science, ethics, law, and information security.

Voice computing has become increasingly significant in modern times, especially with the advent of smart speakers like the Amazon Echo and Google Home, a shift towards serverless computing, and improved accuracy of speech recognition and text-to-speech models.

History


Voice computing has a rich history.[2] Scientists like Wolfgang von Kempelen built early speech machines to produce the first synthetic speech sounds. Thomas Edison later made it possible to record audio with dictation machines and play it back in corporate settings. In the 1950s and 1960s, Bell Labs, IBM, and others made primitive attempts to build automated speech recognition systems. However, speech recognition systems did not become practical until the 1980s, when Hidden Markov Models enabled the recognition of vocabularies of up to 1,000 words.

1784: Wolfgang von Kempelen creates the Acoustic-Mechanical speech machine.
1879: Thomas Edison invents the first dictation machine.
1952: Bell Labs releases Audrey, capable of recognizing spoken digits with 90% accuracy.
1962: IBM Shoebox can recognize up to 16 words.
1971: Harpy is created, which can understand over 1,000 words.
1986: IBM Tangora uses Hidden Markov Models to predict phonemes in speech.
2006: The National Security Agency begins research into hotword detection during normal conversations.
2008: Google launches a voice search application, bringing speech recognition to mobile devices.
2011: Apple releases Siri on the iPhone.
2014: Amazon releases the Amazon Echo, bringing voice computing to the public at large.

Around 2011, Siri emerged on Apple iPhones as the first voice assistant widely accessible to consumers. This innovation prompted a dramatic shift toward voice-first computing architectures. Sony released the PlayStation 4 in North America in 2013 (70+ million devices), Amazon released the Amazon Echo in 2014 (30+ million devices), Microsoft released Cortana in 2015 (400 million Windows 10 users), Google released Google Assistant in 2016 (2 billion monthly active users on Android phones), and Apple released the HomePod in 2018 (500,000 devices sold, with 1 billion devices active with iOS/Siri). These releases, along with advances in cloud infrastructure (e.g. Amazon Web Services) and audio codecs, have solidified the voice computing field and made it widely relevant to the public at large.

Hardware


A voice computer is hardware and software assembled to process voice inputs.

Note that voice computers do not necessarily need a screen; the original Amazon Echo is one example. In other embodiments, traditional laptop computers or mobile phones can serve as voice computers. Moreover, the number of interfaces for voice computers has grown with the advent of IoT-enabled devices, such as those in cars or televisions.

As of September 2018, there are over 20,000 types of devices compatible with Amazon Alexa.[3]

Software


Voice computing software can read and write, record, clean, encrypt and decrypt, play back, transcode, transcribe, compress, publish, featurize, model, and visualize voice files.
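The most basic of these operations, reading and writing a voice file, can be illustrated with a minimal sketch using only Python's standard-library wave module; the file name and tone parameters here are illustrative, not taken from any particular package above:

```python
import math
import struct
import wave

SAMPLE_RATE = 16000  # 16 kHz, a common sampling rate for speech processing


def write_tone(path, freq=440.0, seconds=1.0):
    """Write a mono 16-bit WAV file containing a pure sine tone."""
    n_frames = int(SAMPLE_RATE * seconds)
    frames = b"".join(
        struct.pack(
            "<h",
            int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)),
        )
        for i in range(n_frames)
    )
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)   # mono
        wf.setsampwidth(2)   # 16-bit samples
        wf.setframerate(SAMPLE_RATE)
        wf.writeframes(frames)


def read_info(path):
    """Read back the sample rate and frame count of a WAV file."""
    with wave.open(path, "rb") as wf:
        return wf.getframerate(), wf.getnframes()


write_tone("tone.wav")
rate, frames = read_info("tone.wav")
print(rate, frames)  # 16000 16000
```

Real pipelines would replace the synthetic tone with recorded speech, but the read/write mechanics are the same.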

Here are some popular software packages related to voice computing:

FFmpeg: transcodes audio files from one format to another (e.g. WAV to MP3).[4]
Audacity: records and filters audio.[5]
SoX: manipulates audio files and removes environmental noise.[6]
Natural Language Toolkit: featurizes transcripts with things like parts of speech.[7]
LibROSA: visualizes audio file spectrograms and featurizes audio files.[8]
OpenSMILE: featurizes audio files with things like mel-frequency cepstral coefficients.[9]
CMU Sphinx: transcribes speech files into text.[10]
pyttsx3: plays back audio files (text-to-speech).[11]
PyCryptodome: encrypts and decrypts audio files.[12]
audioFlux: audio and music analysis and feature extraction.[13]
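As one example of how such tools are driven from code, the sketch below builds the FFmpeg command line for the WAV-to-MP3 transcoding mentioned above. The file names are hypothetical, and the conversion is only attempted when an ffmpeg binary and the input file actually exist:

```python
import os
import shutil
import subprocess


def ffmpeg_transcode_cmd(src, dst, bitrate="192k"):
    """Build an FFmpeg invocation that transcodes src into dst.

    FFmpeg infers the output format (e.g. MP3) from the dst file
    extension; -y overwrites any existing output file and -b:a sets
    the audio bitrate.
    """
    return ["ffmpeg", "-y", "-i", src, "-b:a", bitrate, dst]


cmd = ffmpeg_transcode_cmd("speech.wav", "speech.mp3")
print(cmd)

# Run only when ffmpeg is installed and the (hypothetical) input exists.
if shutil.which("ffmpeg") and os.path.exists("speech.wav"):
    subprocess.run(cmd, check=True)
```

Calling the command-line binary through subprocess, rather than binding to FFmpeg's C libraries, is the simplest way to script batch transcoding jobs.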

Applications


Voice computing applications span many industries including voice assistants, healthcare, e-commerce, finance, supply chain, agriculture, text-to-speech, security, marketing, customer support, recruiting, cloud computing, microphones, speakers, and podcasting. Voice technology is projected to grow at a CAGR of 19-25% through 2025, making it an attractive industry for startups and investors alike.[14]

Laws

In the United States, the states have varying telephone call recording laws. In some states, it is legal to record a conversation with the consent of only one party, in others the consent of all parties is required.

Moreover, COPPA is a significant law protecting minors using the Internet. With an increasing number of minors interacting with voice computing devices (e.g. the Amazon Echo), on October 23, 2017 the Federal Trade Commission relaxed the COPPA rule so that children can issue voice searches and commands.[15][16]

Lastly, the GDPR is a European law that governs the right to be forgotten, among many other provisions, for EU citizens. The GDPR is also clear that companies must outline clear measures to obtain consent if audio recordings are made, and must define the purpose and scope of how these recordings will be used, e.g. for training purposes. The bar for valid consent has been raised under the GDPR: consent must be freely given, specific, informed, and unambiguous; tacit consent is no longer sufficient.[17]

Research conferences


There are many research conferences that relate to voice computing. Some of these include:

Interspeech[18]
Audio/Visual Emotion Challenge (AVEC)[19]
IEEE International Conference on Automatic Face and Gesture Recognition (FG)[20]
Affective Computing and Intelligent Interaction (ACII)[21]

Developer community


Google Assistant has roughly 2,000 actions as of January 2018.[22]

There are over 50,000 Alexa skills worldwide as of September 2018.[23]

In June 2017, Google released AudioSet,[24] a large-scale collection of human-labeled 10-second sound clips drawn from YouTube videos. It contains 1,010,480 clips of human speech, or 2,793.5 hours in total.[25] It was released alongside the IEEE ICASSP 2017 conference.[26]

In November 2017, Mozilla Foundation released the Common Voice Project, a collection of speech files to help contribute to the larger open source machine learning community.[27][28] The voicebank is currently 12GB in size, with more than 500 hours of English-language voice data that have been collected from 112 countries since the project's inception in June 2017.[29] This dataset has already resulted in creative projects like the DeepSpeech model, an open source transcription model.[30]


References

  1. ^ Schwoebel, J. (2018). An Introduction to Voice Computing in Python. Boston; Seattle, Atlanta: NeuroLex Laboratories. https://neurolex.ai/voicebook
  2. ^ Timeline for Speech Recognition. https://medium.com/swlh/the-past-present-and-future-of-speech-recognition-technology-cf13c179aaf
  3. ^ Voicebot.AI. https://voicebot.ai/2018/09/02/amazon-alexa-now-has-50000-skills-worldwide-is-on-20000-devices-used-by-3500-brands/
  4. ^ FFmpeg. https://www.ffmpeg.org/
  5. ^ Audacity. https://www.audacityteam.org/
  6. ^ SoX. http://sox.sourceforge.net/
  7. ^ NLTK. https://www.nltk.org/
  8. ^ LibROSA. https://librosa.github.io/librosa/
  9. ^ OpenSMILE. https://www.audeering.com/technology/opensmile/
  10. ^ "PocketSphinx is a lightweight speech recognition engine, specifically tuned for handheld and mobile devices, though it works equally well on the desktop: Cmusphinx/Pocketsphinx". GitHub. 29 March 2020.
  11. ^ Pyttsx3. https://github.com/nateshmbhat/pyttsx3
  12. ^ Pycryptodome. https://pycryptodome.readthedocs.io/en/latest/
  13. ^ AudioFlux. https://github.com/libAudioFlux/audioFlux/
  14. ^ Businesswire. https://www.businesswire.com/news/home/20180417006122/en/Global-Speech-Voice-Recognition-Market-2018-Forecast
  15. ^ Techcrunch. https://techcrunch.com/2017/10/24/ftc-relaxes-coppa-rule-so-kids-can-issue-voice-searches-and-commands/
  16. ^ "Federal Register :: Request Access".
  17. ^ IAPP. https://iapp.org/news/a/how-do-the-rules-on-audio-recording-change-under-the-gdpr/
  18. ^ Interspeech 2018. http://interspeech2018.org/
  19. ^ AVEC 2018. http://avec2018.org/
  20. ^ 2018 FG. https://fg2018.cse.sc.edu/
  21. ^ ACII 2019. http://acii-conf.org/2019/
  22. ^ Voicebot.ai. https://voicebot.ai/2018/01/24/google-assistant-app-total-reaches-nearly-2400-thats-not-real-number-really-1719/
  23. ^ Voicebot.ai. https://voicebot.ai/2018/09/02/amazon-alexa-now-has-50000-skills-worldwide-is-on-20000-devices-used-by-3500-brands/
  24. ^ Google AudioSet. https://research.google.com/audioset/
  25. ^ Audioset data. https://research.google.com/audioset/dataset/speech.html
  26. ^ Gemmeke, J. F., Ellis, D. P., Freedman, D., Jansen, A., Lawrence, W., Moore, & Ritter, M. (2017, March). Audio set: An ontology and human-labeled dataset for audio events. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on (pp. 776-780). IEEE.
  27. ^ Common Voice Project. https://voice.mozilla.org/
  28. ^ Common Voice Project. https://blog.mozilla.org/blog/2017/11/29/announcing-the-initial-release-of-mozillas-open-source-speech-recognition-model-and-voice-dataset/
  29. ^ Mozilla's large repository of voice data will shape the future of machine learning. https://opensource.com/article/18/4/common-voice
  30. ^ DeepSpeech. https://github.com/mozilla/DeepSpeech