What is Microsoft VALL-E?

Microsoft VALL-E is the latest technological advancement that can power completely natural-sounding TTS. Here’s a detailed breakdown of the tech.

Text to speech technology has been advancing in massive strides, especially in the last few years. Driven by artificial intelligence improvements, today’s TTS can deliver high-quality readouts imitating human speech.

Microsoft’s VALL-E is the latest tech solution that may make text to speech sound downright uncanny. It’s a neural codec language model based on zero-shot machine learning.

If that last sentence sounds like sci-fi technobabble, don’t worry. We’ll break down the complex concepts behind VALL-E in the article below.

Microsoft VALL-E explained

AI models are growing in power at a quick pace. By now, everyone knows about OpenAI’s ChatGPT, which might be the closest we’ve come to AI seeming like an actual person. And you’ve probably seen some AI-powered art from the DALL-E engine.

Besides startups like OpenAI, global companies like Microsoft have been significant players in the AI space.

Microsoft’s researchers have recently been working on advancements to text to speech synthesis. VALL-E represents just that.

The new AI will likely be a game-changer in the TTS landscape because it can generate human-sounding speech based on a tiny audio sample. A three-second acoustic prompt is enough for VALL-E to pick up the specific speaker’s patterns.

After receiving the speaker prompt, the AI can imitate the human’s voice and even simulate their emotional tone. Equally impressive, VALL-E preserves the acoustic environment of the unseen speaker.

Simply put, the VALL-E model excels in speaker similarity. You can hear it in action on GitHub, where Microsoft shared audio examples along with a detailed explanation of the AI.

Of course, such technology has plenty of potential uses, like creating podcasts and audiobooks. The potential may grow further as VALL-E combines with generative models like GPT-3.

But technology like VALL-E could also be used for more nefarious purposes.

Since VALL-E can sound scarily like an actual person, it’s easy to see how malicious actors could utilize the tech for scams or for non-consensual, harmful deepfakes. Such possibilities prompted Microsoft to issue an ethics statement.

In the statement, the company advocates speech editing models that would operate only with the original speaker’s consent.

But controversies around VALL-E’s potential uses are a consideration for the future. For now, there’s a more exciting question on the table:

How does the AI replicate complex speech patterns with only a three-second audio clip as a baseline sample?

Unsurprisingly, the answer is rather complex.

VALL-E had extensive training data, consisting of thousands of hours of English speech. This primed the AI for seamless English language speech simulation. However, VALL-E isn’t your run-of-the-mill TTS system – it’s powered by cutting-edge machine-learning technology.

We’ve already mentioned the tech’s name: zero-shot neural codec language model. Let’s look at what those terms mean in practice.

Understanding zero-shot neural codec language models

Starting with the more straightforward term, “zero-shot” refers to a machine learning approach in which a model handles inputs it was never explicitly trained on. In VALL-E’s case, that means imitating a voice the system has never “heard” before.

More impressively, zero-shot tech lets the machine do this with no additional training or fine-tuning. Essentially, it’s similar to how humans can read an unfamiliar text aloud in a language they already know.
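
As a loose analogy, here’s a toy Python sketch of the zero-shot idea: a single measurable “style” trait (loudness) is read off a three-second prompt from a never-before-seen “speaker” and then applied to newly generated audio, with no retraining in between. The sine-wave “voices” and every function name here are invented for illustration; a real zero-shot TTS system learns far richer speaker representations.

```python
import math

SAMPLE_RATE = 100  # toy rate in samples per second (real audio uses thousands)

def make_prompt(amplitude, seconds=3):
    """Stand-in for a three-second voice recording: a plain sine wave."""
    n = SAMPLE_RATE * seconds
    return [amplitude * math.sin(2 * math.pi * 5 * t / SAMPLE_RATE) for t in range(n)]

def estimate_style(prompt):
    """The zero-shot step: measure the unseen speaker's trait (here, just loudness)."""
    return max(abs(sample) for sample in prompt)

def synthesize(text, style, seconds=1):
    """Generate new 'speech' matching the measured style -- no retraining involved.

    This toy ignores the text content; a real TTS engine would render it."""
    n = SAMPLE_RATE * seconds
    return [style * math.sin(2 * math.pi * 5 * t / SAMPLE_RATE) for t in range(n)]

prompt = make_prompt(amplitude=0.8)   # three seconds from an unseen 'speaker'
style = estimate_style(prompt)        # recovered from the prompt alone: 0.8
audio = synthesize("Hello world", style)
```

The point of the sketch is the workflow, not the math: nothing about this particular speaker was in the "training" of the system, yet their trait is captured from the short prompt and carried into new output.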

Moving on to the complicated part, the “neural codec language model” requires further breakdown.

TTS engines rely on audio codecs to turn compact representations of sound into actual waveforms. A neural codec serves the same purpose but is based on a robust neural network: it compresses raw audio into a short sequence of discrete codes, almost like words, and can decode those codes back into sound. Instead of producing waveforms directly, VALL-E generates these codes from the written text.
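
To make the codec idea concrete, here’s a toy Python sketch: a tiny handmade “codebook” stands in for what a real neural codec would learn from data. It turns a waveform into discrete codes and back, losing a little precision along the way. The codebook values and sample waveform are invented purely for illustration.

```python
# A handmade 'codebook' stands in for what a neural codec learns from data.
CODEBOOK = [-1.0, -0.5, 0.0, 0.5, 1.0]  # each index is a discrete sound token

def encode(waveform):
    """Compress: map each audio sample to the index of the nearest codebook entry."""
    return [min(range(len(CODEBOOK)), key=lambda i: abs(CODEBOOK[i] - s))
            for s in waveform]

def decode(codes):
    """Reconstruct: turn the discrete codes back into an approximate waveform."""
    return [CODEBOOK[i] for i in codes]

wave = [0.9, 0.4, -0.1, -0.6, 0.0]
codes = encode(wave)     # -> [4, 3, 2, 1, 2], a 'sentence' of sound tokens
approx = decode(codes)   # close to the original wave, never exactly equal
```

Because the codes behave like tokens in a sentence, a language model can learn to predict them the same way it predicts words, which is the core trick behind a neural codec language model.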

Of course, this poses an additional question: What’s a neural network?

We’ll explain it here in broad strokes without going into an even deeper dive. A neural network attempts to mimic how the human brain functions. The network consists of artificial neurons called nodes, which are connected and organized into layers.

The complex structure enables so-called deep learning, making the machine more capable of recognizing and adapting to unfamiliar patterns.

The neural codec powers the language model, the other part of this text to speech equation.

The language model draws on a dataset to understand any text input in the context of an actual language. In other words, this is how the machine “makes sense” of text.
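
A toy way to see how a language model “makes sense” of text is to count which word tends to follow which in a small corpus, then predict the likeliest next word. The three-sentence corpus below is invented for illustration; real models like those behind VALL-E or GPT-3 are vastly more capable, but the principle of learning patterns from a dataset is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which -- the simplest possible 'language model'."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def most_likely_next(model, word):
    """Predict the most frequent follower of a word, or None if never seen."""
    return model[word].most_common(1)[0][0] if model[word] else None

model = train_bigrams(["the cat sat", "the cat ran", "the dog sat"])
print(most_likely_next(model, "the"))  # prints "cat" -- it followed "the" twice
```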

In VALL-E’s case, LibriLight, an audio library compiled by Meta (formerly Facebook), served as the foundation of the AI’s language model.

Listen to the cutting-edge TTS technology in action with Speechify

Although VALL-E is still unavailable to the public, you can hear what an advanced text to speech engine sounds like with Speechify. Speechify is a TTS service that can read aloud text from practically any source.

Whether you give it written text, web content, or a scanned page, Speechify will read it instantly. Better yet, the engine features narration voices that sound natural. Unlike the typical robotic TTS engines, Speechify sounds more like a human than a machine.

Additionally, you can tweak how Speechify reads. Choose your preferred language, narrator, and reading speed, and listen to any text precisely how you want.

If all this sounds exciting, you can try Speechify for free today.

FAQ

Can people use VALL-E?

There are many concerns about how VALL-E could be abused. Identity theft is a particularly worrying possibility. For that reason, Microsoft has opted not to make VALL-E publicly available.

What is Microsoft AI?

Microsoft AI isn’t a particular product. Instead, the company’s program serves as an AI development framework. Microsoft AI includes data science solutions, conversational AI, robotics, machine learning, and other advances in the industry.

What is a voice-driven interface?

A voice-driven interface is very much what it sounds like - a user interface you interact with via voice commands. This technology is already commonplace in smart devices – think Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or Google’s Assistant.

What is a robot?

The term “robot” denotes any machine that operates automatically. Such machines are designed as human labor replacements. Despite the typical portrayal in popular media, most robots aren’t humanoid in appearance. In fact, they might not even have a physical form. For example, today’s popular virtual assistants also count as robots.

Cliff Weitzman

Cliff Weitzman is a dyslexia advocate and the CEO and founder of Speechify, the #1 text-to-speech app in the world, totaling over 100,000 5-star reviews and ranking first place in the App Store for the News & Magazines category. In 2017, Weitzman was named to the Forbes 30 under 30 list for his work making the internet more accessible to people with learning disabilities. Cliff Weitzman has been featured in EdSurge, Inc., PC Mag, Entrepreneur, and Mashable, among other leading outlets.