Voice is the new computer interface for mobile and ubiquitous computing.

Virtual assistants are like browsers to the voice web: they are fast becoming primary gateways between the internet and our private lives.

A natural-language programming interface for virtual assistants lets end users automate personal long-tail tasks easily.

Stanford Open Virtual Assistant Lab (OVAL) is dedicated to advancing and democratizing virtual assistant technology, while protecting public interest in privacy, open knowledge access, and open competition.

Research Agenda

  1. Virtual Assistant 2.0: We create tools that enable millions of voice-interface developers to build effective conversational agents at reasonable effort and development cost. In particular, we reduce reliance on expensive, error-prone annotations through training-data synthesis.
  2. Natural-language programming: We create tools that enable consumers and professionals to automate their long-tail digital tasks using natural language.
  3. Understanding natural-language commands: Pretrained natural language models have no inherent semantics. We explore how to add meaning to pretrained models for the vocabulary used in human-computer interfaces.
  4. Privacy with virtual assistants: We study how users can keep and share personal data easily and privately using federated virtual assistants.

OVAL is supported by the National Science Foundation under Grant No. 1900638, and by the Alfred P. Sloan Foundation under Grant No. G-2020-13938.

The Open Virtual Assistant Initiative

We launched an initiative in July to create an open-source virtual assistant infrastructure that supports experimentation in research and provides a basis for collaboration in industry. It is made possible by a grant from the Alfred P. Sloan Foundation, with the goals of protecting open access to knowledge and preserving privacy.

We will be making beta releases throughout this year, with the goal of delivering, within one year:

  1. Genie, an open-source, well-documented toolkit supporting Virtual Assistant 2.0 technology. This toolkit supports natural-language programming by synthesizing training data from high-level specifications, avoiding the need for massive manual annotation of training data.
  2. Thingpedia, a non-proprietary skill repository open to all assistants. It collects natural-language interfaces to the web and the Internet of Things. Like Wikipedia, Thingpedia is open and crowdsourced, and can potentially grow larger than any proprietary database.
  3. An open-source, privacy-preserving assistant with the top 10 most popular skills. The goal is to eventually create an alternative to Alexa and Google Assistant, much as Unix/Linux is an alternative to Windows and Firefox is an alternative to Chrome. Here is a stable version of our research prototype, Almond.
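To illustrate the kind of training-data synthesis Genie performs, here is a minimal, self-contained sketch: a hypothetical skill specification pairs utterance fragments with formal code, and sentence templates plus sample parameter values are expanded combinatorially into (utterance, program) pairs. The skill names, templates, and program syntax here are invented for illustration, not the actual Genie or ThingTalk interfaces.

```python
import itertools

# Hypothetical skill specification: each skill lists natural-language
# phrases (utterance fragments) and the formal code they correspond to.
SPEC = {
    "weather": {
        "phrases": ["the weather in {city}", "the forecast for {city}"],
        "slot": "city",
        "code": "@weather.current(location={city})",
    },
    "news": {
        "phrases": ["news about {topic}", "headlines on {topic}"],
        "slot": "topic",
        "code": "@news.search(query={topic})",
    },
}

# Sentence templates that combine fragments into full commands.
TEMPLATES = ["show me {fragment}", "get {fragment}", "what is {fragment}"]

# Sample parameter values used to instantiate placeholders.
VALUES = {"city": ["Paris", "Tokyo"], "topic": ["science", "sports"]}

def synthesize():
    """Expand templates x phrases x values into (utterance, program) pairs."""
    dataset = []
    for skill in SPEC.values():
        slot = skill["slot"]
        for template, phrase in itertools.product(TEMPLATES, skill["phrases"]):
            for value in VALUES[slot]:
                # First splice the fragment into the template, then fill
                # its parameter slot with a concrete value.
                utterance = template.format(fragment=phrase).format(**{slot: value})
                program = skill["code"].format(**{slot: repr(value)})
                dataset.append((utterance, program))
    return dataset

pairs = synthesize()  # 2 skills x 3 templates x 2 phrases x 2 values = 24 pairs
```

Even this toy expansion yields 24 annotated examples from a two-skill specification; a real specification with richer templates and value lists scales the same way, which is what removes the need for manual annotation.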

Please see our roadmap. This endeavor needs support and contributions from funding agencies, companies, researchers, developers, and individuals. Please contact us.

We are also looking for a couple of talented engineers to join the team. See the job announcement.

Our current partners include:

  • Home Assistant provides a local gateway to over 1000 different IoT devices. Almond is bundled as a voice assistant interface.
  • Smartnews, a news aggregator, is collaborating in creating a news skill.
  • Yelp is providing access to APIs to answer questions about restaurants.

Presentations & Interviews

Our Work

Virtual Assistant 2.0 Technology

A paradigm shift is necessary to advance and democratize virtual assistants. Existing assistants built with Virtual Assistant 1.0 technology have limited extensibility and are powered by multibillion-dollar manual annotation efforts. The Virtual Assistant 2.0 technology we developed is grounded in formal programming-language semantics and is powered by open-source toolsets and knowledge bases intended to help millions of developers create neural conversational agents for their applications cost-effectively. This technology has been shown to outperform commercial systems on long-tail questions. Please follow our technology development here.

Current Research Topics

We are actively pushing the state of the art in conversational agents. Topics include: improving accuracy, scope, and scalability; recovering from errors; low-cost localization; social conversations; better development tools; and multimodal assistants. Read more here.

Privacy Protection

Our virtual assistant protects privacy by letting users keep private data on their own devices. Code is separated from data: all skill code is stored in the nonproprietary Thingpedia skill repository, which is open to all assistants and contains no personal information. While natural-language commands are translated in the cloud, the translated commands are executed locally on the client side to preserve privacy. Thus, all personal account credentials and data are stored only on users' devices and are not accessible to any third party. Local natural-language processing becomes possible once our neural network is trained. Similarly, data from home IoT devices never leaves the house.
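The cloud-translate / local-execute split described above can be sketched as follows. The function names, program format, and skill here are hypothetical, not the actual Almond or Thingpedia interfaces; the point is only that the cloud sees the utterance and nothing else, while credentials stay on the device that executes the program.

```python
# --- runs in the cloud: sees only the utterance, never credentials ---
def translate(utterance: str) -> dict:
    """Map a natural-language command to a structured program (toy parser)."""
    if "thermostat" in utterance:
        return {"skill": "thermostat", "action": "set", "arg": 68}
    return {"skill": "unknown", "action": "none", "arg": None}

# --- runs on the user's device: holds credentials, executes programs ---
LOCAL_CREDENTIALS = {"thermostat": "secret-token-123"}  # never leaves device

def execute_locally(program: dict) -> str:
    token = LOCAL_CREDENTIALS.get(program["skill"])
    if token is None:
        return "no such skill configured"
    # A real assistant would call the device API using the local token;
    # here we just report the command that would run.
    return f"{program['skill']}.{program['action']}({program['arg']})"

program = translate("set the thermostat to 68")  # only text goes to the cloud
result = execute_locally(program)                # secrets stay local
```

Because `translate` receives only text and `LOCAL_CREDENTIALS` is read only inside `execute_locally`, no account secret ever crosses the network boundary in this design.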

Our assistant architecture also helps users share data privately. It is federated, just like email: users can choose different virtual assistants, which interoperate by sharing data over a standard secure communication protocol.

Software Releases

Our approach is to continuously release working prototypes, under the name of Almond. The current stable version of Almond is available at almond.stanford.edu. All released software and datasets can be downloaded from the Releases page.

Almond 2.0, to be released in the fall, is the first prototype that uses our new dialogue technology. In preparation for the 2.0 release, we are releasing Almond 1.99, which has been tested mainly on the Spotify skill for premium accounts. We are actively improving the model by adding better entity recognition. Check out the Release planning page for more details of the schedule.

Upcoming Events & News


See here for the paper abstracts.

Senior Members

PhD Students

Master and Undergraduate Students

PhD Alumni

Former Students and Collaborators

We thank them for their valuable contributions.