Offline voice assistant
Every minute, the Offline voice assistant records a five-second audio clip, converts the clip to text locally on the edge device, and directs the host machine to execute the spoken command and speak the output.
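The same end-to-end flow can be approximated on a single machine with common command-line tools. The following is a minimal sketch, assuming arecord (ALSA), pocketsphinx_continuous, and espeak are installed; it is only a stand-in for the containerized services described later in this topic.

```bash
# Sketch of the record -> transcribe -> execute -> speak loop.
# The real example runs each step as a separate containerized service.
while true; do
  # Record a five-second, 16 kHz mono clip from the default capture device.
  arecord -d 5 -f S16_LE -r 16000 -c 1 clip.wav

  # Transcribe the clip offline with PocketSphinx (its logging goes to stderr).
  text=$(pocketsphinx_continuous -infile clip.wav 2>/dev/null)

  # Try to run the transcribed text as a command and speak its output.
  output=$(eval "$text" 2>&1)
  espeak "$output"

  sleep 55   # roughly once per minute
done
```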
Before you begin
Ensure that your system meets these requirements:
- You must register and unregister your edge device by performing the steps in Preparing an edge device.
- A USB sound card and microphone are installed on your Raspberry Pi. A quick hardware check is shown after this list.
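If you are unsure whether the USB sound card and microphone are detected, a quick check with standard ALSA tools (a sketch, not part of the documented steps) looks like this:

```bash
# List ALSA capture (microphone) and playback (speaker) devices.
arecord -l
aplay -l

# Record a short test clip from card 1, device 0 (adjust to match the arecord -l output),
# then play it back to confirm that both capture and playback work.
arecord -D plughw:1,0 -d 3 -f S16_LE -r 16000 -c 1 test.wav
aplay test.wav
```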
Registering your edge device
To run the processtext service example on your edge node, you must register your edge node with the IBM/pattern-ibm.processtext deployment pattern.
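The readme walks through the exact commands. As a rough sketch, assuming the hzn CLI is installed and HZN_ORG_ID and HZN_EXCHANGE_USER_AUTH are already set for your Exchange, registering with the deployment pattern looks like this:

```bash
# Register this edge node with the offline voice assistant deployment pattern.
hzn register -p IBM/pattern-ibm.processtext

# Confirm that agreements are formed and the services are being deployed.
hzn agreement list

# When you are finished, unregister the node (stops and removes the services).
hzn unregister -f
```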
Perform the steps in the Using the Offline Voice Assistant Example Edge Service with Deployment Pattern section of the readme file.
Additional information
The processtext example source code is also available in the Horizon GitHub repository as an example for Open Horizon development. This source includes code for all of the services that run on the edge nodes for this example.
These Open Horizon example services include:
- The voice2audio service records the five-second audio clip and publishes it to the MQTT broker.
- The audio2text service converts the audio clip to text offline by using PocketSphinx.
- The processtext service parses the text and attempts to execute the recorded command.
- The text2speech service plays the output of the command through a speaker.
- The mqtt_broker service manages all inter-container communication, as illustrated in the sketch after this list.
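All of these services exchange messages through the shared MQTT broker. As an illustration only (the topic names here are hypothetical, not the ones used by the example services), the same publish/subscribe pattern with the Mosquitto command-line clients looks like this:

```bash
# In one terminal: subscribe to a topic, as audio2text might for incoming clips.
mosquitto_sub -h localhost -t offline-va/audio

# In another terminal: publish a recorded clip, as voice2audio might after recording.
mosquitto_pub -h localhost -t offline-va/audio -f clip.wav
```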
What to do next
For instructions for building and publishing your own version of the offline voice assistant, see the processtext directory steps in the Open Horizon examples repository.
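At a high level (the file names and paths here are assumptions; follow the repository's readme for the exact steps, including service signing), publishing your own build with the hzn CLI looks like this:

```bash
# From your copy of the example's horizon project (paths are illustrative).
export HZN_ORG_ID=<your-org>
export HZN_EXCHANGE_USER_AUTH=<user>:<password>

# Publish the service definition, then the deployment pattern that references it.
hzn exchange service publish -f horizon/service.definition.json
hzn exchange pattern publish -f horizon/pattern.json

# Verify that the pattern is now listed in the Exchange.
hzn exchange pattern list
```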