Once you have the Google Assistant running on your project, give these a try:
Customize how your project interacts with the Assistant. For example, trigger the Assistant with the push of a button or blink an LED when playing back audio. You can even show a speech recognition transcript from the Assistant on a display.
Control your project with custom commands. For example, ask your Assistant-enabled mocktail maker to make your favorite drink.
Customize how your project interacts with the Assistant
Trigger the Assistant
With the Google Assistant Service API, you control when to trigger an Assistant request. Modify the sample code to control this (for example, at the push of a button). Triggering an Assistant request is done by sending a request to EmbeddedAssistant.Assist.
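The sketch below shows roughly what that request stream looks like in Python. It assumes you have already created an authorized gRPC channel the way the sample does, that the device IDs come from your registered device model, and that record_audio_chunks is a hypothetical generator yielding 16-bit, 16 kHz PCM captured after your trigger fires (a button press, for example):

```python
from google.assistant.embedded.v1alpha2 import (
    embedded_assistant_pb2,
    embedded_assistant_pb2_grpc,
)


def assist_once(channel, device_model_id, device_id, record_audio_chunks):
    """One Assist round trip: stream config plus audio in, then iterate responses."""
    assistant = embedded_assistant_pb2_grpc.EmbeddedAssistantStub(channel)

    def requests():
        # The first message carries only the configuration, no audio.
        yield embedded_assistant_pb2.AssistRequest(
            config=embedded_assistant_pb2.AssistConfig(
                audio_in_config=embedded_assistant_pb2.AudioInConfig(
                    encoding='LINEAR16', sample_rate_hertz=16000),
                audio_out_config=embedded_assistant_pb2.AudioOutConfig(
                    encoding='LINEAR16', sample_rate_hertz=16000,
                    volume_percentage=100),
                dialog_state_in=embedded_assistant_pb2.DialogStateIn(
                    language_code='en-US'),
                device_config=embedded_assistant_pb2.DeviceConfig(
                    device_id=device_id, device_model_id=device_model_id),
            ))
        # Later messages carry the audio captured after your trigger fired.
        for chunk in record_audio_chunks():
            yield embedded_assistant_pb2.AssistRequest(audio_in=chunk)

    for response in assistant.Assist(requests()):
        # The sample stops recording on END_OF_UTTERANCE and then plays back
        # response.audio_out.audio_data.
        if response.event_type == embedded_assistant_pb2.AssistResponse.END_OF_UTTERANCE:
            print('End of user utterance detected.')
```

In the push-to-talk sample, audio capture only starts once the user presses Enter; replacing that wait with a GPIO button callback is one straightforward way to change the trigger.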
Get the transcript of the user request
The Google Assistant SDK gives you a text transcript of the user request. Use this to provide feedback to the user by rendering the text to a display, or even for something more creative such as performing some local actions on the device.
This transcript is located in the SpeechRecognitionResult.transcript field.
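The push-to-talk sample prints these interim results as they arrive. A minimal sketch of the same idea, assuming responses is the iterator returned by assistant.Assist(...):

```python
# 'responses' is the stream returned by assistant.Assist(requests).
for response in responses:
    if response.speech_results:
        # Interim results arrive incrementally; join them into one line.
        transcript = ' '.join(r.transcript for r in response.speech_results)
        print('Transcript of user request:', transcript)
```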
Get the text of the Assistant's response
The Google Assistant SDK gives you the plain text of the Assistant response. Use this to provide feedback to the user by rendering the text to a display.
This text is located in the DialogStateOut.supplemental_display_text field.
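A minimal sketch, again assuming responses comes from assistant.Assist(...); note that supplemental_display_text is not populated for every query:

```python
for response in responses:
    text = response.dialog_state_out.supplemental_display_text
    if text:
        # Swap the print for whatever drives your display hardware.
        print('Assistant display text:', text)
```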
Get the Assistant's visual response
The Google Assistant SDK supports rendering the Assistant's response to a display for queries that produce visual responses. For example, the query "What is the weather in Mountain View?" renders the current temperature, a pictorial representation of the weather, and suggestions for related queries. If this feature is enabled, the HTML5 data (when present) is located in the ScreenOut.data field.
This can be enabled in the pushtotalk.py and textinput.py samples with the --display command line flag. The data is rendered in a browser window.
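If you want to handle the HTML5 payload yourself rather than rely on the flag, the sketch below saves it to a file and opens it in the default browser. It assumes the request's config asked for screen output (screen_out_config.screen_mode set to PLAYING) and that responses comes from assistant.Assist(...):

```python
import os
import webbrowser

for response in responses:
    if response.screen_out.data:
        # screen_out.data is an HTML5 document describing the visual response.
        path = os.path.abspath('assistant_screen_out.html')
        with open(path, 'wb') as f:
            f.write(response.screen_out.data)
        webbrowser.open('file://' + path, new=1)
```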
Submitting queries via text input
If you have a text interface (for example, a keyboard) attached to the device, set the text_query field in the config field (see AssistConfig). Do not set the audio_in_config field.
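A minimal sketch of such a configuration, mirroring what the text-input sample builds; the device IDs below are placeholders for your registered device model and instance:

```python
from google.assistant.embedded.v1alpha2 import embedded_assistant_pb2

config = embedded_assistant_pb2.AssistConfig(
    # The typed query replaces spoken audio, so audio_in_config stays unset.
    text_query='What time is it?',
    audio_out_config=embedded_assistant_pb2.AudioOutConfig(
        encoding='LINEAR16',
        sample_rate_hertz=16000,
        volume_percentage=0,
    ),
    dialog_state_in=embedded_assistant_pb2.DialogStateIn(language_code='en-US'),
    device_config=embedded_assistant_pb2.DeviceConfig(
        device_id='my-device-id',              # placeholder
        device_model_id='my-device-model-id',  # placeholder
    ),
)
request = embedded_assistant_pb2.AssistRequest(config=config)
```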
The sample code includes the file textinput.py. You can run this file to submit queries via text input.
Submitting queries via audio file input
The sample code includes the file audiofileinput.py. You can run this file to submit a query via an audio file. The sample outputs an audio file with the Assistant's response.
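A condensed sketch of that flow, assuming request.wav holds 16-bit, 16 kHz mono LINEAR16 audio and that assistant and a config with both audio_in_config and audio_out_config set were created as in the earlier sketches:

```python
import wave

from google.assistant.embedded.v1alpha2 import embedded_assistant_pb2


def file_requests(config, filename='request.wav'):
    # First message carries the configuration, the rest carry audio frames.
    yield embedded_assistant_pb2.AssistRequest(config=config)
    with wave.open(filename, 'rb') as wav:
        while True:
            chunk = wav.readframes(1024)
            if not chunk:
                break
            yield embedded_assistant_pb2.AssistRequest(audio_in=chunk)


# Collect the Assistant's spoken reply into another WAV file.
with wave.open('response.wav', 'wb') as out:
    out.setnchannels(1)
    out.setsampwidth(2)      # 16-bit samples
    out.setframerate(16000)  # must match audio_out_config.sample_rate_hertz
    for response in assistant.Assist(file_requests(config)):
        if response.audio_out.audio_data:
            out.writeframes(response.audio_out.audio_data)
```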
Control your project with custom commands
You can add custom commands to the Assistant that allow you to control your project via voice.
Here are two ways to do this:
Extend the Google Assistant Service sample to include Device Actions (see the sketch after this list).
Create an IFTTT recipe for the Assistant. Then configure IFTTT to make a custom HTTP request to an endpoint you choose in response to an Assistant command. To do so, use Maker IFTTT actions.
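For the Device Actions route, each Assist response can carry the action payload as a JSON string in device_action.device_request_json. Below is a small dispatcher sketch; the exact JSON shape depends on the traits your device model registers, and this example assumes the OnOff trait:

```python
import json


def handle_device_action(response):
    """Dispatch a Device Action from an AssistResponse (sketch, OnOff only)."""
    if not response.device_action.device_request_json:
        return
    request = json.loads(response.device_action.device_request_json)
    for top_input in request.get('inputs', []):
        for command in top_input.get('payload', {}).get('commands', []):
            for execution in command.get('execution', []):
                if execution.get('command') == 'action.devices.commands.OnOff':
                    turn_on = execution['params']['on']
                    # Replace with real hardware control, e.g. a GPIO pin.
                    print('OnOff command received, on =', turn_on)
```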