google.assistant.library package.

class google.assistant.library.Assistant(credentials, device_model_id)

Client for the Google Assistant Library.

Provides basic control functionality and lifecycle handling for the Google Assistant. It is best practice to use the Assistant as a context manager:

with Assistant(credentials, device_model_id) as assistant:

This allows the underlying native implementation to properly handle memory management.

Once start() is called, the Assistant generates a stream of Events relaying the various states the Assistant is currently in, for example:

ON_CONVERSATION_TURN_STARTED
ON_END_OF_UTTERANCE
ON_RECOGNIZING_SPEECH_FINISHED:
    {'text': 'what time is it'}
ON_RESPONDING_STARTED:
    {'is_error_response': False}
ON_RESPONDING_FINISHED
ON_CONVERSATION_TURN_FINISHED:
    {'with_follow_on_turn': False}

See EventType for details on all events and their arguments.
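The event stream above can be consumed with a simple loop over start(). The sketch below assumes the library is installed and that OAuth2 credentials have already been obtained; format_event is an illustrative helper (not part of the library) so the rendering logic can be exercised without credentials or hardware:

```python
def format_event(event_type, args):
    """Render one event in the style of the stream shown above.

    Illustrative helper, not part of the library: takes an event type's
    name and the event's args dict, returns a printable summary.
    """
    if args:
        return "%s:\n    %r" % (event_type, args)
    return str(event_type)


def run(credentials, device_model_id):
    """Sketch of the canonical event loop (requires real credentials)."""
    from google.assistant.library import Assistant  # deferred import

    with Assistant(credentials, device_model_id) as assistant:
        for event in assistant.start():
            print(format_event(event.type, event.args))
```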

Glossary:

  • Hotword: The phrase the Assistant listens for when not muted:

    "OK Google" OR "Hey Google"
    
  • Turn: A single user request followed by a response from the Assistant.

  • Conversation: One or more turns culminating in a desired final result from the Assistant:

    "What time is it?" -> "The time is 6:24 PM" OR
    "Set a timer" -> "Okay, for how long?" ->
    "5 minutes" -> "Sure, 5 minutes, starting now!"
    
Parameters:
  • credentials (google.oauth2.credentials.Credentials) – The user’s Google OAuth2 credentials.
  • device_model_id (str) – The device_model_id that was registered for your project with Google. This must not be an empty string.
Raises:
  • ValueError – If device_model_id is None or an empty string.

device_id

Returns the device ID generated by the Assistant.

This value identifies your device to the server when using services such as Google Device Actions. This property is only filled AFTER start() has been called.

Returns: The device ID once start() has been called; an empty string otherwise.
Return type: str
send_text_query(query)

Sends query to the Assistant as if it had been spoken by the user.

This will behave the same as a user speaking the hotword and making a query OR speaking the answer to a follow-on query.

Parameters: query (str) – The text query to send to the Assistant.
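For example, send_text_query can drive a keyboard-only interaction. The loop below is a sketch written against only the send_text_query method documented here, so any stand-in object with that method can exercise it:

```python
def text_query_loop(assistant, lines):
    """Send each non-empty line of text to the Assistant as if spoken.

    `assistant` only needs a send_text_query(str) method, so a stub can
    stand in for a real Assistant when trying this out.
    """
    sent = []
    for line in lines:
        query = line.strip()
        if query:
            assistant.send_text_query(query)
            sent.append(query)
    return sent
```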
set_mic_mute(is_muted)

Stops or resumes the Assistant's listening for the hotword.

Disabling hotword detection provides functionality similar to the privacy button on the back of Google Home.

This method is a no-op if the Assistant has not yet been started.

Parameters: is_muted (bool) – True stops the Assistant from listening and False allows it to start again.
start()

Starts the Assistant, which includes listening for a hotword.

Once start() is called, the Assistant will begin processing data from the ‘default’ ALSA audio source, listening for the hotword. This will also start other services provided by the Assistant, such as timers/alarms. This method can only be called once. Once called, the Assistant will continue to run until __exit__ is called.

Returns: A queue of events that notify of changes to the Assistant state.
Return type: google.assistant.event.IterableEventQueue
start_conversation()

Manually starts a new conversation with the Assistant.

Starts both recording the user’s speech and sending it to Google, similar to what happens when the Assistant hears the hotword.

This method is a no-op if the Assistant is not started or has been muted.

stop_conversation()

Stops any active conversation with the Assistant.

The Assistant could be listening to the user’s query OR responding. If there is no active conversation, this is a no-op.
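The no-op rules above (start_conversation() requires a started, unmuted Assistant) can be captured in a small predicate, which is handy for a push-to-talk button handler. Both helpers below are illustrative and not part of the library:

```python
def can_start_conversation(started, muted):
    """True when start_conversation() would do something, per the docs:
    it is a no-op if the Assistant is not started or has been muted."""
    return started and not muted


def on_button_press(assistant, started, muted):
    """Push-to-talk handler sketch: manually start a conversation when
    the documented preconditions hold, otherwise do nothing."""
    if can_start_conversation(started, muted):
        assistant.start_conversation()
        return True
    return False
```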

class google.assistant.library.event.AlertEvent(event_type, args, **_)

Extends Event to add parsing of ‘alert_type’.

class google.assistant.library.event.AlertType

Alert types.

Used with ON_ALERT_STARTED and ON_ALERT_FINISHED events.

ALARM = 0

An alert set for an absolute time, such as ‘3 A.M. on Monday’.

TIMER = 1

An alert set for a relative time, such as ‘30 seconds from now’.

class google.assistant.library.event.DeviceActionEvent(event_type, args, **kwargs)

Extends Event to add ‘actions’ property.

actions

A generator of commands to execute for the current device.

class google.assistant.library.event.Event(event_type, args, **_)

An event generated by the Assistant.

type

EventType – The type of event that was generated.

args

dict – Argument key/value pairs associated with this event.

static New(event_type, args, **kwargs)

Create new event using a specialized Event class when needed.

Parameters:
  • event_type (int) – A numeric id corresponding to an event in google.assistant.event.EventType.
  • args (dict) – Argument key/value pairs associated with this event.
  • kwargs (dict) – Optional argument key/value pairs specific to a specialization of the Event class for an EventType.
class google.assistant.library.event.EventType

Event types.

ON_ALERT_FINISHED = 11

Indicates the alert of alert_type has finished sounding.

Parameters: alert_type (AlertType) – The id of the Enum representing the type of alert which just finished.
ON_ALERT_STARTED = 10

Indicates that an alert has started sounding.

This alert will continue until ON_ALERT_FINISHED with the same alert_type is received. Only one alert should be active at any given time.

Parameters: alert_type (AlertType) – The id of the Enum representing the currently sounding type of alert.
ON_ASSISTANT_ERROR = 12

Indicates that the Assistant library has encountered an error.

Parameters: is_fatal (bool) – If True then the Assistant will be unable to recover and should be restarted.
ON_CONVERSATION_TURN_FINISHED = 9

The Assistant finished the current turn.

This includes both processing a user’s query and speaking the full response, if any.

Parameters: with_follow_on_turn (bool) – If True, the Assistant is expecting a follow-up interaction from the user. The microphone will be re-opened to allow the user to answer a follow-up question.
ON_CONVERSATION_TURN_STARTED = 1

Indicates a new turn has started.

The Assistant is currently listening, waiting for a user query. This could be the result of hearing the hotword or start_conversation() being called on the Assistant.

ON_CONVERSATION_TURN_TIMEOUT = 2

The Assistant timed out waiting for a discernible query.

This could be caused by a mistrigger of the hotword, or because the Assistant could not understand what the user said.

ON_DEVICE_ACTION = 14

Indicates that a Device Action request was dispatched to the device.

This is dispatched if any Device Grammar is triggered for the traits supported by the device. This event type has a special ‘actions’ property which returns an iterator of Device Action commands and the params associated with them (if applicable).

Parameters: dict – The decoded JSON payload of a Device Action request.
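A typical handler iterates the event's ‘actions’ property, which yields (command, params) pairs, and dispatches on the command name. In the sketch below, the OnOff command string follows the action.devices.commands.* naming convention used by Device Actions, but the mapping to local behavior (and the returned strings) is a hypothetical example:

```python
def dispatch_device_action(command, params):
    """Map one Device Action command to a local behavior (illustrative).

    The returned strings are stand-ins for real device control calls
    (e.g. toggling a GPIO pin).
    """
    if command == "action.devices.commands.OnOff":
        return "turn_on" if params.get("on") else "turn_off"
    return None  # unrecognized command: ignore


def handle_device_action_event(event):
    """Sketch of handling an ON_DEVICE_ACTION event via its 'actions'
    property, which yields (command, params) pairs."""
    results = []
    for command, params in event.actions:
        outcome = dispatch_device_action(command, params or {})
        if outcome is not None:
            results.append(outcome)
    return results
```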
ON_END_OF_UTTERANCE = 3

The Assistant has stopped listening to a user query.

The Assistant may not have finished figuring out what the user has said but it has stopped listening for more audio data.

ON_MEDIA_STATE_ERROR = 20

Indicates that an error has occurred playing a track.

The built-in media player will attempt to skip to the next track or return to ON_MEDIA_STATE_IDLE if there is nothing left to play.

ON_MEDIA_STATE_IDLE = 16

Indicates that there is nothing playing and nothing queued to play.

This event is broadcast from the Google Assistant Library’s built-in media player for news/podcasts on start-up and whenever the player has gone idle, either because a user stopped the media or because it was paused and the stream timed out.

ON_MEDIA_TRACK_LOAD = 17

Indicates a track is loading but has not started playing.

This may be dispatched multiple times if new metadata is loaded asynchronously. This is typically followed by the event ON_MEDIA_TRACK_PLAY.

Parameters:
  • metadata (dict) –

    Metadata for the loaded track. Not all fields will be filled by this time – if a field is unknown it will not be included. Metadata fields include:

    album (str): The name of the album the track belongs to.
    album_art (str): A URL for the album art.
    artist (str): The artist who created this track.
    duration_ms (double): The length of this track in milliseconds.
    title (str): The title of the track.
  • track_type (MediaTrackType) – The type of track loaded.
ON_MEDIA_TRACK_PLAY = 18

Indicates that a track is currently outputting audio.

This will only trigger when the player transitions from one state to another, such as from ON_MEDIA_TRACK_LOAD or ON_MEDIA_TRACK_STOP.

Parameters:
  • metadata (dict) –

    Metadata for the playing track. If a field is unknown it will not be included. Metadata fields include:

    album (str): The name of the album the track belongs to.
    album_art (str): A URL for the album art.
    artist (str): The artist who created this track.
    duration_ms (double): The length of this track in milliseconds.
    title (str): The title of the track.
  • position_ms (double) – The current position in a playing track in milliseconds since the beginning. If “metadata.duration_ms” is unknown (set to 0) this field will not be set.
  • track_type (MediaTrackType) – The type of track playing.
ON_MEDIA_TRACK_STOP = 19

Indicates that a previously playing track is stopped.

This is typically a result of the user pausing; the track can return to ON_MEDIA_TRACK_PLAY if it is resumed by the user.

Parameters:
  • metadata (dict) –

    Metadata for the stopped track. If a field is unknown it will not be included. Metadata fields include:

    album (str): The name of the album the track belongs to.
    album_art (str): A URL for the album art.
    artist (str): The artist who created this track.
    duration_ms (double): The length of this track in milliseconds.
    title (str): The title of the track.
  • position_ms (double) – The current position in a stopped track in milliseconds since the beginning. If “metadata.duration_ms” is unknown (set to 0) this field will not be set.
  • track_type (MediaTrackType) – The type of track stopped.
ON_MUTED_CHANGED = 13

Indicates whether or not the Assistant is currently listening for its hotword.

start() will always generate an ON_MUTED_CHANGED to report the initial value.

Parameters: is_muted (bool) – If True then the Assistant is not currently listening for its hotword and will not respond to user queries.
ON_NO_RESPONSE = 8

The Assistant successfully completed its turn but has nothing to say.

ON_RECOGNIZING_SPEECH_FINISHED = 5

The Assistant has determined the final recognized speech.

Parameters: text (str) – The final text interpretation of a user’s query.
ON_RENDER_RESPONSE = 15

Indicates that the Assistant has text output to render for a response.

Parameters:
  • type (RenderResponseType) – The type of response to render.
  • text (str) – The string to render for RenderResponseType.TEXT.
ON_RESPONDING_FINISHED = 7

The Assistant has finished responding by voice.

ON_RESPONDING_STARTED = 6

The Assistant is starting to respond by voice.

The Assistant will be responding until ON_RESPONDING_FINISHED is received.

Parameters: is_error_response (bool) – If True, a local error TTS is being played; otherwise the Assistant is responding with a server response.
ON_START_FINISHED = 0

The Assistant library has finished starting.

class google.assistant.library.event.IterableEventQueue(timeout=3600)

Extends queue.Queue to add an __iter__ interface.

offer(event)

Offer an event to put in the queue.

If the queue is currently full, the event will be logged but not added.

Parameters: event (Event) – The event to try to add to the queue.
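Since IterableEventQueue extends queue.Queue, offer()'s drop-instead-of-block behavior can be mimicked with the standard library alone. This stand-alone sketch shows the semantics (the real method also logs the dropped event):

```python
import queue


def offer(q, event):
    """Put event on q if there is room; return False and drop it if the
    queue is full, rather than blocking as queue.Queue.put() would."""
    try:
        q.put_nowait(event)
        return True
    except queue.Full:
        return False
```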
class google.assistant.library.event.MediaStateChangeEvent(event_type, args, **_)

Extends Event to add parsing of ‘state’.

class google.assistant.library.event.MediaTrackType

Types of track for the ON_MEDIA_TRACK_X events.

Used with ON_MEDIA_TRACK_LOAD, ON_MEDIA_TRACK_PLAY, and ON_MEDIA_TRACK_STOP.

CONTENT = 2

The actual content for an item (news/podcast).

TTS = 1

A TTS introduction or interstitial track related to an item.

class google.assistant.library.event.RenderResponseEvent(event_type, args, **_)

Extends Event to add parsing of ‘response_type’.

class google.assistant.library.event.RenderResponseType

Types of content to render.

Used with ON_RENDER_RESPONSE.