This page contains code snippets and descriptions of the features available for customizing a Web Receiver app.
Creating a customized Web Receiver app
The main structure of a customized Web Receiver app includes required elements along with optional features to customize the app for your particular use case.
- A cast-media-player element that represents the built-in player UI provided with Web Receiver.
- Custom CSS-like styling for the cast-media-player element to style various UI elements such as the background-image, splash-image, and font-family.
- A script element to load the Web Receiver framework.
- JavaScript code to customize the Web Receiver app by intercepting messages and handling events.
- Queue for autoplay.
- Options to configure playback.
- Options to set the Web Receiver context.
- Options to set commands which are supported by the Web Receiver app.
- A JavaScript call to start the Web Receiver application.
See the CastReceiver GitHub sample for a Web Receiver application that illustrates this full structure.
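As a rough sketch only (see the CastReceiver sample above for the authoritative version), a minimal page with this structure might look like the following; the SDK script URL, asset paths, and styling values here are illustrative assumptions:
<html>
<head>
  <!-- Script element to load the Web Receiver framework (CAF v3 URL assumed). -->
  <script src="//www.gstatic.com/cast/sdk/libs/caf_receiver/v3/cast_receiver_framework.js"></script>
  <style>
    /* Optional CSS-like styling of the built-in player UI (illustrative values). */
    cast-media-player {
      --background-image: url('background.png');
      --splash-image: url('splash.png');
      --font-family: 'Arial';
    }
  </style>
</head>
<body>
  <!-- Built-in player UI provided with the Web Receiver. -->
  <cast-media-player></cast-media-player>
  <script>
    // JavaScript call to start the Web Receiver application.
    cast.framework.CastReceiverContext.getInstance().start();
  </script>
</body>
</html>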
Application configuration and options
The CastReceiverContext is the outermost class exposed to the developer, and it manages loading of underlying libraries and handles the initialization of the Web Receiver SDK.
If the Web Receiver API detects that a sender is disconnected, it raises the SENDER_DISCONNECTED event. If the Web Receiver has not been able to communicate with the sender for maxInactivity seconds, it also raises the SENDER_DISCONNECTED event. During development, it is a good idea to set maxInactivity to a high value so that the Web Receiver app does not close when you debug the app with the Chrome Remote Debugger:
const context = cast.framework.CastReceiverContext.getInstance();
const options = new cast.framework.CastReceiverOptions();
options.maxInactivity = 3600; //Development only
context.start(options);
However, for a published Web Receiver application it is better to not set maxInactivity and instead rely on the default value. Note that the Web Receiver options are set only once in the application.
The other configuration is the cast.framework.PlaybackConfig. This can be set as follows:
const playbackConfig = new cast.framework.PlaybackConfig();
// Send credentials (such as cookies) with every manifest request.
playbackConfig.manifestRequestHandler = requestInfo => {
  requestInfo.withCredentials = true;
};
context.start({playbackConfig: playbackConfig});
This configuration affects each content playback and essentially provides override behavior. For a list of behaviors that developers can override, see the definition of cast.framework.PlaybackConfig. To change the configuration between content loads, use the PlayerManager to get the current playbackConfig, modify or add an override, and reset the playbackConfig like this:
const playerManager =
cast.framework.CastReceiverContext.getInstance().getPlayerManager();
const playbackConfig = (Object.assign(
new cast.framework.PlaybackConfig(), playerManager.getPlaybackConfig()));
playbackConfig.autoResumeNumberOfSegments = 1;
playerManager.setPlaybackConfig(playbackConfig);
Note that if PlaybackConfig was not overridden, getPlaybackConfig() returns a null object. Any property on PlaybackConfig that is undefined uses the default value.
Event listener
The Web Receiver SDK allows your Web Receiver app to handle player events. The event listener takes a cast.framework.events.EventType parameter (or an array of these parameters) that specifies the event(s) that should trigger the listener. Preconfigured arrays of cast.framework.events.EventType that are useful for debugging can be found in cast.framework.events.category.
The event parameter provides additional information about the event.
For example, if you want to know when a mediaStatus change is being broadcast, you can use the following logic to handle the event:
const playerManager =
cast.framework.CastReceiverContext.getInstance().getPlayerManager();
playerManager.addEventListener(
cast.framework.events.EventType.MEDIA_STATUS, (event) => {
// Write your own event handling code, for example
// using the event.mediaStatus value
});
Note: The Web Receiver framework automatically tracks when a sender connects or disconnects from it and doesn't require an explicit SENDER_DISCONNECTED event listener in your own Web Receiver logic (as in Web Receiver v2).
Message interception
The Web Receiver SDK allows your Web Receiver app to intercept messages and execute custom code on those messages. The message interceptor takes a cast.framework.messages.MessageType parameter that specifies what type of message should be intercepted.
Note: The interceptor should return the modified request or a Promise that resolves with the modified request value. Returning null will prevent calling the default message handler.
For example, if you want to change the load request data, you can use the following logic to intercept and modify it.
Tip: Also see Loading media using contentId, contentUrl and entity.
const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();
playerManager.setMessageInterceptor(
cast.framework.messages.MessageType.LOAD, loadRequestData => {
const error = new cast.framework.messages.ErrorData(
cast.framework.messages.ErrorType.LOAD_FAILED);
if (!loadRequestData.media) {
error.reason = cast.framework.messages.ErrorReason.INVALID_PARAM;
return error;
}
if (!loadRequestData.media.entity) {
return loadRequestData;
}
return thirdparty.fetchAssetAndAuth(loadRequestData.media.entity,
loadRequestData.credentials)
.then(asset => {
if (!asset) {
throw cast.framework.messages.ErrorReason.INVALID_REQUEST;
}
loadRequestData.media.contentUrl = asset.url;
loadRequestData.media.metadata = asset.metadata;
loadRequestData.media.tracks = asset.tracks;
return loadRequestData;
}).catch(reason => {
error.reason = reason; // cast.framework.messages.ErrorReason
return error;
});
});
context.start();
Error handling
When errors happen in a message interceptor, your Web Receiver app should return an appropriate cast.framework.messages.ErrorType and cast.framework.messages.ErrorReason.
playerManager.setMessageInterceptor(
cast.framework.messages.MessageType.LOAD, loadRequestData => {
const error = new cast.framework.messages.ErrorData(
cast.framework.messages.ErrorType.LOAD_CANCELLED);
if (!loadRequestData.media) {
error.reason = cast.framework.messages.ErrorReason.INVALID_PARAM;
return error;
}
...
return fetchAssetAndAuth(loadRequestData.media.entity,
loadRequestData.credentials)
.then(asset => {
...
return loadRequestData;
}).catch(reason => {
error.reason = reason; // cast.framework.messages.ErrorReason
return error;
});
});
Message interception vs event listener
Some key differences between message interception and event listeners are as follows:
- An event listener does not allow you to modify the request data.
- An event listener is best used to trigger analytics or a custom function.
Note: An event listener can also be used to listen to an umbrella of events using the CORE, DEBUG, or FINE enum.
playerManager.addEventListener(cast.framework.events.category.CORE,
event => {
console.log(event);
});
- Message interception allows you to listen to a message, intercept it, and modify the request data itself.
- Message interception is best used to handle custom logic with regards to request data.
Tip: Loading media using contentId, contentUrl and entity
We suggest you use entity in your implementation for both your sender and receiver apps. The entity property is a deep link URL that can be either a playlist or a specific media content.
The contentUrl is designed for a playable URL.
The contentId has been deprecated. It is typically the URL of the media, and can be used as a real ID or key parameter for custom lookup. We suggest you use entity to store the real ID or key parameters, and use contentUrl for the URL of the media.
playerManager.setMessageInterceptor(
cast.framework.messages.MessageType.LOAD, loadRequestData => {
...
if (!loadRequestData.media.entity) {
// Copy the value from contentId for legacy reasons if needed
loadRequestData.media.entity = loadRequestData.media.contentId;
}
return thirdparty.fetchAssetAndAuth(loadRequestData.media.entity,
loadRequestData.credentials)
.then(asset => {
loadRequestData.media.contentUrl = asset.url;
...
return loadRequestData;
});
});
Device capabilities
The getDeviceCapabilities method provides device information on the connected Cast device and the video or audio device attached to it. The getDeviceCapabilities method provides support information for Google Assistant, Bluetooth, and the connected display and audio devices.
This method returns an object which you can query by passing in one of the specified enums to get the device capability for that enum. The enums are defined in cast.framework.system.DeviceCapabilities.
This example checks if the Web Receiver device is capable of playing HDR and Dolby Vision (DV) with the IS_HDR_SUPPORTED and IS_DV_SUPPORTED keys, respectively.
const context = cast.framework.CastReceiverContext.getInstance();
context.addEventListener(cast.framework.system.EventType.READY, () => {
const deviceCapabilities = context.getDeviceCapabilities();
if (deviceCapabilities &&
deviceCapabilities[cast.framework.system.DeviceCapabilities.IS_HDR_SUPPORTED]) {
// Write your own event handling code, for example
// using the deviceCapabilities[cast.framework.system.DeviceCapabilities.IS_HDR_SUPPORTED] value
}
if (deviceCapabilities &&
deviceCapabilities[cast.framework.system.DeviceCapabilities.IS_DV_SUPPORTED]) {
// Write your own event handling code, for example
// using the deviceCapabilities[cast.framework.system.DeviceCapabilities.IS_DV_SUPPORTED] value
}
});
context.start();
Handling user interaction
A user can interact with your Web Receiver application through sender applications (Web, Android, and iOS), voice commands on Assistant-enabled devices, touch controls on smart displays, and remote controls on Android TV devices. The Cast SDK provides various APIs to allow the Web Receiver app to handle these interactions, update the application UI through user action states, and optionally send the changes to update any backend services.
Supported media commands
The UI control states are driven by the MediaStatus.supportedMediaCommands for iOS and Android sender expanded controllers, receiver and remote control apps running on touch devices, and receiver apps on Android TV devices. When a particular bitwise Command is enabled in the property, the buttons that are related to that action are enabled. If the value is not set, then the button is disabled. These values can be changed on the Web Receiver by:
- Using PlayerManager.setSupportedMediaCommands to set the specific Commands
- Adding a new command using addSupportedMediaCommands
- Removing an existing command using removeSupportedMediaCommands
playerManager.setSupportedMediaCommands(cast.framework.messages.Command.SEEK |
cast.framework.messages.Command.PAUSE);
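The supported commands can also be adjusted incrementally with addSupportedMediaCommands and removeSupportedMediaCommands. A brief sketch; the commands chosen here are arbitrary, and the second argument requests a MediaStatus broadcast, matching the usage shown later in Enabling stream transfer:
// Add queue navigation commands to the current set and broadcast the
// updated MediaStatus to connected senders.
playerManager.addSupportedMediaCommands(
    cast.framework.messages.Command.QUEUE_NEXT |
    cast.framework.messages.Command.QUEUE_PREV, true);

// Later, withdraw support for seeking.
playerManager.removeSupportedMediaCommands(
    cast.framework.messages.Command.SEEK, true);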
When the receiver prepares the updated MediaStatus, it will include the changes in the supportedMediaCommands property. When the status is broadcast, the connected sender apps will update the buttons in their UI accordingly.
For more information about supported media commands and touch devices, see the Accessing UI controls guide.
Managing user action states
When users interact with the UI or send voice commands, they can control the playback of the content and properties related to the item playing. Requests that control the playback are handled automatically by the SDK. Requests that modify properties for the current item playing, such as a LIKE command, require that the receiver application handle them. The SDK provides a series of APIs to handle these types of requests. To support these requests, the following must be done:
- Intercept USER_ACTION messages and determine the action requested.
- Update the MediaInformation UserActionState to update the UI.
The below snippet intercepts the USER_ACTION message and handles calling the backend with the requested change. It then makes a call to update the UserActionState on the receiver.
playerManager.setMessageInterceptor(cast.framework.messages.MessageType.USER_ACTION,
(userActionRequestData) => {
// Obtain the media information of the current content to associate the action to.
let mediaInfo = playerManager.getMediaInformation();
// If there is no media info return an error and ignore the request.
if (!mediaInfo) {
console.error('Not playing media, user action is not supported');
return new cast.framework.messages.ErrorData(messages.ErrorType.BAD_REQUEST);
}
// Reach out to backend services to store user action modifications. See sample below.
return sendUserAction(userActionRequestData, mediaInfo)
// Upon response from the backend, update the client's UserActionState.
.then(backendResponse => updateUserActionStates(backendResponse))
// If any errors occurred in the backend return them to the cast receiver.
.catch((error) => {
console.error(error);
return error;
});
});
The below snippet simulates a call to a backend service. The function checks the UserActionRequestData to see the type of change that the user requested and only makes a network call if the action is supported by the backend.
function sendUserAction(userActionRequestData, mediaInfo) {
return new Promise((resolve, reject) => {
switch (userActionRequestData.userAction) {
// Handle user action changes supported by the backend.
case cast.framework.messages.UserAction.LIKE:
case cast.framework.messages.UserAction.DISLIKE:
case cast.framework.messages.UserAction.FOLLOW:
case cast.framework.messages.UserAction.UNFOLLOW:
case cast.framework.messages.UserAction.FLAG:
case cast.framework.messages.UserAction.SKIP_AD:
let backendResponse = {userActionRequestData: userActionRequestData, mediaInfo: mediaInfo};
setTimeout(() => {resolve(backendResponse)}, 1000);
break;
// Reject all other user action changes.
default:
reject(
new cast.framework.messages.ErrorData(cast.framework.messages.ErrorType.INVALID_REQUEST));
}
});
}
The snippet below takes the UserActionRequestData and either adds or removes the UserActionState from the MediaInformation. Updating the UserActionState of the MediaInformation changes the state of the button that is associated with the requested action. This change is reflected in the smart display controls UI, remote control app, and Android TV UI. It is also broadcast through outgoing MediaStatus messages to update the UI of the expanded controller for iOS and Android senders.
function updateUserActionStates(backendResponse) {
// Unwrap the backend response.
let mediaInfo = backendResponse.mediaInfo;
let userActionRequestData = backendResponse.userActionRequestData;
// If the current item playing has changed, don't update the UserActionState for the current item.
if (playerManager.getMediaInformation().entity !== mediaInfo.entity) {
return;
}
// Check for existing userActionStates in the MediaInformation.
// If none, initialize a new array to populate states with.
let userActionStates = mediaInfo.userActionStates || [];
// Locate the index of the UserActionState that will be updated in the userActionStates array.
let index = userActionStates.findIndex((currUserActionState) => {
return currUserActionState.userAction == userActionRequestData.userAction;
});
if (userActionRequestData.clear) {
// Remove the user action state from the array if cleared.
if (index >= 0) {
userActionStates.splice(index, 1);
}
else {
console.warn("Could not find UserActionState to remove in MediaInformation");
}
} else {
// Add the UserActionState to the array if enabled.
userActionStates.push(
new cast.framework.messages.UserActionState(userActionRequestData.userAction));
}
// Update the UserActionState array and set the new MediaInformation
mediaInfo.userActionStates = userActionStates;
playerManager.setMediaInformation(mediaInfo, true);
return;
}
Voice commands
The following media commands are currently supported in the Web Receiver SDK for Assistant-enabled devices. The default implementations of these commands are found in cast.framework.PlayerManager.
Command | Description |
---|---|
Play | Play or resume playback from paused state. |
Pause | Pause currently playing content. |
Previous | Skip to the previous media item in your media queue. |
Next | Skip to the next media item in your media queue. |
Stop | Stop the currently playing media. |
Repeat None | Disable repeating of media items in the queue once the last item in the queue is done playing. |
Repeat Single | Repeat the currently playing media indefinitely. |
Repeat All | Repeat all items in the queue once the last item in the queue is played. |
Repeat All and Shuffle | Once the last item in the queue is done playing, shuffle the queue and repeat all items in the queue. |
Shuffle | Shuffle media items in your media queue. |
Closed Captions ON / OFF | Enable / Disable Closed Captioning for your media. Enable / Disable is also available by language. |
Seek to absolute time | Jumps to the specified absolute time. |
Seek to time relative to current time | Jumps forward or backward by the specified time period relative to the current playback time. |
Play Again | Restart the currently playing media or play the last played media item if nothing is currently playing. |
Set playback rate | Vary media playback rate. This should be handled by default. You can use the SET_PLAYBACK_RATE message interceptor to override incoming rate requests. |
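As noted in the last row above, incoming rate requests can be overridden with the SET_PLAYBACK_RATE message interceptor. A brief sketch, assuming the request data carries a playbackRate field and clamping to an arbitrary 0.5–2.0 range:
playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.SET_PLAYBACK_RATE,
    requestData => {
      // Clamp explicitly requested rates to a range this app chooses to support.
      if (requestData.playbackRate !== undefined) {
        requestData.playbackRate =
            Math.min(Math.max(requestData.playbackRate, 0.5), 2.0);
      }
      return requestData;
    });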
Supported media commands with voice
To prevent a voice command from triggering a media command on an Assistant-enabled device, you must first set the supported media commands that you plan on supporting. Then you must enforce those commands by enabling the CastReceiverOptions.enforceSupportedCommands property. The UI on Cast SDK senders and touch-enabled devices will change to reflect these configurations. If the flag is not enabled, the incoming voice commands will execute.
For example, if you allow PAUSE from your sender applications and touch-enabled devices, you must also configure your receiver to reflect those settings. When configured, any incoming voice commands will be dropped if not included in the list of supported commands.
In the example below we are supplying the CastReceiverOptions when starting the CastReceiverContext. We've added support for the PAUSE command and enforced the player to support only that command. Now if a voice command requests another operation such as SEEK, it will be denied. The user will be notified that the command is not supported yet.
const context = cast.framework.CastReceiverContext.getInstance();
context.start({
enforceSupportedCommands: true,
supportedCommands: cast.framework.messages.Command.PAUSE
});
To further customize the behavior you can apply separate logic for each command that you want to restrict. Remove the enforceSupportedCommands flag, and for each command that you want to restrict you can intercept the incoming message. Here we intercept the request provided by the SDK so that SEEK commands issued to Assistant-enabled devices do not trigger a seek in your Web Receiver application.
For media commands your application does not support, return an appropriate error reason, such as NOT_SUPPORTED.
playerManager.setMessageInterceptor(cast.framework.messages.MessageType.SEEK,
seekData => {
// Block seeking if the SEEK supported media command is disabled
if (!(playerManager.getSupportedMediaCommands() & cast.framework.messages.Command.SEEK)) {
let e = new cast.framework.messages.ErrorData(cast.framework.messages.ErrorType
.INVALID_REQUEST);
e.reason = cast.framework.messages.ErrorReason.NOT_SUPPORTED;
return e;
}
return seekData;
});
Ducking from voice activity
If the Cast platform ducks your application's sound due to Assistant activity such as listening to user speech or talking back, a FocusState message of NOT_IN_FOCUS is sent to the Web Receiver application when the activity starts. Another message with IN_FOCUS is sent when the activity ends. Depending on your application and the media being played, you might want to pause media when the FocusState is NOT_IN_FOCUS by intercepting the message type FOCUS_STATE.
For example, it's a good user experience to pause audiobook playback if the Assistant is responding to a user query.
playerManager.setMessageInterceptor(cast.framework.messages.MessageType.FOCUS_STATE,
focusStateRequestData => {
// Pause content when the app is out of focus. Resume when focus is restored.
if (focusStateRequestData.state == cast.framework.messages.FocusState.NOT_IN_FOCUS) {
playerManager.pause();
} else {
playerManager.play();
}
return focusStateRequestData;
});
Voice-specified caption language
When a user does not explicitly state the language for the captions, the language used for captions is the same language in which the command was spoken. In these scenarios, the isSuggestedLanguage parameter of the incoming message indicates whether the associated language was suggested or explicitly requested by the user.
For example, isSuggestedLanguage is set to true for the command "OK Google, turn captions on," because the language was inferred from the language the command was spoken in. If the language is explicitly requested, such as in "OK Google, turn on English captions," isSuggestedLanguage is set to false.
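A brief sketch of how a receiver might inspect this parameter, assuming it arrives on the EDIT_TRACKS_INFO request data:
playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.EDIT_TRACKS_INFO,
    editTracksInfoRequestData => {
      if (editTracksInfoRequestData.isSuggestedLanguage) {
        // The language was inferred from the spoken command; the app could
        // fall back to a saved caption preference here instead.
        console.log('Suggested caption language:',
            editTracksInfoRequestData.language);
      }
      return editTracksInfoRequestData;
    });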
Metadata and voice casting
While voice commands are handled by the Web Receiver by default, you should ensure the metadata for your content is complete and accurate. This ensures that voice commands are handled properly by the Assistant and that the metadata surfaces properly across new types of interfaces such as the Google Home app and smart displays like the Google Home Hub.
Preserving session state
The Web Receiver SDK provides a default implementation for Web Receiver apps to preserve session states by taking a snapshot of current media status, converting the status into a load request, and resuming the session with the load request.
The load request generated by the Web Receiver can be overridden in the SESSION_STATE message interceptor if necessary. If you want to add custom data into the load request, we suggest putting it in loadRequestData.customData.
playerManager.setMessageInterceptor(
cast.framework.messages.MessageType.SESSION_STATE,
function (sessionState) {
// Override sessionState.loadRequestData if needed.
const newCredentials = updateCredentials_(sessionState.loadRequestData.credentials);
sessionState.loadRequestData.credentials = newCredentials;
// Add custom data if needed.
sessionState.loadRequestData.customData = {
'membership': 'PREMIUM'
};
return sessionState;
});
The custom data can be retrieved from loadRequestData.customData in the RESUME_SESSION message interceptor.
let cred_ = null;
let membership_ = null;
playerManager.setMessageInterceptor(
cast.framework.messages.MessageType.RESUME_SESSION,
function (resumeSessionRequest) {
let sessionState = resumeSessionRequest.sessionState;
// Modify sessionState.loadRequestData if needed.
cred_ = sessionState.loadRequestData.credentials;
// Retrieve custom data.
membership_ = sessionState.loadRequestData.customData.membership;
return resumeSessionRequest;
});
Stream transfer
Preserving session state is the basis of stream transfer, where users can move existing audio and video streams across devices using voice commands, the Google Home app, or smart displays. Media stops playing on one device (the source) and continues on another (the destination). Any Cast device with the latest firmware can serve as a source or destination in a stream transfer.
The event flow for stream transfer is:
- On the source device:
- Media stops playing.
- The Web Receiver application receives a command to save the current media state.
- The Web Receiver application is shut down.
- On the destination device:
- The Web Receiver application is loaded.
- The Web Receiver application receives a command to restore the saved media state.
- Media resumes playing.
Elements of media state include:
- Specific position or timestamp of the song, video, or media item.
- Its place in a broader queue (such as a playlist or artist radio).
- The authenticated user.
- Playback state (for example, playing or paused).
Enabling stream transfer
To implement stream transfer for your Web Receiver:
- Update supportedMediaCommands with the STREAM_TRANSFER command:
playerManager.addSupportedMediaCommands(
    cast.framework.messages.Command.STREAM_TRANSFER, true);
- Optionally override the SESSION_STATE and RESUME_SESSION message interceptors as described in Preserving session state. Only override these if custom data needs to be stored as part of the session snapshot. Otherwise, the default implementation for preserving session states will support stream transfer.
Custom UI data binding
If you want to use your own custom UI element instead of cast-media-player, you can; instead of adding the cast-media-player element to your HTML, use the PlayerDataBinder class to bind the UI to the player state. The binder also supports sending events for data changes, if the app does not support data binding.
const context = cast.framework.CastReceiverContext.getInstance();
const player = context.getPlayerManager();
const playerData = {};
const playerDataBinder = new cast.framework.ui.PlayerDataBinder(playerData);
// Update ui according to player state
playerDataBinder.addEventListener(
cast.framework.ui.PlayerDataEventType.STATE_CHANGED,
e => {
switch (e.value) {
case cast.framework.ui.State.LAUNCHING:
case cast.framework.ui.State.IDLE:
// Write your own event handling code
break;
case cast.framework.ui.State.LOADING:
// Write your own event handling code
break;
case cast.framework.ui.State.BUFFERING:
// Write your own event handling code
break;
case cast.framework.ui.State.PAUSED:
// Write your own event handling code
break;
case cast.framework.ui.State.PLAYING:
// Write your own event handling code
break;
}
});
context.start();
You should add at least one MediaElement to the HTML so that the Web Receiver can use it. If multiple MediaElement objects are available, you should tag the MediaElement that you want the Web Receiver to use. You do this by adding castMediaElement in the video's class list, as shown below; otherwise, the Web Receiver will choose the first MediaElement.
<video class="castMediaElement"></video>
Content preload
The Web Receiver supports preloading of media items after the current playback item in the queue. The preload operation pre-downloads several segments of the upcoming items. The preload time is specified in the preloadTime value of the QueueItem object (defaults to 20 seconds if not provided). The time is expressed in seconds, relative to the end of the currently playing item. Only positive values are valid. For example, if the value is 10 seconds, the item will be preloaded 10 seconds before the previous item has finished. If the preload time is higher than the time left on the currentItem, the preload will happen as soon as possible. So if a very large preload value is specified on the queueItem, whenever the current item is playing the next item is already being preloaded. However, we leave the setting and choice of this value to the developer, as it can affect the bandwidth and streaming performance of the currently playing item.
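For instance, a LOAD interceptor could set preloadTime on each queue item; a minimal sketch, where the 30-second value is an arbitrary choice:
playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD, loadRequestData => {
      const items = loadRequestData.queueData && loadRequestData.queueData.items;
      if (items) {
        // Start preloading each upcoming item 30 seconds before the end of
        // the item that precedes it.
        items.forEach(item => { item.preloadTime = 30; });
      }
      return loadRequestData;
    });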
Note that preloading works for HLS and Smooth Streaming content by default. For DASH content, preloading works if useLegacyDashSupport is specified in CastReceiverOptions, since the Media Player Library (MPL) supports preload while Shaka does not yet. Regular MP4 video and audio files such as MP3 will not be preloaded, as Cast devices support only one media element, which cannot be used to preload while an existing content item is still playing.
Custom messages
Message exchange is the key interaction method for Web Receiver applications. A sender issues messages to a Web Receiver using the sender APIs for the platform the sender is running on (Android, iOS, Chrome). The event object (which is the manifestation of a message) that is passed to the event listeners has a data element (event.data) where the data takes on the properties of the specific event type.
A Web Receiver application may choose to listen for messages on a specified namespace. By virtue of doing so, the Web Receiver application is said to support that namespace protocol. It is then up to any connected senders wishing to communicate on that namespace to use the appropriate protocol.
All namespaces are defined by a string and must begin with "urn:x-cast:" followed by any string. For example, "urn:x-cast:com.example.cast.mynamespace".
Here is a code snippet for the Web Receiver to listen to custom messages from connected senders:
const context = cast.framework.CastReceiverContext.getInstance();
const CUSTOM_CHANNEL = 'urn:x-cast:com.example.cast.mynamespace';
context.addCustomMessageListener(CUSTOM_CHANNEL, function(customEvent) {
// handle customEvent.
});
context.start();
Similarly, Web Receiver applications can keep senders informed about the state of the Web Receiver by sending messages to connected senders. A Web Receiver application can send messages using sendCustomMessage(namespace, senderId, message) on CastReceiverContext.
A Web Receiver can send messages to an individual sender, either in response to a received message or due to an application state change. Beyond point-to-point messaging (with a limit of 64kb), a Web Receiver may also broadcast messages to all connected senders.
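As a brief sketch, reusing the context and CUSTOM_CHANNEL from the listener above; the payloads are illustrative, and omitting the sender ID to broadcast to all senders is an assumption:
context.addCustomMessageListener(CUSTOM_CHANNEL, function(customEvent) {
  // Reply to the sender that sent the message.
  context.sendCustomMessage(CUSTOM_CHANNEL, customEvent.senderId,
      {type: 'ACK', received: true});
});

// Broadcast an application state change to all connected senders.
context.sendCustomMessage(CUSTOM_CHANNEL, undefined,
    {type: 'APP_STATE', state: 'READY'});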
Cast for audio devices
See the Google Cast for audio devices guide for support on audio-only playback.
Android TV
This section discusses how the Google Web Receiver translates your control inputs into playback on Android TV, and guidelines for Android TV compatibility.
Integrating your application with the remote control
The Google Web Receiver running on the Android TV device translates input from the device's control inputs (i.e. hand-held remote control) into media playback messages defined for the urn:x-cast:com.google.cast.media namespace, as described in Media Playback Messages. Your application must support these messages to control the application media playback in order to allow basic playback control from Android TV's control inputs.
Guidelines for Android TV compatibility
Making your Cast application compatible with Android TV requires very little additional work. Here are some recommendations and common pitfalls to avoid in order to ensure your application is compatible with Android TV:
- Be aware that the user-agent string contains both "Android" and "CrKey"; some sites may redirect to a mobile-only site because they detect the "Android" label. Don't assume that "Android" in the user-agent string always indicates a mobile user.
- Android's media stack may use transparent GZIP for fetching data. Make sure your media data can respond to Accept-Encoding: gzip.
- Android TV HTML5 media events may be triggered with different timing than on Chromecast; this may reveal issues that were hidden on Chromecast.
- When updating the media, use media-related events fired by <audio>/<video> elements, like timeupdate, pause and waiting. Avoid using networking-related events like progress, suspend and stalled, as these tend to be platform dependent. See Media events for more information about handling media events in your receiver.
- When configuring your receiver site's HTTPS certificates, be sure to include intermediate CA certificates. See the Qualys SSL test page to verify: if the trusted certification path for your site includes a CA certificate labelled "extra download", then it may not load on Android-based platforms.
- While Chromecast displays the receiver page on a 720p graphics plane, other Cast platforms including Android TV may display the page up to 1080p. Ensure your receiver page scales gracefully at different resolutions.