# Build conversation models

A conversation model defines what users can say to your Actions and how your
Actions respond to users. The main building blocks of your conversation model
are [intents](../intents), [types](../types), [scenes](../scenes), and
[prompts](../prompts). After one of your Actions is invoked, Google Assistant
hands the user off to that Action, and the Action begins a conversation with
the user based on your conversation model, which consists of:

- **Valid user requests** - To define what users can say to your Actions, you
  create a collection of intents that augment the Assistant NLU so it can
  understand requests that are specific to your Actions. Each intent defines
  training phrases that describe what users can say to match that intent. The
  Assistant NLU expands these training phrases to include similar phrases, and
  the aggregation of those phrases results in the intent's language model.

- **Action logic and responses** - Scenes process intents, carry out the
  required logic, and generate prompts to return to the user.

**Figure 1.** A conversation model consists of intents, types, scenes, and
prompts that define your user experience. Intents that are eligible for
invocation are also valid for matching in your conversations.

Define valid user requests
--------------------------

To define what users can say to your Actions, you use a combination of intents
and types. User intents and types let you augment the Assistant NLU with your
own language models. System intents and types let you take advantage of
built-in language models and event detection, such as a user wanting to quit
your Action or Assistant detecting no input at all.

### Create user intents

User intents let you define your own training phrases that describe what users
might say to your Actions. The Assistant NLU uses these phrases to train itself
to understand what your users say. When users say something that matches a user
intent's language model, Assistant matches the intent and notifies your Action
so that you can carry out logic and respond to users.

### Create system intents

System intents let you take advantage of intents with predefined language
models for common events, such as a user wanting to quit your Action or user
input timing out.

### Create custom types

Custom types let you create your own type specification to train the NLU to
understand a set of values that should map to a single key.
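If you build your conversation model with the Actions SDK rather than the
console, intents live in YAML files. The following sketch shows what a
hypothetical `order_beverage` intent might look like; the file name, parameter
names, and phrases are illustrative, and the annotation syntax follows the
Actions SDK's training-phrase format.

```yaml
# custom/intents/order_beverage.yaml -- hypothetical file and parameter names
# Each ($param 'example' auto=true) span ties part of a phrase to a typed
# parameter, so the NLU learns where the value appears in the utterance.
parameters:
  - name: size
    type:
      name: Size
  - name: flavor
    type:
      name: Flavor
  - name: beverageType
    type:
      name: BeverageType
trainingPhrases:
  - I want to order a ($size 'large' auto=true) ($flavor 'vanilla' auto=true) ($beverageType 'coffee' auto=true)
  - get me a ($size 'small' auto=true) ($beverageType 'latte' auto=true)
  - one ($beverageType 'espresso' auto=true) please
```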
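A custom type referenced by that intent can enumerate its values and synonyms
in the same SDK format. Again, this is a minimal sketch with illustrative
entity keys and synonyms:

```yaml
# custom/types/BeverageType.yaml -- hypothetical file name
# Each entity key is the canonical value your Action receives;
# the synonyms are the surface forms users might actually say.
synonym:
  entities:
    coffee:
      synonyms:
        - coffee
        - drip coffee
    latte:
      synonyms:
        - latte
        - milk coffee
```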
Build Action logic and responses
--------------------------------

The Assistant NLU matches user requests to intents so that your Action can
process them in scenes. Scenes are powerful logic executors that let you
process events during a conversation.

### Create a scene

The following sections describe how to create scenes and define functionality
for each stage of a scene's lifecycle.

### Define one-time setup

When a scene first becomes active, you can carry out one-time tasks in the
**On enter** stage. The On enter stage executes only once, and it is the only
stage that doesn't run inside a scene's execution loop.

### Check conditions

Conditions let you check slot filling, session storage, user storage, and
home storage parameters to control a scene's execution flow.

### Define slot filling

Slots let you extract typed parameters from user input.

#### Slot value mapping

In many cases, a previous intent match includes parameters that partially or
entirely fill a corresponding scene's slot values. In these cases, all slots
filled by intent parameters map to the scene's slot filling if the slot name
matches the intent parameter name.

For example, if a user matches an intent to order a beverage by saying *"I want
to order a large vanilla coffee"*, existing slots for size, flavor, and
beverage type are considered filled in the corresponding scene if that scene
defines the same slots.

> **Note:** Intent data is stored for one conversation turn and is overwritten
> by the next user input. If you need this data to persist, store it in
> [session storage](../storage).

### Process input

During this stage, you can have the Assistant NLU match user input to intents.
You can scope intent matching to a specific scene by adding the desired intents
to the scene. This lets you control conversation flow by telling Assistant to
match specific intents when specific scenes are active.
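To tie these stages together, here is a hedged sketch of a scene in the Actions
SDK's YAML format. It declares the beverage-ordering slots from the earlier
example, uses a condition on `scene.slots.status` to transition once slot
filling completes, and scopes an intent match to this scene. The scene name,
transition target, and webhook handler name are illustrative, and the field
nesting reflects the SDK schema as best we can reconstruct it here.

```yaml
# custom/scenes/OrderBeverage.yaml -- hypothetical scene definition
slots:
  # Slots filled by a matched intent's parameters carry over automatically
  # when the slot name matches the intent parameter name.
  - name: size
    type:
      name: Size
    required: true
  - name: beverageType
    type:
      name: BeverageType
    required: true
conditionalEvents:
  # "FINAL" means every required slot has a value. Conditions can also
  # reference session.params, user.params, and home.params.
  - condition: scene.slots.status == "FINAL"
    transitionToScene: ConfirmOrder
intentEvents:
  # This intent is only eligible for matching while this scene is active.
  - intent: order_beverage
    handler:
      webhookHandler: handleOrder
```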