Getting started with the Gemini API and Dart and Flutter

Learn how to use the Gemini API and the Google AI Dart SDK to prototype generative AI in Dart and Flutter applications.

 

Overview: Getting started with the Gemini API and Flutter

Use the Google AI Dart SDK to make your first generative AI call using the Gemini API, build an app using Dart and Flutter, and explore cross-platform sample applications.

Note that calling the Gemini API directly from your mobile or web app using the Google AI Dart SDK is only for prototyping and exploring the Gemini generative AI model. You risk exposing the API key to malicious actors if it's embedded or retrieved by your client application. So, for use cases beyond prototyping (especially production and enterprise-scale apps), migrate to Vertex AI in Firebase to call the Gemini API directly from your client app. Alternatively, you can use the Google AI Dart SDK to access the Gemini models server-side.

Introduction to the Gemini API and prompt engineering

Pathway

Explore Google AI Studio and the capabilities of the Gemini generative AI model. Learn how to design and test the different types of prompts (freeform, structured, and chat) and get an API key for the Gemini API.

This pathway can be useful for further experimentation with the Gemini API and lays the groundwork for integrating its features into your application. Optionally, you can also try out the API using a simple Node.js web application. If you don't already have Node.js and npm on your machine, feel free to skip this step and return to Dart and Flutter later in this pathway.

Note that calling the Gemini API directly from your mobile or web app using the Google AI Dart SDK is only for prototyping and exploring the Gemini generative AI models. For use cases beyond prototyping (especially production or enterprise-scale apps), use Vertex AI in Firebase instead. It offers an SDK for Flutter that has additional security features, support for large media file uploads, and streamlined integrations into the Firebase and Google Cloud ecosystem. Alternatively, you can use the Google AI Dart SDK to access the Gemini models server-side.

Run the Google AI SDK sample on DartPad

Code sample

Try out a Flutter demo of the Google AI Dart SDK on DartPad.

This interactive demo shows how to build a chat app in Flutter that uses the multi-turn conversations functionality from the SDK. Learn how to implement the user interface and manage the state of the conversation.

Enter your Gemini API key when prompted to get started.
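The multi-turn pattern the demo implements can be sketched with the SDK's chat API. This is a minimal sketch, not the demo's actual code; the model name, prompts, and the `API_KEY` compile-time variable are illustrative:

```dart
import 'package:google_generative_ai/google_generative_ai.dart';

Future<void> main() async {
  // Illustrative: pass the key with --dart-define=API_KEY=... at build time.
  const apiKey = String.fromEnvironment('API_KEY');
  final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);

  // startChat() keeps the conversation history so each message has context.
  final chat = model.startChat();

  final first = await chat.sendMessage(
    Content.text('My favorite color is teal. Remember that.'),
  );
  print(first.text);

  // The follow-up can rely on the earlier turn because the SDK resends
  // the accumulated history with each request.
  final second = await chat.sendMessage(
    Content.text('What is my favorite color?'),
  );
  print(second.text);
}
```

In a Flutter app, the chat history lives in the `ChatSession` object, so the UI layer only needs to track which messages to render.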

Gemini API and Flutter: Practical, AI-Driven apps with Google AI tools

Video

Watch this talk from Google I/O 2024 to get an overview about generative AI, Google AI Studio, and prompt design.

Follow along to integrate the Google AI Dart SDK into a Flutter application and build a recipe application that uses the Gemini 1.5 Pro model with multimodal prompts.

Introduction to the Google AI Dart SDK

The Google AI Dart SDK is a Dart-first, cross-platform SDK for building your generative AI integration with the Google AI Gemini API. This SDK also supports Flutter on all platforms.

Calling the Gemini API directly from your mobile or web app with the Google AI Dart SDK is only for prototyping. An API key that is embedded in, or retrieved by, a web or mobile client application can be exposed to malicious actors. So, for use cases beyond prototyping (especially production and enterprise-scale apps), migrate to Vertex AI in Firebase to call the Gemini API directly from your client app. Alternatively, you can access the Gemini models server-side using either the Google AI Dart SDK or Vertex AI.

To get started with the Google AI Dart SDK, set up a project in Google AI Studio, which includes obtaining an API key for the Gemini API. Next, add the required dependencies to your app's pubspec.yaml (google_generative_ai). Then, you can initialize the library with your API key and make your first API call.
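Putting those steps together, a first call can look like the following minimal sketch. It assumes the key is supplied via a `GOOGLE_API_KEY` environment variable (an illustrative choice) and that `google_generative_ai` has been added to pubspec.yaml; the model name and prompt are examples:

```dart
import 'dart:io';

import 'package:google_generative_ai/google_generative_ai.dart';

Future<void> main() async {
  // Read the key from the environment rather than hardcoding it.
  final apiKey = Platform.environment['GOOGLE_API_KEY'];
  if (apiKey == null) {
    stderr.writeln('Set the GOOGLE_API_KEY environment variable.');
    exit(1);
  }

  // Initialize a model; 'gemini-1.5-flash' is one of the available models.
  final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);

  // Send a simple text prompt and print the generated response.
  final response = await model.generateContent([
    Content.text('Write a one-sentence description of Flutter.'),
  ]);
  print(response.text);
}
```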

You can also check out this YouTube Short for a quick overview of the Google AI Dart SDK and how to get started.

Explore the Dart SDK and Flutter sample apps

Code sample

Explore the generative AI example apps for the Google AI Dart SDK for Flutter and Dart.

The Dart code samples demonstrate three key use cases: generating text, photo reasoning (using multimodal inputs), and multi-turn conversations (chat). They also cover advanced topics, such as using content streaming to improve response time by displaying partial results.
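The streaming technique works by iterating over partial responses as they arrive instead of awaiting one complete result. A minimal sketch, with an illustrative model name, prompt, and `API_KEY` compile-time variable:

```dart
import 'package:google_generative_ai/google_generative_ai.dart';

Future<void> main() async {
  // Illustrative: pass the key with --dart-define=API_KEY=... at build time.
  const apiKey = String.fromEnvironment('API_KEY');
  final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);

  // generateContentStream yields partial responses as they arrive, so the
  // UI can start rendering text before the full answer is complete.
  final stream = model.generateContentStream([
    Content.text('Tell me a short story about a hummingbird.'),
  ]);
  await for (final chunk in stream) {
    // Each chunk carries the next fragment of generated text.
    print(chunk.text);
  }
}
```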

The Flutter sample app demonstrates how to implement multi-turn conversations (chat) and photo reasoning (using multimodal inputs) in a multi-platform application.

Follow the steps in the README for each sample to get started, which includes configuring your Gemini API key and providing it as an environment variable.

Multimodal prompting using the Google AI Dart SDK

Multimodal prompts combine different types of media together, such as text, images, and audio. For example, you could create prompts that identify objects in an image, extract text from a photo, or reference a picture.

To get started, read this guide about file prompting strategies and multimodal concepts, which includes best practices for designing multimodal prompts.

Next, explore the multimodal capabilities of the Gemini models in Google AI Studio by uploading or selecting a file as part of your prompt.

Learn how to use multimodal inputs using the Google AI Dart SDK, find image requirements for prompts, and explore the multimodal image chat demo in the Flutter sample app or in the Dart sample scripts.
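In the Dart SDK, a multimodal prompt combines a text part with one or more media parts in a single content item. A minimal sketch, assuming a local JPEG file (the file path, model name, and `API_KEY` variable are illustrative):

```dart
import 'dart:io';

import 'package:google_generative_ai/google_generative_ai.dart';

Future<void> main() async {
  // Illustrative: pass the key with --dart-define=API_KEY=... at build time.
  const apiKey = String.fromEnvironment('API_KEY');
  final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);

  // Read the image bytes; the path here is illustrative.
  final imageBytes = await File('recipe_photo.jpg').readAsBytes();

  // A multimodal prompt mixes a text part with an image part, each with
  // its MIME type declared for the media data.
  final response = await model.generateContent([
    Content.multi([
      TextPart('List the ingredients you can see in this photo.'),
      DataPart('image/jpeg', imageBytes),
    ]),
  ]);
  print(response.text);
}
```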

For further reading, see the solution Leveraging the Gemini Pro Vision model for image understanding, multimodal prompts and accessibility.

Prepare for production by migrating to Vertex AI in Firebase

Using the Google AI Dart SDK to call the Gemini API directly from a web or mobile client is only for prototyping and experimentation. When you start to develop your app seriously beyond prototyping (especially as you prepare for production), transition to Vertex AI in Firebase and its SDK for Flutter.

For calling the Gemini API directly from your web or mobile app, we strongly recommend using the Vertex AI in Firebase client SDK for Flutter. This SDK offers enhanced security features for web and mobile apps, including Firebase App Check to help protect your app from unauthorized client access. When you use this SDK, you can include large media files in your requests by using Cloud Storage for Firebase. Vertex AI in Firebase also integrates with other products in Google's Firebase developer platform (like Cloud Firestore and Firebase Remote Config), while also giving you streamlined access to the tools, workflows, and scale offered through Google Cloud. Among other differences, Vertex AI also supports increased request quotas and enterprise features.

Follow this guide to migrate to the Vertex AI in Firebase client SDK by updating your package dependencies, imports, and changing how the AI model is initialized.