In this exercise, you’ll use AI models to generate images and video — right from OpenCode. You’ll work with Replicate, a platform that hosts thousands of open-source AI models and lets you run them through an API.
You’ll generate images with Nano Banana 2 (Google’s image generation model built on Gemini Flash), then turn one of those images into a video with Veo 3.1 Fast (Google’s video generation model).
Prerequisites
This exercise requires the Replicate skill for OpenCode. If you completed the Skills lesson, you already have it installed. If not, install it now:
npx skills add https://github.com/replicate/replicate-skill
You’ll also need a Replicate account and API token:
- Sign up at replicate.com (requires a GitHub account)
- Generate an API token at replicate.com/account/api-tokens
- Add the token to your shell profile (~/.zshrc or ~/.bashrc):
export REPLICATE_API_TOKEN=r8_your_token_here
Restart OpenCode after adding the token so it picks up the new environment variable.
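If you want to confirm the token works before continuing, a short standard-library Python script can check it against the API. This is an optional sketch; it assumes Replicate's GET /v1/account endpoint, which returns the authenticated account when the token is valid.

```python
import json
import os
import urllib.request

API_BASE = "https://api.replicate.com/v1"

def auth_headers(token=None):
    """Build the Authorization header the Replicate API expects."""
    token = token or os.environ.get("REPLICATE_API_TOKEN", "")
    if not token:
        raise RuntimeError("REPLICATE_API_TOKEN is not set -- restart your shell?")
    return {"Authorization": f"Bearer {token}"}

if __name__ == "__main__":
    # GET /account should succeed (HTTP 200) if the token is valid.
    req = urllib.request.Request(f"{API_BASE}/account", headers=auth_headers())
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp).get("username"))
```

If the token is missing or mistyped you'll get an error here rather than partway through an exercise.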
What you’ll do
Generate images — Describe what you want and Nano Banana 2 creates it. It handles photorealistic scenes, illustrations, product mockups, and more. It can also edit existing images: pass in a photo along with a text prompt to change backgrounds, swap styles, or combine multiple images.
Generate video — Take one of your generated images and bring it to life with Veo 3.1 Fast. Describe the motion or scene you want, and the model produces a short video clip.
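Under the hood, both steps are plain HTTP calls: create a prediction, poll until it settles, then feed the image output into the video model. The sketch below uses only the standard library; the model slugs ("google/nano-banana-2", "google/veo-3.1-fast") and the input parameter names ("prompt", "image") are assumptions — check each model's page on replicate.com for the exact names. In practice the Replicate skill drives these calls for you.

```python
import json
import os
import time
import urllib.request

API_BASE = "https://api.replicate.com/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ.get('REPLICATE_API_TOKEN', '')}",
    "Content-Type": "application/json",
}

def create_prediction(model, model_input):
    """POST to the model's predictions endpoint; returns the prediction object."""
    req = urllib.request.Request(
        f"{API_BASE}/models/{model}/predictions",
        data=json.dumps({"input": model_input}).encode(),
        headers=HEADERS,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for(prediction, poll_seconds=2):
    """Poll GET /predictions/{id} until the prediction settles."""
    while prediction["status"] not in ("succeeded", "failed", "canceled"):
        time.sleep(poll_seconds)
        req = urllib.request.Request(
            f"{API_BASE}/predictions/{prediction['id']}", headers=HEADERS
        )
        with urllib.request.urlopen(req) as resp:
            prediction = json.load(resp)
    return prediction

if __name__ == "__main__":
    # Model slugs and input names are assumptions -- verify on replicate.com.
    image = wait_for(create_prediction(
        "google/nano-banana-2",
        {"prompt": "a lighthouse at dusk, photorealistic"},
    ))
    video = wait_for(create_prediction(
        "google/veo-3.1-fast",
        {"image": image["output"], "prompt": "waves crash as the beacon sweeps"},
    ))
    print(video["output"])
```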
The Replicate skill gives OpenCode everything it needs to discover models, check their input schemas, create predictions, and retrieve results — all through the API. No CLI tools or SDKs needed.
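To make "check their input schemas" concrete: each Replicate model's latest version carries an OpenAPI schema describing the inputs it accepts. Here is a minimal sketch of reading it with the standard library, assuming the usual shape of the model object (latest_version → openapi_schema → components → schemas → Input) and a hypothetical model slug.

```python
import json
import os
import urllib.request

API_BASE = "https://api.replicate.com/v1"

def extract_input_properties(model_obj):
    """Pull the input properties out of a model object's OpenAPI schema."""
    schema = model_obj["latest_version"]["openapi_schema"]
    return schema["components"]["schemas"]["Input"]["properties"]

def fetch_model(model, token=None):
    """GET /models/{owner}/{name} and return the parsed model object."""
    token = token or os.environ.get("REPLICATE_API_TOKEN", "")
    req = urllib.request.Request(
        f"{API_BASE}/models/{model}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Slug is an assumption -- substitute the model you actually want to inspect.
    for name, spec in extract_input_properties(fetch_model("google/nano-banana-2")).items():
        print(name, "-", spec.get("description", ""))
```

This is exactly the kind of lookup the skill performs before building a prediction request, so malformed inputs fail before any compute is spent.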