What is video-generate?
Generate videos using Seedance models. Invoke when user wants to create videos from text prompts, images, or reference materials. Source: bytedance/agentkit-samples.
Quickly install the video-generate AI skill in your development environment via the command line.
Before using this skill, ensure the following environment variables are set:
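The variable names below are illustrative, not taken from the skill's documentation; check the skill's README for the exact names it expects. A minimal sketch of exporting and verifying such variables:

```shell
# Hypothetical variable names -- confirm the real ones in the skill's README.
export ARK_API_KEY="your-api-key"   # credentials for the Seedance model endpoint

# Verify the variable is set and visible to child processes before running the skill.
[ -n "$ARK_API_KEY" ] && echo "ARK_API_KEY is set"
```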
A list of video generation requests. Each item is a dict with the following fields:
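The field list itself is not reproduced here; as a sketch, a request list might look like the following, where every field name (`prompt`, `image_url`, `duration`, `ratio`) is hypothetical and should be replaced with the skill's actual schema:

```python
# Hypothetical request shape -- field names are illustrative only;
# consult the skill documentation for the real schema.
requests = [
    {
        "prompt": "A drone shot of a coastline at sunrise",  # text prompt
        "image_url": None,   # optional reference image for image-to-video
        "duration": 5,       # clip length in seconds
        "ratio": "16:9",     # output aspect ratio
    },
]

# Basic validation before submitting the batch.
for req in requests:
    assert req["prompt"], "each request needs a non-empty prompt"
```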
Based on the script's return info, the final response to the user consists of a description of the video generation task and the video URL(s). You may download the video from the URL, but the URL itself should still be given to the user for viewing and downloading.
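Downloading from the returned URL while preserving it for the user can be sketched as below; `download_video` and `filename_from_url` are hypothetical helpers, not part of the skill:

```python
import os
import urllib.parse
import urllib.request

def filename_from_url(url: str, default: str = "video.mp4") -> str:
    """Derive a local filename from the URL's path component."""
    name = os.path.basename(urllib.parse.urlparse(url).path)
    return name or default

def download_video(url: str, out_dir: str = ".") -> str:
    """Save the generated video locally; the URL is still shown to the user."""
    dest = os.path.join(out_dir, filename_from_url(url))
    urllib.request.urlretrieve(url, dest)  # network call
    return dest

print(filename_from_url("https://example.com/videos/clip_001.mp4"))  # clip_001.mp4
```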
npx skills add https://github.com/bytedance/agentkit-samples --skill video-generate
Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.). Copy and run this command: npx skills add https://github.com/bytedance/agentkit-samples --skill video-generate. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.
https://github.com/bytedance/agentkit-samples