WAN2.2 A14B Animate Model

Transform source footage with text + image guidance using the WAN2.2 A14B Animate generation model.

Prompt

Describe the style, motion, or story you want to apply to your source video.

The prompt guides how the source video will be re-imagined.

Media Inputs

Pair an optional reference image with your base video for richer guidance.

Reference Image

Upload artwork or a still to influence style and lighting.

Images up to 10 MB. JPEG, PNG, and WEBP supported.

Source Video

Provide footage for TIV2V to transform. Short clips finish faster.

Videos up to 50 MB. MP4, MOV, and WEBM supported.
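
Validating files locally before you upload avoids wasted jobs. The sketch below checks size and extension against the limits listed above for both the reference image and the source video; it is a convenience check on the client side, not something the API performs, and the helper name check_upload is illustrative.

import os

# Limits and formats as listed above; taken from this page, not queried from the API.
IMAGE_LIMIT = 10 * 1024 * 1024   # 10 MB
VIDEO_LIMIT = 50 * 1024 * 1024   # 50 MB
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}
VIDEO_EXTS = {".mp4", ".mov", ".webm"}

def check_upload(path, limit, allowed_exts):
    """Raise ValueError if the file is too large or has an unexpected extension."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in allowed_exts:
        raise ValueError(f"{path}: extension {ext} not in {sorted(allowed_exts)}")
    if os.path.getsize(path) > limit:
        raise ValueError(f"{path}: exceeds the {limit // (1024 * 1024)} MB limit")

check_upload("style.jpg", IMAGE_LIMIT, IMAGE_EXTS)
check_upload("source.mp4", VIDEO_LIMIT, VIDEO_EXTS)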

🔌 API Access

Kick off TIV2V jobs directly from your pipeline.

🔑 API Keys

To use the API, you need an API key. Manage your API keys →
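
The examples below show the key inline as YOUR_API_KEY for brevity. In practice you may prefer to keep the key out of source code; a minimal sketch, assuming it is stored in an environment variable named EIGENAI_API_KEY (the variable name is our choice, not something the API mandates):

import os

# Read the key from the environment instead of hard-coding it.
# EIGENAI_API_KEY is an arbitrary name chosen for this sketch.
API_KEY = os.environ["EIGENAI_API_KEY"]

# Every request authenticates with this Bearer header, as in the examples below.
HEADERS = {"Authorization": f"Bearer {API_KEY}"}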

POST /api/v1/generate

Launch TIV2V Transformations

Submit a prompt, base video, and optional reference image to create stylised motion.

Cost: 0.06 credits per video second

Pass a duration_seconds value that matches your source clip so you are billed the correct amount.
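
As a worked example, at this rate the 5.00-second clip used in the requests below costs 0.06 × 5.00 = 0.30 credits:

# Credit cost scales linearly with clip length.
CREDITS_PER_SECOND = 0.06

duration_seconds = 5.00   # should match the duration_seconds sent in the request
cost = CREDITS_PER_SECOND * duration_seconds
print(f"Estimated cost: {cost:.2f} credits")   # 0.30 credits for a 5-second clip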

Request (cURL)

# Requires jq for JSON parsing
JOB_DATA=$(
  curl -s -X POST https://app.eigenai.com/api/v1/generate \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -F "model=wan2_2_a14b_animate" \
    -F "prompt=A cyberpunk dancer twirling through neon-lit rain" \
    -F "nsteps=20" \
    -F "duration_seconds=5.00" \
    -F "video=@/path/to/source.mp4" \
    # -F "image=@/path/to/style.jpg"
)
JOB_ID=$(echo "$JOB_DATA" | jq -r '.job_id // .jobId')
echo "Job ID: $JOB_ID"
while true; do
  STATUS=$(curl -s -H "Authorization: Bearer YOUR_API_KEY" "https://app.eigenai.com/api/tiv2v-status?jobId=$JOB_ID")
  PHASE=$(echo "$STATUS" | jq -r '.status')
  echo "$STATUS" | jq .
  if [ "$PHASE" = "done" ] || [ "$PHASE" = "error" ]; then
    break
  fi
  sleep 5
done

Requires jq for JSON parsing.

Request (Python)

import json
import time
import requests

API_KEY = "amsk_live_..."
BASE_URL = "https://app.eigenai.com"

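# Submit the generation request with the source video and style reference attached.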
with open("source.mp4", "rb") as video_file, open("style.jpg", "rb") as image_file:
    start = requests.post(
        f"{BASE_URL}/api/v1/generate",
        headers={"Authorization": f"Bearer {API_KEY}"},
        data={"model": "wan2_2_a14b_animate", "prompt": "A cyberpunk dancer", "nsteps": "20", "duration_seconds": "5.00"},
        files={"video": video_file, "image": image_file}
    )

payload = start.json()
job_id = payload.get("job_id") or payload.get("jobId")
print("job_id:", job_id)

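# Poll the status endpoint every 5 seconds until the job reports done or error.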
while True:
    status = requests.get(
        f"{BASE_URL}/api/tiv2v-status",
        params={"jobId": job_id},
        headers={"Authorization": f"Bearer {API_KEY}"}
    ).json()
    print(json.dumps(status, indent=2))

    if status.get("status") in ("done", "error"):
        break

    time.sleep(5)
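
The loop above polls indefinitely. If you want an upper bound on how long a stuck job is polled, a variant with a timeout is sketched below; it reuses the same /api/tiv2v-status endpoint and the done/error status values shown above, while the 10-minute limit and the helper name wait_for_job are illustrative choices, not part of the API.

import time

import requests

API_KEY = "amsk_live_..."
BASE_URL = "https://app.eigenai.com"

def wait_for_job(job_id, timeout_seconds=600, poll_interval=5):
    """Poll the status endpoint until the job finishes, fails, or the timeout expires."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = requests.get(
            f"{BASE_URL}/api/tiv2v-status",
            params={"jobId": job_id},
            headers={"Authorization": f"Bearer {API_KEY}"},
        ).json()
        if status.get("status") in ("done", "error"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout_seconds} seconds")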

Request (JavaScript/Node.js)

const FormData = require('form-data');
const fs = require('fs');
const axios = require('axios');

// Build the multipart form with the same fields as the cURL and Python examples.
const form = new FormData();
form.append('model', 'wan2_2_a14b_animate');
form.append('prompt', 'A cyberpunk dancer twirling through neon-lit rain');
form.append('nsteps', '20');
form.append('duration_seconds', '5.00');
form.append('video', fs.createReadStream('./source.mp4'));
form.append('image', fs.createReadStream('./style.jpg'));

axios.post('https://app.eigenai.com/api/v1/generate', form, {
    headers: {
        ...form.getHeaders(),
        Authorization: 'Bearer YOUR_API_KEY'
    }
}).then(response => {
    // The response includes the job ID used to poll /api/tiv2v-status.
    console.log(response.data);
}).catch(error => {
    console.error(error.response ? error.response.data : error.message);
});
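
This example only submits the job. To track progress, poll /api/tiv2v-status with the returned job ID in the same way as the cURL and Python examples above.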

Parameters

Parameter          Type                   Required    Description
model              string                 ✅ Yes      Must be wan2_2_a14b_animate
prompt             string                 ✅ Yes      Creative description of the transformation
video              file / url / base64    ✅ Yes      Source clip to stylise. Upload a file or provide an accessible URL/data URI.
image              file / url / base64    Optional    Style reference to influence the look and feel.
nsteps             number                 Optional    Diffusion steps. Higher values increase quality.
duration_seconds   number                 Optional    Length of the source clip in seconds; used to calculate the credit charge.

Output

Upload your media, add a prompt, and run TIV2V.

Cost: 0.06 credits per video second.