How the Motn canvas works
Motn is not a timeline editor. It's an infinite canvas where every generation lives as a node — connected to its prompt, editable, and visible alongside everything else you've made. Describe an animation. Get real code or AI video in seconds. Iterate in parallel. Export and ship.
The infinite canvas — your creative workspace
Most AI generation tools give you one output per session. You describe something, get a result, and start over. Motn works differently. Every generation appears as a node on a persistent infinite canvas. You can have ten animations running in parallel — different styles, different copy, different aspect ratios — all on the same canvas, all visible at once.
Zoom out and you see your entire creative process. Zoom in on any node to inspect the result, tweak the prompt, or fork a new variation from it. The canvas remembers everything. Your brand context, your reference images, your iterations — they accumulate and compound as you work.
This is what we call vibe motion: animation that moves as fast as your ideas. No timeline. No keyframe editor. Just prompt, generate, iterate — on a canvas that keeps up with you.
The four generation nodes
Every generation starts with one of four node types. Each does something different. Each produces a result node on the canvas that you can connect to other nodes.
Real-time animation — export as code or video
Gen Code Motion is the core of the Motn canvas. It generates actual React components — not video files, not GIFs — that run at 60fps in any browser. The output is responsive, embeddable, and fully portable: drop it directly into a Next.js app or a Framer project, or export it as standalone HTML.
- Describe anything: kinetic text, animated logo, motion background, infographic
- Accepts images, video clips, and text assets as visual context
- Connect a Brand node — colors and fonts are applied automatically
- Connect a Style node — choose from presets like Apple, Rounded, Squared, or Pointillism
- Generate multiple variants at once and compare them side by side on the canvas
- Output can feed into a Reel node for multi-clip assembly
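Browser-native 60fps animation of this kind boils down to a requestAnimationFrame loop driving an eased progress value. Here is a minimal sketch of that pattern; the function names are illustrative, not Motn's actual generated output:

```typescript
// Illustrative sketch of the kind of logic a generated 60fps component
// relies on: an easing function plus time-based progress.
// None of these names come from Motn's output.

// Cubic ease-out: fast start, gentle settle.
function easeOutCubic(t: number): number {
  return 1 - Math.pow(1 - t, 3);
}

// Map elapsed time to an eased 0..1 progress value, clamped at 1.
function progress(elapsedMs: number, durationMs: number): number {
  const t = Math.min(elapsedMs / durationMs, 1);
  return easeOutCubic(t);
}

// In the browser, a component would drive this with requestAnimationFrame:
//   const start = performance.now();
//   function frame(now: number) {
//     element.style.opacity = String(progress(now - start, 600));
//     if (now - start < 600) requestAnimationFrame(frame);
//   }
//   requestAnimationFrame(frame);

console.log(progress(0, 600));   // 0 at the start
console.log(progress(600, 600)); // 1 when the animation completes
```

Because the motion is computed per frame rather than baked into video frames, the same component stays sharp at any size and aspect ratio.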
Text-to-video with Kling, Seedance, and Veo
Gen Video generates AI video clips from a text prompt or image reference. Choose your model — Kling 3 Pro for cinematic quality, Seedance for fast turnaround, Veo 3.1 for Google's latest output. Generate 4–12 second clips in 16:9, 9:16, or 1:1. Clips land as nodes on your canvas, ready to connect to a Reel.
- Prompt-to-video: describe a scene, camera move, or visual concept
- Image-to-video: connect a Gen Image result and animate it
- Multiple models available: Kling 3 Standard, Kling 3 Pro, Seedance 1.5, Veo 3.1
- Output connects to Reel for multi-clip assembly and export
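Conceptually, a Gen Video generation reduces to a model choice, a duration within the 4–12 second range, and an aspect ratio. A hedged TypeScript sketch of that shape (the type and function names are mine for illustration; Motn does not expose a public API like this):

```typescript
// Hypothetical request shape mirroring the options described above.
// These names are illustrative, not a real Motn API.
type VideoModel = "kling-3-standard" | "kling-3-pro" | "seedance-1.5" | "veo-3.1";
type AspectRatio = "16:9" | "9:16" | "1:1";

interface GenVideoRequest {
  model: VideoModel;
  prompt: string;
  durationSeconds: number; // clips run 4 to 12 seconds
  aspectRatio: AspectRatio;
}

// Validate the duration constraint before submitting.
function isValidRequest(req: GenVideoRequest): boolean {
  return req.durationSeconds >= 4 && req.durationSeconds <= 12;
}

const req: GenVideoRequest = {
  model: "kling-3-pro",
  prompt: "Slow dolly-in on a neon city street at dusk",
  durationSeconds: 8,
  aspectRatio: "9:16",
};
console.log(isValidRequest(req)); // true
```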
AI-generated images as standalone assets or animation inputs
Gen Image generates 2D assets — backgrounds, brand elements, illustrations, reference images — using Gemini, GPT-Image, or Recraft. Use them as standalone visuals, or connect them to a Code Motion or Gen Video node as a visual reference. The AI reads the image to match composition, color, and layout.
- Models: Gemini 3 Pro, GPT-Image 1.5, Recraft 4 (PNG), Recraft Vector (SVG)
- Upload your own images as assets — drag onto the canvas or upload directly
- Connect to Code Motion to use as layout or color reference
- Connect to Gen Video to animate the image
- Remove background with one click (5 tokens)
Context nodes — apply once, use everywhere
Context nodes don't generate output on their own. They feed information into generation nodes via connections. Set them up once at the start of a project and every animation on that canvas inherits the context automatically.
Brand node
Enter your website URL and Motn extracts your colors and fonts automatically. Or set them manually. Connect the Brand node to any generation node and every animation inherits your brand identity — once set, applied everywhere.
Style node
Choose a visual preset: Default, Apple, Rounded, Squared, or Pointillism. Connect to any prompt node to steer the aesthetic of the output without changing your prompt.
Script node
Plan your video scene by scene before generating. The Script node structures your narrative, then each scene can spawn a Gen Video or Code Motion node directly from the canvas.
Reel node
Connect multiple Code Motion results and Gen Video clips into a single Reel node. Preview the assembled sequence, reorder clips, and export as a single MP4.
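Assembling a Reel is conceptually a fold over clip durations: each clip starts where the previous one ends. A small sketch of that computation (the names are illustrative, not Motn's internals):

```typescript
// Illustrative: compute where each clip starts in the assembled reel.
interface Clip {
  name: string;
  durationSeconds: number;
}

function sequence(clips: Clip[]): { name: string; startsAt: number }[] {
  let cursor = 0;
  return clips.map((clip) => {
    const entry = { name: clip.name, startsAt: cursor };
    cursor += clip.durationSeconds;
    return entry;
  });
}

const reel = sequence([
  { name: "logo-reveal", durationSeconds: 3 },
  { name: "product-shot", durationSeconds: 8 },
  { name: "cta", durationSeconds: 4 },
]);
console.log(reel.map((c) => c.startsAt)); // [0, 3, 11]
```

Reordering clips on the canvas is then just reordering the input list; the offsets recompute from scratch on every preview.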
How nodes connect
Nodes connect via edges on the canvas — drag from the output of one node to the input of another. The connection passes context: an image becomes a visual reference for Gen Code Motion, and a Brand node makes its colors and fonts available everywhere it's connected.
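One way to picture this wiring is as a directed graph where a generation node inherits the merged data of every node wired into it. A hedged TypeScript sketch of that idea (this models the concept, not Motn's actual data model):

```typescript
// Conceptual model of canvas nodes and edges; not Motn's internals.
type NodeKind = "brand" | "style" | "image" | "code-motion" | "gen-video" | "reel";

interface CanvasNode {
  id: string;
  kind: NodeKind;
  data: Record<string, string>;
}

interface Edge {
  from: string; // source node id
  to: string;   // target node id
}

// Collect the context a generation node inherits from its incoming edges.
function contextFor(
  target: string,
  nodes: CanvasNode[],
  edges: Edge[],
): Record<string, string> {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  const ctx: Record<string, string> = {};
  for (const edge of edges) {
    if (edge.to !== target) continue;
    const source = byId.get(edge.from);
    if (source) Object.assign(ctx, source.data);
  }
  return ctx;
}

const nodes: CanvasNode[] = [
  { id: "brand-1", kind: "brand", data: { primaryColor: "#0A84FF", font: "Inter" } },
  { id: "style-1", kind: "style", data: { preset: "Rounded" } },
  { id: "motion-1", kind: "code-motion", data: {} },
];
const edges: Edge[] = [
  { from: "brand-1", to: "motion-1" },
  { from: "style-1", to: "motion-1" },
];

console.log(contextFor("motion-1", nodes, edges));
// { primaryColor: '#0A84FF', font: 'Inter', preset: 'Rounded' }
```

This is why context nodes scale so well: connecting one Brand node to five generation nodes means five generations all resolve the same colors and fonts, with no prompt repetition.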
Iterate in parallel — the Motn way
The canvas makes parallel creative work feel natural. Instead of generating one animation, reviewing it, tweaking the prompt, and waiting for another — you generate five variations at once. They all land on the canvas simultaneously. You zoom out, compare them, pick the direction you want, and iterate from there.
Every result node has its prompt attached to it. Click the prompt, change a word, generate again — the new result appears next to the old one. You never lose a version. You never have to re-describe your brand. The canvas holds your context across every generation.
For studios and agencies, this is a force multiplier. One canvas per client brief. Brand node connected to everything. Dozens of variations generated, compared, and narrowed down — all in a single session, all visible in one place.
Generate in parallel
Run multiple Code Motion or Gen Video nodes simultaneously. No waiting — results land on the canvas as they complete.
Compare side by side
Zoom out to see every result at once. Move nodes around, group related experiments, annotate with sticky notes.
Fork and iterate
Pick the version closest to what you want. Edit the prompt directly on that node, hit generate. The new result appears right next to the original.
Remix community templates
The Motn community publishes canvases and templates that anyone can remix. Browse the template gallery on the home page — hundreds of animations across kinetic typography, motion backgrounds, logo reveals, infographics, and more.
Hit Remix on any template and it forks into your own canvas. The node structure is preserved — you can see exactly how it was built, edit any prompt, swap the brand context for your own, and generate new variations from the existing structure. It's the fastest way to go from zero to something that actually looks good.
When you build something great, you can publish it back to the community. Every published canvas becomes a remixable template that other creators can learn from, fork, and build on. The community canvas is a living library of vibe motion starting points.
Export and ship
Standalone HTML
A self-contained file that runs in any browser. Embed it in a website, drop it in an email, or send it to a client. Zero dependencies.
React component
Raw JSX code you can drop directly into any Next.js or React project. The animation runs natively — no iframe, no wrapper.
MP4 video
A rendered video file for social media, presentations, app store previews, or anywhere video is expected.
Public canvas link
Share a live link to your canvas. Anyone with the link can view the animations without needing a Motn account.
All exports are watermark-free. Everything you make on Motn is yours to use commercially without attribution.
See it for yourself
The canvas is free to try — no account, no credit card. 500 tokens to start. Sign up and get 1,500 more.