MoviePipe.AI Workflow Guide

Everything you need to know about transforming your scripts into production-ready Higgsfield Cinema Studio prompt flows.

Getting Started

MoviePipe.AI turns any script into a step-by-step production pipeline for Higgsfield Cinema Studio. Here is how to begin.

1. Create Your Account

Sign up with your email. You will receive starting credits to begin your first project immediately.

2. Create a New Project

Navigate to New Project from the sidebar. Give your project a title that describes the story you are working on.

3. Paste Your Script

Paste the full text of your script into the source text area. MoviePipe supports shorts, features, reels, music videos, and any other narrative format. The word count will determine the extraction cost.

4. Choose a Style Preset

Select a visual style that defines the cinematic look for your entire project. This master style prompt is embedded into every scene, keyframe, and composition prompt. You can also write a custom style.

5. Extract Your Production Breakdown

Click Extract and the AI will analyze your script, identifying every character, prop, world, scene, audio cue, and dialogue line. These are organized into a full production tree that you will work through step by step.

Tip: Write detailed scripts with clear character descriptions, setting details, and action lines. The richer your script, the better the AI extraction will be.

Style Presets

The style preset you choose defines the cinematic look and feel for your entire project. It controls lighting, camera, color grading, film stock, and mood across all generated prompts.

Aesthetic GRWM Promos

Lifestyle "Get Ready With Me" content with seamless product integration. Warm vanity lighting, eye-level selfie framing, authentic yet aspirational tone. Ideal for beauty, skincare, and lifestyle brands.

Instant Result Before/Afters

High-impact visual proof of transformation that stops the scroll in under 3 seconds. Clean split-screen or sequential reveal with sharp detail and bold result callouts. Ideal for product, health, and beauty ads.

Stop/Start Authority Clips

High-retention educational hooks that debunk myths to build instant trust. Professional three-point lighting, bold motion text overlays, and pattern-interrupt cuts. Perfect for coaches, educators, and thought leaders.

Strategic Expert Deep-Dives

Value-driven advisory content that positions you as the go-to specialist. Polished professional environment, cinematic three-point lighting, broadcast-quality aesthetic. Ideal for consultants, advisors, and premium service brands.

On-the-Ground Social Proof (Street Interviews)

Authentic vox-pop street interviews that build trust through real people and real reactions. Natural daylight, handheld roving camera, candid close-ups. Ideal for brands looking to leverage genuine social proof at scale.

How it works: The master style prompt is injected into every scene-level prompt (keyframes, contact sheets, sequence grids, motion direction). Character reference prompts intentionally do not include the style to preserve clean references for Soul ID face locking -- unless you toggle on the Style Preview option.

The Production Pipeline

MoviePipe organizes your script into a structured production tree with a specific recommended order. Following this order ensures that each asset has the references it needs when you reach it.

Pre-Production

1. Characters

Generate all character identity kits first. This produces the Anchor Face, Profile Anchor, and Costume Rig reference images that Higgsfield Soul ID needs for face locking across all future scenes.

2. Props

Generate prop reference images next. These are used as image references when composing scenes that feature specific items.

3. Worlds / Sets

Generate world and set designs. These establish the environments your scenes take place in and serve as visual references for scene keyframes.

Production

4. Scenes

With characters, props, and worlds ready, generate scene prompts. Each scene references the entities it contains, so their uploaded assets become image references for composition and face locking.

5. Audio / Foley

Generate ambient audio, foley, and music cues. These prompts target Kling 3.0 Audio for sound design that matches your scenes.

6. Dialogue

Generate dialogue and lip-sync prompts last. These reference the scene videos for accurate mouth movement and timing.

Why this order matters: Each phase depends on assets from the previous phase. Characters are needed as Soul ID references in scenes. Worlds set the visual environment for keyframes. Scenes must exist before you can add audio and dialogue. Following this order means you always have the references you need.
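
To illustrate how this ordering plays out, here is a minimal TypeScript sketch of the phase order and a readiness check for a scene's cross-asset dependencies. The names (Phase, Entity, sceneIsReady) are illustrative assumptions, not part of MoviePipe's actual API.

    type Phase = "characters" | "props" | "worlds" | "scenes" | "audio" | "dialogue";

    // Recommended generation order: each phase supplies references to the next.
    const PHASE_ORDER: Phase[] = ["characters", "props", "worlds", "scenes", "audio", "dialogue"];

    type EntityStatus = "pending" | "generated" | "uploaded" | "invalidated";

    interface Entity {
      id: string;
      phase: Phase;
      status: EntityStatus;
    }

    // A scene is ready for optimal generation when every entity it references
    // (characters, worlds, props) has an uploaded asset to use as a reference.
    function sceneIsReady(referencedEntities: Entity[]): boolean {
      return referencedEntities.every((e) => e.status === "uploaded");
    }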

Character Workflow

Each character follows a three-step sequential workflow. Steps unlock one at a time -- you must finalize each step before the next becomes available. This ensures that later steps can reference the assets you have uploaded from earlier ones.

1. Anchor Face

The foundational reference image for this character. This generates a clean, studio-lit headshot on a neutral background -- specifically optimized for Higgsfield Soul ID face locking. The prompt produces a flat-lit reference with no cinematic styling applied.

Clean references by default: Character reference prompts are generated without your project style preset. This means flat studio lighting, neutral background, and no environmental or cinematic treatment. This is intentional -- Soul ID face locking works best with clean, unstylized reference images. The character's features, skin tone, and proportions are preserved without stylistic interference.

2. Profile Anchor

A 3/4 or profile-angle reference that gives Soul ID additional angles for consistent face generation. This step uses the uploaded Anchor Face as an image reference (with 0.8 weight) to maintain consistency.

Dependency: The Profile Anchor prompt instructs you to use your uploaded Anchor Face as an Image Reference in Nano Banana Pro. This is how the system maintains character consistency across angles.

3. Costume Rig

A full-body reference showing the character in their costume and wardrobe. This step also uses the Anchor Face as a reference (with 0.6 weight) for face consistency while giving more emphasis to the costume details.

Dependency: Uses the uploaded Anchor Face as an Image Reference for consistency. The lower reference weight (0.6 vs 0.8) gives the model more freedom for the full-body composition while still maintaining facial identity.

Sequential flow: Each step has a Finalize button. Click it to lock that step and unlock the next one. You can go back to a previous step if needed, but this will reset progress on later steps.

The workflow displays dependency chips showing what each step needs from previous steps and whether those dependencies have been uploaded.
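
To make the intra-asset dependencies concrete, here is a small hypothetical sketch of the identity kit as data. The step and field names are assumptions, while the reference weights (0.8 and 0.6) come from the steps above.

    interface CharacterStep {
      name: "anchorFace" | "profileAnchor" | "costumeRig";
      // What to attach as an Image Reference in Higgsfield, and at what weight.
      imageReference?: { source: "anchorFace"; weight: number };
    }

    const CHARACTER_IDENTITY_KIT: CharacterStep[] = [
      { name: "anchorFace" }, // clean, unstylized studio reference; no dependency
      { name: "profileAnchor", imageReference: { source: "anchorFace", weight: 0.8 } },
      { name: "costumeRig", imageReference: { source: "anchorFace", weight: 0.6 } },
    ];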

Style Preview toggle: While the default behavior generates clean, unstylized character references (optimal for Soul ID), you can toggle on Style Preview when generating to see what your character looks like rendered in your chosen project style. This applies the master style prompt to the character reference, producing a stylized version.

Use this for creative visualization and mood boarding. For actual production use in Higgsfield Cinema Studio, the clean unstylized references will give you the best character consistency across scenes via Soul ID face locking.

Important: Always upload the Anchor Face asset after generating it in Higgsfield. The Profile Anchor and Costume Rig steps depend on this uploaded image. If you skip the upload, the dependency chips will show as incomplete, and later scene prompts that reference this character will not have a face-lock reference available.

Scene Workflow

Scenes follow a four-step sequential workflow, taking you from a still keyframe to a fully directed video clip with camera motion.

1. Keyframe

The hero still image for this scene. Generated using your master style prompt with full cinematic treatment -- lighting, camera, composition, and mood. This is the visual anchor for everything that follows.

2. Contact Sheet

A multi-frame composition showing key moments from the scene. Uses the uploaded Keyframe as a composition reference. This helps plan the visual flow before committing to video generation.

3. Sequence Grid

A 9-frame storyboard-style grid showing the scene progression. Uses the Keyframe as a narrative anchor. This gives you a complete visual plan for the scene before generating the final video.

4. Motion Direction

The final video generation prompt with camera motion, duration, motion strength, and camera velocity settings. Uses the Keyframe as the source frame for video generation. This targets Kling 3.0 for image-to-video conversion.
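
As a rough illustration, the settings a Motion Direction prompt spells out could be modeled like the sketch below. The field names and the example values in the comments are assumptions based on the description above, not an exact schema.

    interface MotionDirection {
      cameraMotion: string;     // e.g. "slow dolly-in toward the window"
      durationSeconds: number;  // length of the generated clip
      motionStrength: number;   // how much movement Kling is allowed to introduce
      cameraVelocity: number;   // how fast the camera move plays out
      sourceFrame: "keyframe";  // the uploaded Keyframe drives image-to-video
    }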

Character Dependencies in Scenes

Each scene knows which characters, worlds, and props it references. These dependencies are shown as colored chips below the scene node in your production tree.

Green (uploaded): The referenced entity has an uploaded asset. Its image will be available as a reference when you generate this scene in Higgsfield.

Amber (prompt ready): The referenced entity has a generated prompt but the asset has not been uploaded from Higgsfield yet.

Red (pending): The referenced entity has not been generated yet. You should generate and upload it before working on this scene.
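
A minimal sketch of how a chip's color might follow from the referenced entity's status, assuming hypothetical names:

    type ChipColor = "green" | "amber" | "red";

    function chipColor(status: "pending" | "generated" | "uploaded"): ChipColor {
      switch (status) {
        case "uploaded":  return "green"; // asset uploaded and available as a reference
        case "generated": return "amber"; // prompt ready, asset not yet uploaded
        case "pending":   return "red";   // not generated yet; complete this first
      }
    }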

Face lock selection: When generating the Motion Direction step (Step 4), you can select which characters to include for Kling face lock. The scene prompt will reference those characters' uploaded Anchor Face images so that Kling 3.0 maintains character identity throughout the generated video clip.

Only characters with uploaded Anchor Face assets will be available for selection. This is another reason to complete your character pipeline before starting scenes.

Clicking dependency chips: Click any dependency chip to navigate to that entity in the production tree. If the entity has an uploaded asset, clicking it will open a preview modal instead.

Dependencies & Phase Banner

MoviePipe tracks dependencies between assets and shows your progress through the production pipeline with a phase banner and status indicators.

The Dependency System

Within each asset, steps depend on previous steps. Between assets, scenes depend on characters, worlds, and props. The system tracks these relationships automatically.

Intra-asset dependencies

Within a character, the Profile Anchor depends on the Anchor Face being uploaded. Within a scene, the Contact Sheet depends on the Keyframe. These are shown as dependency chips on each prompt card, indicating what to use as an image reference and at what weight.

Cross-asset dependencies

Scenes reference characters, worlds, and props. These are shown as colored chips on the scene node. The system tracks whether each referenced entity has been uploaded, helping you identify what to complete first.

Pipeline Progress Banner

At the top of your workspace, a pipeline progress banner shows the status of each production phase. Each phase indicator uses color to communicate its state:

Green: All assets in this phase have been generated and uploaded. The phase is complete and ready to support downstream phases.

Amber: Some assets in this phase have been started (prompts generated or partially uploaded), but the phase is not yet complete.

Gray: No assets in this phase have been started yet. This phase is pending.

Red: Some assets in this phase have been invalidated due to script changes. They need to be regenerated.
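
As a sketch, the rollup from individual node statuses to a phase color could look like the following; the function and type names are assumptions, and the rules mirror the descriptions above.

    type NodeStatus = "pending" | "generated" | "uploaded" | "invalidated";
    type PhaseColor = "green" | "amber" | "gray" | "red";

    function phaseColor(statuses: NodeStatus[]): PhaseColor {
      if (statuses.some((s) => s === "invalidated")) return "red";   // needs regeneration
      if (statuses.every((s) => s === "uploaded"))   return "green"; // phase complete
      if (statuses.every((s) => s === "pending"))    return "gray";  // nothing started
      return "amber";                                                // partially complete
    }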

Node status indicators: Each individual node in the tree also shows its status via an icon: pending, generated, uploaded, invalidated.

Uploading Assets

After generating a prompt and creating the asset in Higgsfield, upload it back into MoviePipe to track your progress and make it available as a reference for downstream prompts.

1. Generate in Higgsfield

Copy the prompt using the Copy Prompt button, click the tool link to open the right Higgsfield tool, and paste the prompt. If the prompt card shows dependency references, use those uploaded images as Image References in Higgsfield at the specified weight.

2. Upload back to MoviePipe

Click the Upload Asset button on the prompt card. You can upload in three ways:

  • File upload: Select an image, video, or audio file from your device. Files are uploaded to secure cloud storage.
  • URL link: Paste a direct URL to the asset hosted elsewhere. Useful for linking to Higgsfield outputs directly.
  • Clipboard paste: Copy an image and paste it directly into the upload dialog.

3. Finalize the step

After uploading, click Finalize to lock this step and move on to the next one. The node status will change from generated to uploaded.

Re-uploading assets: You can re-upload an asset at any time by clicking the Upload button again. The new asset replaces the old one. However, if downstream prompts have already been generated using the old asset as a reference, they may need to be regenerated for best results.

Re-uploading a character's Anchor Face, for example, will cause scenes that reference that character to show an invalidated status, signaling that they should be reviewed or regenerated.

Editing Your Script

You can edit your script at any time after extraction. MoviePipe uses AI-powered change impact analysis to understand what your edits affect and handle the cascade intelligently.

Source Editor

Open the source editor from your project workspace to modify the script text. The editor shows the current word count and indicates when you have unsaved changes. Make your edits, then click Analyze Changes to see the impact.

Change Impact Analysis

The AI analyzes the diff between your old and new script and produces a detailed impact report showing:

  • Directly affected entities: Characters, props, or worlds whose descriptions changed. Shows severity (minor tweak vs. major rework vs. removal) and how many prompts/assets would be invalidated.
  • Transitively affected scenes: Scenes that reference modified entities. These inherit invalidation from the entities they depend on.
  • New entities: Characters, props, or worlds that appear in the edited script but were not in the original. These will be added to your production tree.
  • Removed entities: Entities no longer present in the script. These will be archived (not deleted) so you can still access their generated prompts if needed.

How Invalidation Cascades Work

When you apply changes, the system updates your production tree in stages:

Direct invalidation

Changed entities have their descriptions updated and existing prompts marked as invalidated. The old prompts remain visible but are flagged with an amber Outdated badge. You can regenerate to update (regeneration costs credits).

Transitive invalidation

Scenes that reference changed entities also receive invalidation flags. Their prompts are marked as potentially outdated because their references have changed.

New entity insertion

Newly detected entities are added to the production tree in the correct position, ready for prompt generation.

Removed entity archival

Entities no longer in the script are moved to the archive tab, preserving all generated prompts and uploaded assets for reference.
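
Put together, the two invalidation stages amount to a pass over the production tree roughly like this sketch. The types and field names are illustrative, not MoviePipe's internal schema.

    interface TreeNode {
      id: string;
      status: "pending" | "generated" | "uploaded" | "invalidated";
      references: string[]; // ids of the characters, worlds, and props a scene uses
    }

    function applyInvalidation(tree: TreeNode[], changedIds: Set<string>): void {
      for (const node of tree) {
        // Stage 1: directly changed entities are flagged as invalidated.
        if (changedIds.has(node.id)) node.status = "invalidated";
        // Stage 2: scenes referencing a changed entity inherit the flag.
        else if (node.references.some((id) => changedIds.has(id))) node.status = "invalidated";
      }
    }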

Version history: Every time you apply changes, the previous version of your script is saved. You can view the full version history and see what changed at each point. Archived entities are tagged with the source version they were removed from.

Credits

MoviePipe uses a credit-based system. You start with free credits and spend them on script extraction and prompt generation.

Script Extraction Costs

Extraction cost depends on the word count of your script. The AI performs a multi-pass analysis to identify and decompose all entities.

Word Count              Credits
Under 500 words         10
500 - 1,000 words       16
1,000 - 2,500 words     22
2,500 - 5,000 words     34
5,000 - 10,000 words    46
10,000+ words           64
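
Read as a lookup, the tiers above amount to something like the sketch below. The handling of the exact tier boundaries is an assumption, since the table lists overlapping endpoints.

    function extractionCost(wordCount: number): number {
      if (wordCount < 500)    return 10;
      if (wordCount <= 1000)  return 16;
      if (wordCount <= 2500)  return 22;
      if (wordCount <= 5000)  return 34;
      if (wordCount <= 10000) return 46;
      return 64;              // 10,000+ words
    }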

Prompt Generation Costs

Generation cost varies by asset type. Characters cost more because they produce a full identity kit (3 steps). Generation costs are charged per L3 entity (the individual character, prop, scene, etc.).

Asset Type              Credits
Character Identity Kit  6
Prop Design             2
World / Set Design      4
Scene Video Prompt      8
Audio / Foley           2
Dialogue + Lip-Sync     2
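
The per-entity generation costs can similarly be read as a flat lookup; the key names here are illustrative, not MoviePipe identifiers.

    const GENERATION_COST_CREDITS = {
      characterIdentityKit: 6, // 3-step kit: Anchor Face, Profile Anchor, Costume Rig
      propDesign: 2,
      worldSetDesign: 4,
      sceneVideoPrompt: 8,
      audioFoley: 2,
      dialogueLipSync: 2,
    } as const;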

Regeneration: You can regenerate prompts for any entity at any time. Regeneration costs the same credits as the initial generation. Add optional advice text before regenerating to steer the AI toward specific changes (e.g., "warmer lighting" or "add rain").

Credit balance: Your current credit balance is always visible in the sidebar. If a generation would cost more credits than you have, the generate button will show the cost and be disabled. Failed generations are automatically refunded.

Tips for Best Results

Follow these practices to get the most out of MoviePipe and produce consistent, high-quality Higgsfield assets.

Follow the recommended production order

Characters first, then props, then worlds, then scenes, audio, and dialogue. Each phase builds on the previous one. Skipping ahead means your scenes will lack character references for face locking.

Upload assets at every step

After generating each asset in Higgsfield, upload it back to MoviePipe immediately. This feeds the dependency system and ensures downstream prompts reference the correct images. A complete tree with all assets uploaded gives you a full production bible.

Use dependency chips to track progress

Check the dependency chips on scene nodes to see which characters, worlds, and props still need to be completed. Click a chip to jump to that entity or preview its uploaded asset. A scene with all-green dependency chips is ready for optimal generation.

Keep character references clean

Use the default unstylized character references for production. The clean studio-lit references with neutral backgrounds give Soul ID the best material for consistent face locking across different scenes, lighting conditions, and camera angles. Save styled previews for mood boards.

Edit prompts before copying

Every generated prompt has an Edit button. Use it to tweak the wording, adjust camera angles, change lighting, or add details before copying to Higgsfield. Your edits are saved with a version history so you can always revert. Prompt editing is always free.

Use advice when regenerating

When regenerating prompts, add advice text to steer the result -- type specific instructions like "make the lighting warmer" or "emphasize the red coat" to guide the AI toward your vision.

Write detailed scripts

The more detail in your script, the better the extraction. Describe character appearances, costume details, setting atmosphere, lighting conditions, and specific actions. The AI uses all of this context to produce more accurate and richly detailed prompts.

Finalize steps to track progress

Use the Finalize button after each sequential step to mark it as done and unlock the next step. This also collapses the completed step, keeping your workspace tidy. You can always go back if you need to revisit a previous step.

Ready to start?

Create your first project and transform a script into a production pipeline.