AutoVFX AI: Text-Driven VFX Creator
AutoVFX AI is transforming video editing for filmmakers, producing highly realistic visual effects from nothing more than text instructions. The tool builds a model of the physical scene in a video, its geometry, appearance, and objects, and generates dynamic, realistic effects that respect that layout and structure.

Creating impressive visual effects (VFX) used to require hours of work, complex tools, and skilled artists. With AutoVFX, though, anyone can generate photorealistic VFX by describing their vision in plain language.
This system combines neural scene modeling, LLM-driven code generation, and physics-based simulation to produce realistic VFX without the need for technical expertise. In the authors' tests, AutoVFX not only generates effects more accurately and flexibly than competing tools, it also excels at preserving realistic physical detail.
Modern VFX, which blends real footage with computer-generated visuals, is used widely in films, ads, AR/VR, and simulations, but until now it has required advanced skills and pricey software. AutoVFX changes that by making VFX accessible to everyday users. Given a single video and simple instructions, it applies edits such as object placement, texture changes, and other effects, bridging the gap between 3D scene editing and everyday accessibility.
The core of AutoVFX lies in its detailed scene modeling, capturing depth, appearance, and objects from the video. With this foundation, users simply describe their desired effects, and AutoVFX generates a code sequence that applies these changes. The result? Free-viewpoint videos that look naturally altered according to the user’s instructions.
With AutoVFX, users can edit a video by simply describing the effects they want. The system’s LLM-powered code generation turns these text prompts into an executable program, producing a final video that shows the changes from any angle.
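To make that pipeline a bit more concrete, here is a minimal, purely illustrative sketch of how a text-to-edit loop like this could be wired up. None of the names below (reconstruct_scene, generate_edit_program, SceneModel, and so on) come from AutoVFX's actual codebase; they are hypothetical stand-ins for the steps described above: build a scene model, ask an LLM for an edit program, execute it, and render the result.

```python
"""Illustrative sketch of a text-driven VFX pipeline (not AutoVFX's real API).

Every function here is a hypothetical stub so the script runs end to end:
model the scene, turn a text prompt into an edit program via an LLM,
execute that program against the scene, then render the edited video.
"""

from dataclasses import dataclass, field


@dataclass
class SceneModel:
    """Placeholder for a reconstructed scene: geometry, appearance, objects."""
    objects: list[str] = field(default_factory=list)
    edits: list[str] = field(default_factory=list)

    def insert_object(self, name: str) -> None:
        self.objects.append(name)
        self.edits.append(f"insert {name}")

    def apply_physics(self, effect: str) -> None:
        self.edits.append(f"simulate {effect}")


def reconstruct_scene(video_path: str) -> SceneModel:
    # Stand-in for neural scene reconstruction (depth, appearance, objects).
    print(f"reconstructing scene from {video_path}")
    return SceneModel(objects=["road", "parked_car"])


def generate_edit_program(prompt: str, scene: SceneModel) -> str:
    # Stand-in for the LLM call: a real system would send the prompt plus a
    # description of the scene's editing API and get back executable code.
    print(f"asking LLM to turn prompt into code: {prompt!r}")
    return (
        "scene.insert_object('fire_hydrant')\n"
        "scene.apply_physics('water_spray')\n"
    )


def render(scene: SceneModel, camera: str = "original") -> None:
    # Stand-in for free-viewpoint rendering of the edited scene.
    print(f"rendering from {camera} viewpoint with edits: {scene.edits}")


if __name__ == "__main__":
    scene = reconstruct_scene("driveway_clip.mp4")
    program = generate_edit_program("add a fire hydrant that sprays water", scene)
    exec(program, {"scene": scene})  # the generated program edits the scene model
    render(scene)
    render(scene, camera="novel_view")
```

The real system is obviously far more involved, with neural reconstruction, physics simulation, and compositing behind each stub, but the shape of the loop is the same: prompt in, program out, program executed against a scene model, video rendered from any viewpoint.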
AutoVFX AI Availability
AutoVFX is available on GitHub under the MIT license (wow).
System Requirements:
- OS: Ubuntu 22.04.5 LTS
- GPU: NVIDIA GeForce RTX 4090
- Driver Version: 550
- CUDA Version: 12.4
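Those specs read more like the authors' tested configuration than hard minimums. If you want a quick sanity check that your own machine is in the same ballpark, PyTorch's standard CUDA introspection is enough; this snippet is a generic check, not part of AutoVFX's own tooling:

```python
# Generic environment check (not AutoVFX tooling): confirm a CUDA-capable GPU
# is visible and report the CUDA version the local PyTorch build targets.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))        # e.g. an RTX 4090, as listed above
    print("CUDA (PyTorch build):", torch.version.cuda)  # expect 12.x for the setup above
else:
    print("No CUDA device found; check your NVIDIA driver (the post lists driver 550).")
print("For the exact driver version, run `nvidia-smi` in a terminal.")
```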
Bottom Line
It's not at Hollywood level yet, but it seems pretty decent. It reminds me a bit of Pika's effects. It also reminds me of the early days of traditional filmmaking, when the special effects were quite ... basic. But they got the job done, and in the end they evolved into what we have now.
Every day I'm more impressed and excited by the tools we're getting, not to mention when they're released for free under an MIT license.
Published: Nov 13, 2024 at 1:53 PM