Haiper is a London-based AI text-to-video startup founded by former Google DeepMind researchers Yishu Miao and Ziyu Wang. The company has launched Haiper 1.5, an upgraded AI video generation model that challenges established names like RunwayML and Sora in the rapidly evolving text-to-video market.
Haiper 1.5 allows users to generate 8-second clips, a significant jump from the previous model's 2-4 second limit. The update addresses user feedback and opens up broader use cases.
The new model includes an integrated upscaler to improve video quality to 1080p in a single click. This feature particularly benefits users looking to improve existing video and image content without disrupting their workflow.
In addition to the video enhancements, Haiper is introducing an image generation model that lets users create images from text prompts. Users can then animate those images through the video generation tool, simplifying content creation.
Since emerging from stealth four months ago, Haiper has amassed over 1.5 million users. While it remains less funded than some competitors, that growing user base signals strong market traction.
Haiper plans to improve its perceptual foundation models to produce more true-to-life content that replicates the emotional and physical elements of reality. Future updates are expected to improve the consistency and quality of video generations, especially for longer clips.
Currently, features such as 8-second video generation and the upscaler are restricted to Pro plan users, priced at $24/month. The company plans to expand access through a credit system and to make the image model free later this month.