Launch HN: Eggnog (YC W24) – AI videos with consistent characters

Hi HN, we’re Jitesh and Sam, and we’re building Eggnog (https://www.eggnog.ai), an AI video platform where your characters look the same across every scene. Eggnog lets you create characters, use them in scene generations, and share them with others for remixing. Here are some videos made with Eggnog:

- Fanfiction: https://www.reddit.com/r/harrypotterfanfiction/comments/1bn6... (includes some sounds from outside Eggnog)

- Comedy: https://x.com/jitsvm/status/1771609353725919316?s=20

- Post-apocalyptic vibes: https://x.com/saucebook/status/1771212617601659279?s=20

We got into making funny AI videos over the last year, but felt annoyed that the characters always looked different in each scene. That made it harder to make cool videos and harder for our friends to follow the plot.

Diffusion models, like the ones that generate AI videos, start with random noise and then add detail. So the little things that make a character recognizable will almost always come out looking different across generations, no matter how many tricks you add to the prompt. For instance, if you want your wizard to have a red hat, it might be crooked in one generation and straight in the next.
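
To make that concrete, here is a minimal sketch using the open-source diffusers library (not our actual stack; the model and parameters are just examples). Running the same prompt with two different seeds starts the denoising from two different noise tensors, so the broad concept survives but the fine details don’t:

    import torch
    from diffusers import StableDiffusionPipeline

    # Illustrative only: any text-to-image diffusion pipeline behaves this way.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a wizard wearing a red hat, portrait, film still"
    for seed in (1, 2):
        # Each seed fixes a different starting noise tensor; denoising keeps the
        # concept ("wizard", "red hat") but not details like the hat's angle.
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"wizard_seed_{seed}.png")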

Eggnog allows users to make consistent characters by using Low-Rank Adaptation (LoRA). LoRA takes in a set of images of a character in different scenarios and uses those images to teach the model that character as a concept. We do this by taking a single prompt that a user writes for a character (e.g., an ancient Greek soldier with dark hair and a bushy beard) and turning it into a training set of images of the character in different poses, shot from different angles. Once the character is trained into the model, the user can then invoke that character concept in the prompt and get consistent generations about 80% of the time.
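
As a rough sketch of the prompt-to-training-set step (illustrative only; the pose and angle lists and the trigger token below are made up, not our production pipeline):

    # Expand one character prompt into training prompts that cover different
    # poses and camera angles. Each prompt is rendered with a base diffusion
    # model, and the resulting images are used to train a LoRA so that the
    # trigger token becomes a concept the model can reproduce on demand.
    character_prompt = "an ancient Greek soldier with dark hair and a bushy beard"
    trigger_token = "eggnog_soldier"  # hypothetical token that later invokes the character

    poses = ["standing at attention", "marching", "sitting by a campfire", "raising a spear"]
    angles = ["front view", "side profile", "three-quarter view", "low-angle shot"]

    training_prompts = [
        f"photo of {trigger_token}, {character_prompt}, {pose}, {angle}"
        for pose in poses
        for angle in angles
    ]

    # At generation time, the user just references the trained concept:
    scene_prompt = f"{trigger_token} charging across a battlefield at dawn"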

There is still a lot of room to make Eggnog generations more consistent and controllable. Generations sometimes come out with the wrong gender, miss key details of the costume, or fail in a long tail of other ways. We also often struggle to control the character’s exact body movement. We’re planning to address these cases with more optimization of the prompt that invokes the character concept, and by using new open-source video models that bake in 3D representations of humans.

The other fun thing about making characters with Eggnog is that you can share them. We already made one San Francisco “American Psycho” video that got over 100k views on Twitter (https://x.com/jitsvm/status/1766987382916894966?s=20). Then we expanded on the SF universe by making another video with the same character and a new friend for him (https://x.com/SamuelMGPlank/status/1767405784986718406?s=20). Eventually, you’ll be able to create and remix all the components of a good video—the characters, the costumes, the sets, and the sounds will all be part of a library of assets built up by the Eggnog community.

Eggnog is free to use, and you can try it out in the playground: https://www.eggnog.ai/playground. If you’re looking for some inspiration, you can try using the character “herm” waving a glowing wand or the character “lep” walking down a Dublin street. We’ll make money eventually by showing ads to viewers who come to Eggnog to watch AI videos.

We’re really excited to see all the fun videos and characters people are making with Eggnog, and looking forward to hearing what you all think!

93 points | by samplank2 29 days ago

23 comments

  • fwip 29 days ago
    How much ad time does it take to pay back the (compute) cost of making AI videos? I feel like other video hosting sites like YouTube, Vimeo, etc. are already struggling to serve enough ads to turn a profit (without so many that they drive viewers away), and they don't have to pay anything for the videos.
    • samplank2 29 days ago
      Good question. It is expensive right now to generate AI videos. We think the costs will come down over the next couple years. It's hard to know exactly where they will land and how that will compare to the value of ads at that time.
  • mdwelsh 29 days ago
    This looks cool! Your website really should feature a couple of sample videos, though, since without finding the links in your post here it is hard to tell what the final results look like.
    • samplank2 29 days ago
      Thanks, this is a good suggestion.
  • frankdenbow 29 days ago
    I use Stable Diffusion and LoRAs frequently, so I'm wondering what you hope to offer beyond that. I would love to see something that develops 3D models from these generations - is that the area you're looking to explore? More thoughts here: https://www.youtube.com/watch?v=18eIfWb0wug
    • samplank2 29 days ago
      Thanks for making a video. We will give it a watch. The thing we want to offer right now is the growing library of character (and set etc) assets that can be reused and remixed very easily. We're definitely interested in incorporating models that do 3d representations. We're constantly experimenting with the latest open source stuff.
  • gajnadsgjoas 29 days ago
    Cool stuff! I'm a bit surprised it's a YC startup; can you share your thought process or a roadmap? It sounds like a cool, fun project, and I assume you want to target creators, similar to Midjourney? Another question: if I want to have two characters in the scene, I assume that's a bit more difficult with LoRA?
    • samplank2 29 days ago
      Thanks! The near-term roadmap is to build out better control over what's happening in the scene. That's the biggest thing holding us and some of the early users back from quickly making the videos we want. Often, we need to re-run a bunch of times until we get a scene we like. The long-term roadmap is to build out a really fun viewing experience, where it's easy to remix characters from the videos you're watching. We do want Eggnog to be used by creators like those who use Midjourney. Two characters in the scene is more difficult -- it's not something you can do with Eggnog currently, but we will build that eventually.
  • sandis 29 days ago
    Prompt: A film scene of Young Woman 3 doing backflips https://streamable.com/8hm4vu
    • millgrove 29 days ago
      Love it. One of the things we're focused on is getting better anatomical movement out of existing models. These models are great with animations like waves, cars, trains, flames, etc., but they struggle with people outside of basic movements. We're optimistic about future models, and we still think there are interesting, funny, exciting stories you can tell with what's available today!
    • samplank2 29 days ago
      Haha looks like there is some room for improvement in the gymnastic capabilities of our characters.
    • jejeyyy77 29 days ago
      nailed it
      • jaggederest 29 days ago
        Something out of the Matrix, filtered through the mind of David Cronenberg
  • carlio 29 days ago
    It takes too long to iterate on a character design.

    For more explanation: I've been playing around with Stable Diffusion on my laptop recently; I have an RTX 4070 with 8GB of dedicated VRAM, so it's not nothing.

    The main problem I have is that it takes a lot of iteration on a prompt, at lower resolution and fewer sampling steps, before I know that I'll get roughly what I want.

    I tried making a character in Eggnog, and before I could be sure what I was getting, it told me it'd take 15-20 minutes to be ready. I worry that this will just make me wait a long time for a character that isn't what I want, and starting again too many times will put me off.

    The iteration and feedback loop needs to be tighter, in my opinion, or people will get unsatisfactory results and be unwilling to go back and fine-tune.
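
    Roughly the kind of fast preview loop I mean, sketched with the diffusers library (the step count and resolution are just examples):

        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        # Cheap preview: fewer sampling steps and a smaller resolution, just to
        # check composition before committing to a full-quality render.
        preview = pipe(
            "an ancient Greek soldier, dark hair, bushy beard",
            num_inference_steps=12,
            height=384,
            width=384,
        ).images[0]
        preview.save("preview.png")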

    • samplank2 29 days ago
      Thanks, this is helpful feedback. We're definitely frustrated with how long it takes to load a character. We'll see what we can do to give a better sense of what the character will look like before the training job kicks off. We should be able to show some intermediate results.
  • digging 28 days ago
    So, I'm not surprised the site would get slow after hitting the front page, but why is the character creation form taking minutes to update? There's a text box - it should take less than 1 second for me to click the text box and type a word, even if submission takes a while. What is going on there?
    • samplank2 28 days ago
      Sorry it's lagging so much. Looking into it. Just so I understand the issue, when you try to add text on the character creation page to describe the character, nothing loads in the text box? Or is it something else?
      • digging 28 days ago
        Well, every interaction with the page takes 30s+ as if it's making a request to a server under heavy load. Clicking a button does nothing, clicking into the text box doesn't even show a cursor.
        • samplank2 28 days ago
          Ok thanks for letting us know. That's not a good experience so we'll work on making the UX snappier.
  • calin_balea 21 days ago
    Just saw something similar the other day but for images. Looks interesting. At this stage it would be a great tool to storyboard a video quickly. Who’s your target user persona?
    • samplank2 20 days ago
      Anyone who is into making AI videos. There are a bunch of people on r/aivideo making exciting things. It could definitely also be used to storyboard and experiment with ideas for a full-scale movie.
  • imacomputer 29 days ago
    > turning it into a training set of images of the character in different poses, shot from different angles.

    Something is wrong; it looks like it is just using multiple layers of images/video and cutting back and forth between predetermined combinations of layers...

    I would polish the idea a bit more before publishing it, or people may think (quoting your Reddit link), "Looks like stinky doo doo. Don’t quit your day job"

    • samplank2 29 days ago
      Fair enough that you don't like the videos Eggnog makes. We think you can make fun stuff with what Eggnog is capable of today and will be adding more control over the outputs.
      • imacomputer 29 days ago
        Nah, I think the videos are neat, but I have niche interests in art like datamoshing, algorithmic art and whatever... What I am saying is, if you are trying to pull a Midjourney with video that can be sold as a product (I see you are YC W24), wait a little while and make it more appealing before going public, so people get the right idea about your product and come back to it for more. Right now it seems a little gimmicky.

        > We think you can make fun stuff ...

        To be brutally honest, don't say "We think you can". It does not matter if you think people will like it. Do people like it? If the average Joe sits down and plays around with the model, will they have a good time?

        I'm not trying to be abrasive or rude here, just honest.

  • EwanG 29 days ago
    Pricing? I'd like to use something like this to do an animated version of one of my books, but I'd want to be able to add music and obviously voices. I don't need lip sync since I'm looking for more of a video of a scene, perhaps with a character or two in that scene but not actually trying to fully animate them.

    To do that I figure I would need about 5-6 minutes of video per chapter (perhaps less with some looping), and the ability to DL the video or otherwise export it (assuming I can upload my other media into your Composite tool) to put on YT or the like. And would probably want to lose the watermark in that case as well.

    • samplank2 29 days ago
      Eggnog is free to use. And that is a cool use case! You can download the videos right now, but they do have a watermark. In the near future, you will be able to host the videos directly on Eggnog without a watermark.
      • EwanG 29 days ago
        OK, but if I want to put it on YT will I be able to? Or is the model that you are trying to drive traffic to your site rather than provide video generation?

        If so, then what is the plan (if any) to monetize for creators?

        • samplank2 29 days ago
          Yes, you can put the videos on YouTube. The video will have the Eggnog watermark, but once you download it, you can do whatever you want with it. In fact, we hope you put it on YT, TikTok, or anywhere else. We do want to drive traffic to our site, but in the early days, we don't have much traffic to offer to creators, so we want creators to share where their audience already is.
  • sidcool 29 days ago
    Congrats on launching. Will play around.
    • samplank2 29 days ago
      Thank you! Feel free to share anything you make.
  • dbmikus 29 days ago
    Congrats on the launch! Image consistency is a big thing to solve. I don't make videos, but sometimes I try to get OpenAI to tweak an AI-generated graphic, and it ends up rendering it completely differently.

    Good luck with Eggnog! I think AI generated media is really cool.

  • nextworddev 29 days ago
    This is an important problem. Just curious though - how do you plan on competing against OpenAI etc? Or will you guys just compete on developer experience and integrations, and wrap around popular OSS / OpenAI models?
    • samplank2 29 days ago
      We're not going to compete on having the best model -- at least that's the plan. We'll wrap around open-source models or whatever we can license. We will compete on having the best library of assets (characters, sets, sounds, etc.) for putting videos together. We like this plan because (1) most people don't like a blank canvas where they have to create everything from scratch; they'd rather be able to assemble things together, and (2) people like participating in trends, and they will be able to riff on a popular character super easily.
      • ttcbj 29 days ago
        This seems really wise. I often want to start with something that looks decent, and feed my content into it, as opposed to trying to develop everything from scratch.

        Eventually, you should probably mention this on your website: a broad range of starter projects, high-quality assets, etc.

        • samplank2 29 days ago
          Good call, we should highlight this
      • nextworddev 29 days ago
        I see, I guess the Canva playbook could work here.
        • samplank2 29 days ago
          Yeah, Canva is a cool inspiration: a huge library of very good assets.
  • EZ-Cheeze 29 days ago
    A killer feature would be uploading your own pictures to be motion-ified
    • samplank2 29 days ago
      Good idea, we can definitely add that
  • jiratickets 29 days ago
    I have to ask... why call it Eggnog?
    • codetrotter 29 days ago
      There are only two hard problems in computer science.

      Cache invalidation, naming things, and off by one errors.

    • aio2 29 days ago
      My question too.
    • samplank2 27 days ago
      It's fun to say ¯\_(ツ)_/¯
  • thwarted 29 days ago
    The fanfiction and comedy ones look like Xtranormal (circa 2010) + Clutch Cargo (circa 1959).
  • ghostbrainalpha 29 days ago
    Any chance you will add the ability to upload images into your character creator?
    • samplank2 29 days ago
      Yeah we should support that soon. What did you have in mind?
      • ghostbrainalpha 28 days ago
        I've already spent HOURS in Midjourney generating hundreds of images just to get the perfect cyborg robot elf. I just want to use that work and upload the best of those images into the character creator, rather than hoping the prompt gets me something close to the character I want and waiting 20 minutes to find out.
  • SlightGenius 28 days ago
    Looks good! It will be a great addition to the Faceless video series!
  • barfbagginus 29 days ago
    I'm concerned about the ethics of not publishing the work as open source, given that diffusion models are unethically sourced - trained from non-consensually scraped data. There is also the question of whether models can be ethically sourced - even if we buy the rights to a huge collection of images, there is the contention that many of the original creators would never have anticipated or accepted AI use of their works. So virtually no matter what, there will be outstanding ethical claims against such AIs.

    Given all that, I believe that an adequate compensation for unethical sourcing in AI - the absolute best we can do for humanity - is this:

    1. We admit that the works are unethically sourced, which means they could be banned in the future, and may require switching to an "ethical piracy" distribution model

    2. We ensure that the models produced this way belong to the entirety of humanity by default, by distributing them for free under copyleft licenses like the GPLv3

    3. We abstain from monetizing or otherwise drawing revenue or profit from unethically sourced models

    4. We assemble zero-cost service models for the AI, drawing on volunteers to publicly pool compute

    Case Study: Whisper

    I follow these guidelines in my own work on OpenAI's Whisper. Whisper can do much good for humanity, as it lets people freely transcribe and translate speech, meeting a core human communication need.

    But Whisper needs many improvements before it is a freely available service making an impact in millions of people's lives, for zero cost. To that end, I'm building extensions that let people pool CPUs and other cheap hardware to put together independent and free transcription services based on Whisper. I'm building rapid customization models to help people with accents. And I'm building real-time feedback and correction models to enhance the accuracy of the naive model.

    Yes, even here we are faced with the possibility of future bans, especially given the unethically sourced nature of Whisper's training set. That means I have chosen to abstain from ever collecting revenue through my Whisper work, and I consider my Whisper work to be my contribution toward the legacy of all of humanity.

    I encourage you to consider the upsides of this form of engagement.

    Objection: How Do I Feed Myself?

    Yes, this model requires you to have a fully ethical alternative business that you run. I support myself on about 25k-60k of CAD design revenue through my semi-automated CAD reverse engineering service, which is based on fully ethical automation models that I built by hand and calibrated ethically on my own work stream.

    Objection: This does not protect us from bans.

    No, adopting this model does not prevent the law from banning models like yours in the future. It can even enable legal audits of your code, and get you seen as a "flight risk" - someone likely to continue illegally distributing models for ethical reasons, even after they have been banned for legal reasons. I have no good answer here yet.

    Closing Statement: Choose the ethical high ground. It is Based, and that will guide you.

    I encourage you to see this strategy as "Ethically Based". I define Ethical Basedness as a form of true ethical high ground that can guide you towards the best contribution you can make towards the shared knowledge of humanity.

    Given the extreme ethical quandaries of attempting to monetize a proprietary service on top of unethical AI, you stand to lose that higher ethical authority, and be cut off from its guidance. But you have an opportunity, now and in the future, to pick it back up.

    Good luck and keep up the good work. I hope I have moved your views, rather than merely agitating your feelings. I hope you will embrace the need for a free and openly shared AI legacy for all humanity.

    Overcome the ethical quandaries of AI sourcing by giving it away.

    Provide freely for the core human needs of creativity and visual storytelling.

    Consider it.

  • sumanyusharma 29 days ago
    Love this; grats on the launch Sam + Jitesh!
  • i_like_pie1 29 days ago
    let's go! great team. loved using eggnog so far