22 comments

  • CobrastanJorji 1078 days ago
    I think I'm missing something. I get the Sketch-to-Photo synthesis that this is based on. It's really weird, neat stuff. But as a layman, I'm having trouble seeing the difference between the result of this anime-to-sketch synthesis and what I'd expect to get out of a simple edge detection. Is the difference that it's more clever about which details to ignore?
    • fsloth 1078 days ago
      I only dabble in graphics, but generally simple edge detection needs fairly uniform tonality and no textures in the input to work well. Look, for example, at how in the more "sketchy" examples the linework that "looks right" is extracted from quite noisy input. Also, in the top example with the houses, the contrast difference that gets extracted to linework is lower than in the character areas.

      So for the flat-shaded images with explicit black outlines, yes, there's likely not much difference from edge detection. But when the image has lots of different contrasts and tonalities, this looks much more impressive.

    • mattigames 1078 days ago
      Not exactly; here is what the most common edge detection methods produce (the last one is Anime2Sketch): https://i.imgur.com/nt0D1ef.png
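      For anyone who wants to reproduce a comparison like that, here's a minimal baseline sketch with OpenCV; the file names are placeholders and the thresholds are hand-picked:

        # Classical edge-detection baselines for comparison (not Anime2Sketch).
        import cv2

        img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

        # Canny with hand-picked hysteresis thresholds; noisy anime frames
        # usually need per-image tuning.
        canny = cv2.Canny(img, 100, 200)

        # Sobel gradient magnitude as a second baseline.
        sx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
        sy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
        sobel = cv2.convertScaleAbs(cv2.magnitude(sx, sy))

        # Invert so lines come out dark on white, like the line-art output.
        cv2.imwrite("canny.png", 255 - canny)
        cv2.imwrite("sobel.png", 255 - sobel)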
    • greatgoat420 1078 days ago
      I was actually going to ask if someone had done a comparison with edge detection.
      • loosetypes 1078 days ago
        Also not familiar but presumably a temporal aspect weighs in, so whether something is a meaningful edge isn’t strictly dependent only on the content within a specific frame?
        • edge17 1078 days ago
          My high-level guess: doing things the traditional way, you would do some linear cost minimization to decide what counts as an edge over time. The neural network can handle nonlinearities in that optimization, so you can get better results for some set of inputs (in this case, some class of anime images).
  • wj 1078 days ago
    This looks like a great tool to generate some Pokémon and Beyblade coloring pages for my kids. We went through everything in Google image results many moons ago.
    • swsieber 1078 days ago
      I really want to see how this performs on Octonauts stills
  • fireattack 1078 days ago
    Just a heads up: you should use a higher quality setting (or better, just use PNG) for the output.

    The default Image.save quality is so low that the JPEG artifacts are more prominent than the line art itself.

    L91 @ data.py: image_pil.save(image_path, format='PNG')
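    For reference, both fixes with Pillow (the paths here are placeholders; Pillow's JPEG default is quality=75):

      from PIL import Image

      image_pil = Image.open("output.jpg")        # placeholder input path
      # Either raise the JPEG quality from the default of 75...
      image_pil.save("output_hq.jpg", quality=95)
      # ...or sidestep lossy compression entirely, as suggested above:
      image_pil.save("output.png", format="PNG")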

  • forgotpwd16 1078 days ago
    Then someone can use https://github.com/taivu1998/GANime to recolor them.
    • slazaro 1078 days ago
      Run them iteratively, one after the other, to see whether after a while the results no longer resemble the originals. Like those experiments that translated a text back and forth between languages to create gibberish.
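      A rough sketch of that loop, with to_sketch and recolor as made-up stand-ins for the two models' inference:

        # Hypothetical round-trip experiment; both helpers are stand-ins
        # (identity functions here, so the loop actually runs).
        from PIL import Image

        def to_sketch(img):
            return img  # wrap Anime2Sketch inference here

        def recolor(img):
            return img  # wrap GANime inference here

        img = Image.open("original.png")  # placeholder path
        for i in range(10):
            img = recolor(to_sketch(img))
            img.save(f"roundtrip_{i:02d}.png")  # watch the drift accumulate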
  • gibolt 1078 days ago
    This feels like a tool with lots of business cases.

    Studios may be able to accelerate digitalization and colorization.

    The ability to convert stills to a fillable outline, or repurpose them for labels/marketing/branded coloring books (or apps), could be worth some money to those with a large content library.

  • Cloudef 1078 days ago
    Looks more like shaded art to unshaded line art rather than sketch. Sketches are usually way messier, like a blueprint for the final product.
  • dagmx 1078 days ago
    This is actually pretty impressive, and I can see it being really useful if it can generate clean line art from animation roughs.

    It would be really interesting to see this in OpenToonz or the like.

  • 0x426577617265 1078 days ago
    Semi off-topic: is there a tool to turn a picture into a drawing? I sometimes see websites where people have created an avatar from their headshot that looks 'toonish'.
  • swframe2 1078 days ago
    Also check out U-2-Net (https://github.com/xuebinqin/U-2-Net); there is a variant that can turn images into line drawings.
  • throw_m239339 1078 days ago
    Interesting. I wonder how it fares with 3D renderings? I'm a Blender user and, unfortunately, Blender's "Toon Shading" capabilities are not very good compared to, say, Cinema 4D's.
  • mushufasa 1078 days ago
    How is this technically different from photoshop filters? https://design.tutsplus.com/tutorials/sketch-photoshop-effec...
    • asutekku 1078 days ago
      Photoshop just detects edges and thus ends up detecting, for example, both sides of a drawn line or of a change in shading. This does not appear to show such artefacts.
    • karmasimida 1078 days ago
      This project's output is much cleaner; the picture in the link you posted is noisy.
  • pjgalbraith 1078 days ago
    I wonder if this can be used for comic book inking. It looks like they have an example of that.

    Typically the workflow is pencil drawing -> cleaned-up ink drawing (Japanese animation uses a similar process too). If this can speed up that step, it could save a lot of time.

  • ekianjo 1078 days ago
    Does not work as well as advertised :) I think the author clearly cherry-picked their examples.
    • mkesper 1078 days ago
      Can you provide some counter-examples, perhaps as issues in the repo?
      • ekianjo 1078 days ago
        Yup, will do that. I found several anime pictures that did not work remotely as well as the examples.
        • xyk 1070 days ago
          Could we see the pictures that did not work well?
  • jakearmitage 1077 days ago
    Does anyone know a similar model that transforms normal images into Western Comic Book style? I've seen it a lot for Anime/Manga, but never for that classic style of 90's comic books.
  • ZephyrBlu 1078 days ago
    I'm not super familiar with deep learning, but based on the fact that this is effectively extracting edges, and on the ConvTranspose2d layers, I'm guessing it's some sort of convolutional neural net?
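    For the curious, a toy PyTorch sketch of the kind of conv encoder-decoder those layers hint at; this is illustrative only, not the repo's actual network:

      # Toy conv encoder-decoder; not Anime2Sketch's actual architecture.
      import torch
      import torch.nn as nn

      class TinySketchNet(nn.Module):
          def __init__(self):
              super().__init__()
              self.encoder = nn.Sequential(   # downsample 256 -> 64
                  nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
              )
              self.decoder = nn.Sequential(   # upsample 64 -> 256
                  nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                  nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
              )

          def forward(self, x):               # RGB in, one "line art" channel out
              return self.decoder(self.encoder(x))

      out = TinySketchNet()(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256)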
  • A4ET8a8uTh0 1078 days ago
    It is pretty neat. Off-topic, what anime is that test from?
    • bla15e 1078 days ago
      Vinland Saga.
      • Cerium 1078 days ago
        Which I found by observing that the file name is "vinland_saga.gif" when I went to try putting the image in a search engine.
      • bavell 1078 days ago
        Fantastic anime, I highly recommend it!
  • androng 1078 days ago
    What could we use this for? The immediate thing that comes to mind is making a coloring book. I'm wondering if I could use it to make something original.
  • zakki 1078 days ago
    To use this program, do I need a good GPU in my computer, or do I just need to install the required software?
    • lostgame 1078 days ago
      I believe that this will not be too GPU-intensive, but that will of course depend on the input resolution of the video.
    • knicholes 1078 days ago
      The training is what requires a good GPU. For inference, a CPU should be fine.
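      In PyTorch the CPU fallback is one line (a generic sketch, not the repo's exact loading code):

        import torch

        # Use the GPU when available, otherwise fall back to CPU for inference.
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        model = torch.nn.Conv2d(3, 1, 3, padding=1).to(device).eval()  # stand-in model
        with torch.no_grad():
            out = model(torch.randn(1, 3, 256, 256, device=device))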
  • offtop5 1078 days ago
    Could someone post a Google Colab notebook with this?

    I think this would be pretty cool if it supported any picture or video.
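    Until someone does, here's a rough Colab cell; the clone URL should be right, but the flags are assumptions, so check the repo README for the real entry point:

      # Hypothetical Colab setup; script name and flags may differ from the repo.
      !git clone https://github.com/Mukosame/Anime2Sketch
      %cd Anime2Sketch
      !pip install -r requirements.txt   # assuming the repo ships one
      !python3 test.py --dataroot test_samples --output_dir results  # hypothetical flags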

  • istorical 1077 days ago
    Can anyone show what happens if you feed it a regular video or photo?
  • kalal 1078 days ago
    If you find this interesting, you may also want to look at the Canny edge detector.
  • interestica 1078 days ago
    Anything preventing this from running with Python 3 on Windows?