While the results certainly are pretty, I don't see how learning has taken place.
This cartoon filter still has the same issues as previous attempts, which are:
- omitting borders that are semantically important but have a low color gradient
- not collapsing small areas into lines
For the first issue, look at the example image of a picnic on a white background. A human cartoonist would most likely draw a full outline around the white spoon, because it is important for conveying what kind of object this is supposed to be. With this algorithm, the spoon gets partially merged with the white background, and without the reference photo I would have a hard time identifying it as a spoon.
For the second issue, look at the photo of the Asian girl with the patterned skirt. A human cartoonist would most likely observe the regular grid pattern and replace it with thin lines, thereby communicating that all of it is one and the same thing. This algorithm, on the other hand, treats each tile of the pattern individually, making it look more like a crystal or crumpled foil.
I personally also prefer white-box algorithms, but there's no denying that creating a cartoon requires a lot of prior knowledge about which features to retain as important and which features to abstract away. As such, I see the real challenge in somehow producing good saliency training data for millions of images. I mean ideally you would want the 5 year video stream plus eye tracking data of a baby starting to grow up...
> For the second issue, look at the photo of the Asian girl with the patterned skirt. A human cartoonist would most likely observe the regular grid pattern and replace it with thin lines, thereby communicating that all of it is one and the same thing. This algorithm, on the other hand, treats each tile of the pattern individually, making it look more like a crystal or crumpled foil.
Something similar is going on with the photo of the merlion statue. The entire body is covered in scales, and a cartoonist would definitely represent that. But (because of the lighting?) the algorithm renders the tail smooth instead of scaled.
If I understand this right, they've built a 'cartoonify' filter to convert real-world images into cartoon format, and then trained a neural net based on these image pairs? If so, what does the neural net add?
Sorta. It breaks the images (from anime?) down into three representations - surface, structure, and details - and also extracts each of those representations from the generated images. Those representations are then cross-checked by the adversarial networks, which improves the GAN's anime-esque generation ability.
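To make the three-representation idea concrete, here is a minimal sketch in NumPy. The function names, the box-blur stand-in for the guided filter, and the quantization stand-in for superpixel flattening are my own assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def surface(img, k=5):
    """Box-blurred copy: a crude stand-in for the guided-filter
    'surface' representation (smooth colors without texture)."""
    h, w, _ = img.shape
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def structure(img, levels=8):
    """Color-quantized copy: a stand-in for the superpixel-based
    'structure' representation (flat, celluloid-style regions)."""
    q = np.floor(img * levels) / levels + 1.0 / (2 * levels)
    return np.clip(q, 0.0, 1.0)

def texture(img):
    """Single-channel luminance: a stand-in for the 'texture/details'
    representation, which drops color so the comparison focuses
    on strokes and edges."""
    return img @ np.array([0.299, 0.587, 0.114])

# During training, each representation of the generator's output is
# compared (via separate adversarial losses) against the same
# representation extracted from real cartoon images.
rng = np.random.default_rng(0)
photo = rng.random((32, 32, 3))
print(surface(photo).shape, structure(photo).shape, texture(photo).shape)
```

The point of splitting the image this way is that each discriminator only has to judge one aspect (smooth color, flat regions, or line work) instead of the whole image at once.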
The big question: Does copyright law apply to cartoon versions of copyrighted images? Transformative work can circumvent copyright law, but are you allowed to feed copyrighted images into an AI algorithm to create cartooned versions? Who owns the copyright at that point?
I don't see any reason why copyright would not apply. If you take someone else's photograph, change the white balance, and start selling copies of it, that's classic copyright infringement.
As for input data to models, my intuition is that they would be tainted by the copyright of the input images. It's just that nobody has a bot for scanning AI models for their photographs, so you don't see a lot of litigation or DMCA takedown requests here. It's easy when someone just uploads your photo to their website. It's hard when the photo contributes some weights to a neural network.
My main takeaway is that copyright is very imperfect. It doesn't allow for any unsolicited enhancements of someone else's work.
In principle, by the way copyright works, this isn't really an issue: if an image is generated independently, then copyright does not apply. However, in the event that you wind up with something similar, that independence becomes something you would need to prove (and people have lost cases because they could not). On the other hand, such large-scale generation of images will likely be treated differently by the courts than other means of production.
Indeed, people often seem to assume that the outputs of GANs are free of the copyright of the training data, but this has not been tested in court, and I get the impression that legal opinion leans towards the view that the copyright does carry over, which makes the copyright status of most GANs (and in fact most neural nets) a pretty huge mess.
It's hard to know what would have happened if Shepard Fairey hadn't gotten caught destroying documents, and we don't know the specifics of all of the settlement, but I think it's fair to say he lost that case.
To a certain extent, PowerPoint's "Artistic Effects" option under "Picture Format" allows for similar effects. The paintbrush option is like this cartoon style. If I had the time, I'd definitely choose this cartoon program, as PowerPoint's effect is not as clean.
The terms "white-box model" and "black-box model" mentioned in the paper seem to be standard terms from the ML literature, though I didn't understand precisely what they mean here. I know the metaphor: just as we can see inside a white box but not inside a black box, we can observe the inner workings of a white-box model but not those of a black-box one. Similar terminology is used in other domains such as system design and testing.
The conclusion here, broadly, is that white-box is better than black-box for this application.
Is there a modern terminology that avoids concerns about racial bias in language?
You're finding racial issues where there are none. The master/slave terminology used in technology comes from a human practice, hence the sensitivity around its usage. The terms black-box and white-box never had anything to do with human usage or racial meaning. Let's not create history that doesn't exist and go overboard. They are just colors.
> "The conclusion here, broadly, is that white-box is better than black-box for this application."
Well I think this is incorrect. Generally speaking, a black-box exposes an interface that you can use, and a white-box is something you can internally modify.
One is not necessarily better than the other. Using a white-box approach can create tight coupling between components in a system, as you could be relying on internal mechanisms, whereas a black-box approach enforces boundaries in your system, which is generally good. Also, it's often better to test systems with a black-box mentality to ensure security, resilience, etc.
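A toy example of the coupling point (my own illustration, with a made-up `Counter` class): the same component can be checked through its public interface (black-box) or by reaching into its internals (white-box), and only the former survives refactoring.

```python
class Counter:
    """Counts events. `total()` is the public interface; `_log` is an
    internal detail that could change at any time."""
    def __init__(self):
        self._log = []          # internal mechanism

    def record(self, event):
        self._log.append(event)

    def total(self):
        return len(self._log)

c = Counter()
c.record("click")
c.record("click")

# Black-box check: only uses the public interface, so it keeps working
# even if _log is later replaced by a plain integer counter.
assert c.total() == 2

# White-box check: reaches into internals, tightly coupling the test to
# the implementation - it breaks on any internal refactor.
assert c._log == ["click", "click"]
print("both checks pass")
```

Whether that coupling is a cost or a benefit depends on whether you want the freedom to refactor or the ability to inspect and modify the internals.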
It doesn't inherently represent good/bad; however, more modern terms like opaque/transparent might be more accurate and less controversial, I suppose.
The 'box' is generally considered to be a system that completes a process on input to generate output. White and black are references to the state of illumination of the inner workings of the machine.
White-box is not better; it just defeats the mystery/freedom of the abstraction.
That said, given that we describe races as white and black, I'm happy to eventually pick up a new convention if it makes folks uncomfortable. I say eventually because the current trend is to consider most of these shallow accommodations as 'performative', and I have no interest in being involved in any action that could be construed as being done for social credit where there is little or no actual value. If I felt, or had evidence, that it actually did help, it would be a different story.