Pro tip for people with SVGs that have an image inside: you can pull that base64 image out of the SVG, convert it to an image file, compress it (TinyPNG, pngcrush, etc.), then convert it back to base64 and put it back in the SVG. We automated this with a Slack bot, but it's a step most SVG optimization scripts overlook.
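As a rough sketch of that round trip (not our actual bot), the extract/recompress/re-embed step fits in a few lines of Python, with the compressor left pluggable so you can shell out to pngcrush or call the TinyPNG API:

```python
import base64
import re

# Matches the base64 payload of an embedded PNG data URI.
DATA_URI = re.compile(r'data:image/png;base64,([A-Za-z0-9+/=]+)')

def recompress_embedded_pngs(svg_text, compress):
    """Pull each base64 PNG out of the SVG markup, run it through
    `compress` (a bytes -> bytes function, e.g. a wrapper around
    pngcrush or the TinyPNG API), and splice the result back in."""
    def replace(match):
        raw = base64.b64decode(match.group(1))
        smaller = compress(raw)
        encoded = base64.b64encode(smaller).decode('ascii')
        return 'data:image/png;base64,' + encoded
    return DATA_URI.sub(replace, svg_text)
```

The `compress` hook is the part our bot wired up to an external optimizer; with an identity function it's a no-op round trip, which is a handy sanity check.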
Somewhat related: a while ago I realized that the SVGs draw.io generates (exports) have an interesting property. If you include an SVG inside such a drawing and export the drawing to SVG, the inner SVG is embedded as base64. Unfortunately, some programs (e.g., Microsoft Office) don't support this. A quick fix is to expand the inner SVG inline into the big SVG. I wrote a quick tool that does this for you (maybe I should consider open sourcing it).
Anyway, I bet doing this and then minimizing the SVG could result in more savings?
My only problem is that when I paste SVG directly from Illustrator, svgomg finds an invalid character at the end and refuses to use it. I have to paste the SVG into an editor, remove some invisible character at the end, and then paste it into svgomg.
Be aware that running svgo is a lossy operation. One thing it does is round the numbers it finds to a certain precision. When I last worked with it, it also did some things I didn't like, such as messing with viewBoxes: I found it broke the sizing of SVGs embedded via <img> tags in certain browsers. It has been a while since I used it, though.
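To picture why the rounding is lossy, here's a toy version of that step (loosely modeled on what svgo's cleanupNumericValues plugin does, not its actual code):

```python
import re

def round_svg_numbers(svg_text, precision=2):
    """Round every decimal number in the markup to `precision`
    digits -- lossy, since the dropped digits are gone for good."""
    def repl(match):
        return format(round(float(match.group(0)), precision), 'g')
    return re.sub(r'-?\d+\.\d+', repl, svg_text)
```

So `M1.23456 7.89999` in a path becomes `M1.23 7.9`; usually invisible at render size, but not reversible.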
I was curious about this. It looks like it has numerous individual plugins that provide the actual functionality. Do you know if some of the plugins are lossless, while others lossy? It may be possible to pick and choose the lossless ones.
I absolutely love ImageOptim for manually optimizing SVGs, JPGs, PNGs, and GIFs. It's a simple GUI that wraps some excellent compression libs (including SVGO), which I find much faster to use for one-off image optimization than the CLI. It does a clever job of trying multiple compressors and picking whichever one creates the smallest file for each image. You can run it in a lossless or lossy mode and tune various compression options. It's free (gratis) and open source, though the GUI runs on macOS.
I make a habit of running images through ImageOptim before sending them via Messages or Slack, as a courtesy to people on slow connections.
Bonus round: When it comes to optimizing SVGs, nothing beats hand-editing the SVG source. I've done a fair bit of that optimizing the artwork for https://www.lunchboxsessions.com and it's been totally worth the effort. Taking a detailed 10k SVG down to 500 bytes with no perceptual difference is a huge win when you have hundreds of those SVGs on the page and you serve people who still use dialup.
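A made-up before/after (not an actual lunchboxsessions asset) of the kind of thing hand-editing buys you:

```
<!-- Editor export: wrapper group, redundant precision, style attrs -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100" viewBox="0 0 100 100">
  <g transform="translate(0,0)">
    <circle cx="50.000000" cy="50.000000" r="40.000000"
            style="fill:#ff0000;fill-opacity:1;stroke:none"/>
  </g>
</svg>

<!-- Hand-edited equivalent, visually identical -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="40" fill="red"/>
</svg>
```

Automated tools get you part of the way, but only a human can notice that a whole group or transform is structurally unnecessary.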
Would be more compelling with the sizes of that neural net across the different passes. As it is, this is like comparing two algorithms on a small dataset without accounting for how their behavior changes as the data grows.
A related (but less automated) advantage: if you're building React components that render SVG, this minified output will be much shorter and hopefully easier to read through when figuring out where to insert your own colours/child components.
I love seeing efforts to make software more efficient, so I enjoyed this post, but at the same time your own blog has a 434kb 512x512 png favicon. I'm not at all a web developer and so I don't know if there is a technical reason for that, but it seems a bit absurd in juxtaposition to your goal to reduce a 2kb svg to 100 bytes.
I've seen this on a number of other blogs, which seem to focus on minimal design. Can anyone explain what the need is for these enormous favicons?
512x512 is specifically for the browser splash screen shown when the site is added to a phone's home screen as a launcher. Chrome requires both a 192x192 and a 512x512 icon before it will show an "Add to Home Screen" prompt. I imagine most minimalist sites you browse are expecting people to add the website as an icon/app launcher on their phones. Not sure how common an experience that is for users; I sure as hell wouldn't, but maybe other people might.
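Concretely, those sizes are declared in the site's web app manifest; a minimal one (file names hypothetical) looks something like:

```
{
  "name": "Example Blog",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The 512x512 entry is what Chrome scales for the splash screen, which is why blogs end up shipping an icon far larger than anything a tab ever displays.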
It was intended for web apps, I believe, but is usable by sites that aren't web apps.
I ran the examples in the article through gzip. The long one is 789 bytes compressed, and the short one is 266 bytes compressed.
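If you want to reproduce this kind of measurement, Python's stdlib gzip module is enough (the strings below are stand-ins, not the article's actual SVGs):

```python
import gzip

def gzip_size(text):
    """Bytes after gzip compression -- roughly what a server with
    gzip transfer encoding would actually send over the wire."""
    return len(gzip.compress(text.encode('utf-8')))

# Stand-ins: editor exports carry lots of repetitive metadata,
# which compresses well but still costs bytes versus omitting it.
verbose = ('<svg xmlns="http://www.w3.org/2000/svg">'
           + '<!-- sodipodi/inkscape editor state -->' * 40
           + '</svg>')
minified = '<svg xmlns="http://www.w3.org/2000/svg"></svg>'
```

Comparing `gzip_size(verbose)` and `gzip_size(minified)` shows both points at once: compression shrinks the cruft a lot, but the stripped file still wins.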
The reality is that while text compresses well, gzip doesn't know which data is relevant; it keeps all of it. For this example, the original long SVG contains things like Inkscape configuration (the window height, the current layer name, etc.), while the shorter one omits all that. Not emitting that information will always be more efficient than emitting it and compressing it.
Of course, but even the minified versions should net some benefit from compression. Most SVGs probably fit in 7-bit ASCII, which by itself should lead to improvements via (Huffman?) coding. Importantly, as was pointed out, I say “also,” not “instead.”
SVG optimization seems like a good choice for delivery, but oftentimes I find myself using SVG in a development context. Having human readable, annotated SVG files is useful when looking through previous revisions. In terms of compressed filesize: there's no serious practical gain.
It's analogous to JS minification -- Clients on the production site should get a bare minimum and the VCS should have the readable form with all the digits of precision and whatever comments, non-functional groups acting as layers in the editing software, etc.
It does a fantastic job of defining the problem and walking the reader through the process of solving it. The language is so succinct and clear that even if the reader had never heard of SVGs before, they would understand this post.
I started by downvoting your comment, and I realized I was choosing the cowardly way of reacting. I am saying this just to illustrate my own shallow reading of your comment at first and how I was avoiding actually being helpful.
The short answer is that 99% of the time no purpose is being served by that cruft except to be bloat. Minifiers remove that cruft and optimize for the intended audience, which is machines.
The longer answer is that in optimizing for machines, you're also optimizing for humans in the long run. Anyone stuck with a 2G connection, dial-up, or Bluetooth tethered device, or a new ultra low powered device on a different CPU architecture, will appreciate not having to operate solely in the world of gigabit+ connections, 8GB mobile devices, or 8 core hyperfast CPUs. Realize that such use cases still exist, and for very legitimate, not truly edge cases.
Think of pages with hundreds or thousands of vector images. Not everyone can afford to run a Blink-powered browser or have CPU cycles for days. Every little bit helps.
In general I'd recommend the opposite: don't minify anything unless necessary; prettify it instead. Every file in a plaintext-based format should be optimized for human readability unless you actually need to make it hard to read (e.g., you are developing a proprietary product not meant to be open source) or your bandwidth is so severely limited that every single byte matters.
Nevertheless, the kind of minification demonstrated in the example (removing Inkscape bloat) feels really great and actually makes the file more human-readable. It reminds me of the HTML files generated by MS Word and other WYSIWYG editors, which included tons of bloat that actually harmed rendering (to say nothing of human readability).
I'm not sure I agree in the usual case. This is sort of what sourcemaps are for. Provide a link in a comment in the resource source to a prettified or at least non-minified version at a separate URL. The typical user won't need to download the extra bytes, but you can still access the original if you want.