For some reason, Figure 2 doesn't extend beyond the few-training-samples regime. I think we're left to assume that MothNet underperforms the other techniques in the many-samples regime. Is there something I'm missing?
(paper author) You are correct that the 'natural' moth maxes out after about 20 samples/class. It is not yet clear whether this is an intrinsic limitation of the architecture (the competitive pressure on an insect favors fast, rough learning), or whether it is just an artifact of the parameters of the natural moth. For example, slowing the Hebbian growth parameters would allow the system to respond to more training samples, which should give better test-set accuracy. We're still running experiments.
Papers are published in journals, conference proceedings, etc., and those venues carry the date of the issue ("Transactions for the International Symposium on Computational Yak Shaving 2018"). A paper might have been written in 2017 but published in 2018, which means that when it gets cited, it will be cited as "ABC et al., 2018".
Papers without a date are usually preprints, or published independently (e.g. on the author's website) while the author awaits formal publication at some point.
> The first four digits of the paper's number tells you the month and year it was first published.
I think "published" should be "submitted" there. (I suppose that one could argue for regarding submission to the arXiv as publication, especially given the presence of overlay journals—but probably that's not what you meant.)
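For reference, here is a minimal sketch of pulling the submission date out of a modern arXiv identifier (the `YYMM.NNNNN` scheme in use since April 2007; the identifiers used as arguments below are made-up examples). Note that the first two digits are the year and the next two the month, in that order:

```python
import re

def arxiv_submission_date(arxiv_id):
    """Extract (year, month) of first submission from a modern arXiv
    identifier of the form YYMM.NNNNN (scheme used since April 2007).

    The first two digits are the year, the next two the month --
    so an ID starting '1802.' was first submitted in Feb 2018.
    """
    m = re.match(r"(?:arXiv:)?(\d{2})(\d{2})\.\d{4,5}", arxiv_id)
    if m is None:
        raise ValueError(f"not a modern arXiv identifier: {arxiv_id!r}")
    year, month = 2000 + int(m.group(1)), int(m.group(2))
    if not 1 <= month <= 12:
        raise ValueError(f"invalid month in identifier: {arxiv_id!r}")
    return year, month

# e.g. arxiv_submission_date("1802.00001") -> (2018, 2)
```

As the comment notes, this date reflects when the preprint was *submitted* to the arXiv, not when (or whether) the paper was formally published.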