5 comments

  • carbocation 2235 days ago
    For some reason, Figure 2 doesn't continue out beyond the few-training-samples regime. Therefore, I think we're left to assume that MothNet underperforms the other techniques in the many-samples regime. Is there something I'm missing?
    • chestervonwinch 2235 days ago
      It is ambiguous: they don't say whether they ran the experiments with more than 20 samples per class.
      • cdelahunt 2235 days ago
        (paper author) You are correct that the 'natural' moth maxes out after about 20 samples/class. It is not yet clear whether this is an intrinsic limitation of the architecture (the competitive pressure on an insect is for fast and rough learning), or whether it is just an artifact of the parameters of the natural moth. For example, slowing the Hebbian growth parameters would allow the system to respond to more training samples, which should give better test-set accuracy. We're still running experiments.
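For readers unfamiliar with the mechanism being discussed, here is a minimal sketch of a Hebbian weight update with a growth-rate parameter. This is a generic textbook rule, not the authors' actual model; the parameter values are illustrative. The point is that a smaller growth rate `eta` makes each training sample move the weights less, leaving headroom for the system to keep responding to later samples:

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.01, decay=0.001):
    # Classic Hebbian rule with decay: a weight grows when its pre-synaptic
    # input x and post-synaptic output y are active together.  Shrinking eta
    # slows growth, so weights saturate after more training samples.
    return w + eta * np.outer(y, x) - decay * w

w = np.zeros((3, 4))                  # 3 output units, 4 input units
x = np.array([1.0, 0.0, 1.0, 0.0])    # pre-synaptic activity
y = np.array([0.5, 1.0, 0.0])         # post-synaptic activity
w = hebbian_update(w, x, y)
```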
        • fpgaminer 2235 days ago
          It sounds like you ran experiments on the BNN with >20 samples/class. Why were those data points not included in Figure 2?
  • lootsauce 2235 days ago
    Yet to read this paper, but wondering if the authors are familiar with the work of Dasgupta et al. on a fly olfactory model for locality-sensitive hashing?

    https://www.biorxiv.org/content/biorxiv/early/2017/08/25/180...

    I have been contemplating the relationship between random projections and compressive sensing since reading it, and I'm curious to read this paper for any insights on compressive sensing.
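For anyone who hasn't read the Dasgupta et al. paper: the core idea is a sparse binary random projection into a *higher*-dimensional space, followed by winner-take-all. Here is a rough sketch of that scheme; the class name and all parameter values (400 projections, 10% sparsity, top-16 winners) are illustrative choices, not the paper's exact settings:

```python
import numpy as np

class FlyHash:
    """Sketch of a fly-olfaction-style LSH: sparse binary random
    projection to many "Kenyon cells", then winner-take-all."""

    def __init__(self, d, n_proj=400, sparsity=0.1, top_k=16, seed=0):
        rng = np.random.default_rng(seed)
        # Each projection unit samples a random ~10% subset of the inputs.
        self.M = rng.random((n_proj, d)) < sparsity
        self.top_k = top_k

    def hash(self, x):
        a = self.M @ x                          # expand into n_proj dims
        h = np.zeros(a.shape[0], dtype=bool)
        h[np.argsort(a)[-self.top_k:]] = True   # keep only the top_k winners
        return h
```

Because the projection is fixed, similar inputs tend to activate overlapping sets of winners, so Hamming overlap between hashes approximates input similarity.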

  • memebox3v 2235 days ago
    This is absolutely brilliant. I have been looking for a way into an understanding of learning within biological neural nets. I don't suppose there is source code around?
  • robinduckett 2235 days ago
    So are we learning that brains and neurons are general purpose computation goo that can be applied to many different areas of signal processing yet?
    • bbctol 2235 days ago
      Kind of feel like we already know that brains can do general purpose computation...
  • fovc 2235 days ago
    Needs a [pdf] flag
    • Aardwolf 2235 days ago
      Why is it, by the way, that papers have the author names at the top but not the date? Dates are added to papers in reference lists, so why not include the date on the paper itself too?

      This one happens to have "Workshop track - ICLR 2018" at the top, so it has some dating, but most don't even have that.

      • GuiA 2235 days ago
        Papers are published in journal/conference proceedings/etc. that will have the date of the issue ("Transactions for the International Symposium on Computational Yak Shaving 2018"). The paper might have been written in 2017, but published in 2018, which means that when it gets cited it will be as "ABC et al., 2018".

        Papers without a date are usually preprints, or published independently (e.g. on the author's website) while expecting actual publication at some point.

      • tsomctl 2235 days ago
        That's the nice thing about arXiv. The first four digits of the paper's number tell you the month and year it was first published.
        • JadeNB 2235 days ago
          > The first four digits of the paper's number tell you the month and year it was first published.

          I think "published" should be "submitted" there. (I suppose that one could argue for regarding submission to the arXiv as publication, especially given the presence of overlay journals—but probably that's not what you meant.)
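Concretely, new-style arXiv identifiers follow a `YYMM.NNNNN` format, where the first two digits are the year and the next two the month of the first submitted version. A tiny helper illustrates the decoding (the example ID here is made up for illustration, not this paper's actual identifier):

```python
def arxiv_yymm(arxiv_id):
    # New-style arXiv IDs look like "1802.05667": "18" is the year
    # (2018) and "02" is the month (February) of the first submission.
    yy, mm = int(arxiv_id[:2]), int(arxiv_id[2:4])
    return 2000 + yy, mm

print(arxiv_yymm("1802.05667"))  # -> (2018, 2)
```

Note this scheme only covers IDs issued since April 2007; older papers use the `subject-class/YYMMNNN` style instead.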

    • 1001101 2235 days ago
      meta: this could be automated.