Unified: More than a specification, Nimble is a product. We strongly discourage developers to (re-)implement Nimble’s spec to prevent environmental fragmentation issues observed with similar projects in the past. We encourage developers to leverage the single unified Nimble library, and create high-quality bindings to other languages as needed.
A spec. There is no substitute for it. Code isn't a spec. A spec is documentation that describes a format and/or operation. The absence of it is laziness and/or arrogance.
The outrage! The vague feelings of people I don't know taking things out of context may survive. If not, I'll bring them soup.
There have been some unfortunately popular projects over the past 20 years, like Puppet, that tried to sell the lack of documentation as a positive, or that just had incomplete and outdated documentation, like Chef. ;] On the flip side, IETF RFCs, Python PEPs, and Rust RFCs are examples where specifications made things clearer and open to everyone. It's not enough to open-source a thing without context or comments; the learning curve must be accounted for, to make it as self-service and understandable as possible for people who have domain knowledge but have never used this particular thing before. Out-of-the-box-experience UX. Code may be a communication of intended system behavior, but it is often too low-level and spread around to be relied upon as a singular, compact reference.
Why not instead publish a test suite of inputs and expected pass/fail results for validating implementations?
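To make that concrete, here's a minimal sketch of such a conformance harness in Python. Everything here is invented for illustration: the manifest layout, the hex-encoded test vectors, and the `decode` callable stand in for whatever a real format project would publish.

```python
import json
from pathlib import Path

# Hypothetical conformance harness: each test vector pairs an input
# (hex-encoded bytes) with the expected outcome ("pass" or "fail")
# for a conforming reader. The manifest schema is an assumption.

def run_conformance_suite(manifest_path, decode):
    """Run every vector in the manifest against a candidate decoder.

    Returns the names of vectors where the decoder's behavior
    disagreed with the expected outcome (empty list = conformant).
    """
    manifest = json.loads(Path(manifest_path).read_text())
    failures = []
    for vector in manifest["vectors"]:
        data = bytes.fromhex(vector["input_hex"])
        try:
            decode(data)
            outcome = "pass"
        except ValueError:
            outcome = "fail"
        if outcome != vector["expected"]:
            failures.append(vector["name"])
    return failures
```

Any implementation, in any language, could be validated against the same published vectors, which gets you much of the benefit of multiple implementations without a prose spec.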
It's not "more than a specification" if a single implementation is to be used -- then it's a 'spec' defined by the implementation, because any idiosyncrasies of that implementation become de facto specification.
Although "wide" data is touted as an optimization target for Nimble, how well does it fare against "normal"(?) data, i.e. with just a few to tens of columns?
It seems to be optimized for ML workloads where sequential scan is the access pattern, so it wouldn't be suitable for analytical workloads yet, though they are planning to work on that.
Lance dev here. We are working on a new version of our format[1] as well. We are watching Nimble too. If they are interested in solving our use cases then that is less work for us.
At the moment it is not clear that is the case. However, it is too early to tell. Our biggest concerns are:
- Good integration with object storage
- Ability to write multi-modal data without exhausting memory
- Support for fast point-lookups (with the option of cranking up the amount of metadata for richer lookup structures that will be cached in RAM)
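The last bullet can be sketched as trading RAM-resident metadata for fast row access. This is a toy illustration, not Lance's or Nimble's actual layout: the on-disk encoding here (length-prefixed rows) and all function names are invented.

```python
import struct

# Toy layout: each row is a 4-byte little-endian length prefix
# followed by that many bytes of payload.

def encode_rows(rows):
    """Serialize a list of byte strings into the toy layout."""
    return b"".join(struct.pack("<I", len(r)) + r for r in rows)

def build_offset_index(buf):
    """Scan once, caching each row's byte offset in RAM.

    This is the "crank up the metadata" step: more memory spent
    on the index, faster point-lookups later.
    """
    offsets, pos = [], 0
    while pos < len(buf):
        (n,) = struct.unpack_from("<I", buf, pos)
        offsets.append(pos)
        pos += 4 + n
    return offsets

def point_lookup(buf, offsets, row):
    """Fetch row `row` in O(1) using the cached offset index."""
    pos = offsets[row]
    (n,) = struct.unpack_from("<I", buf, pos)
    return buf[pos + 4:pos + 4 + n]
```

A richer lookup structure (e.g. a per-column zone map or a B-tree over a key column) is the same trade at a larger scale.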
Neither Nimble nor Lance is intended to replace Parquet/Arrow. Parquet and Arrow are designed to be spread throughout a solution as universal interchange formats. E.g. you will often see them throughout ETL pipelines so that different components can transfer data (even if it isn't a ton of data). With Arrow and Parquet, interoperability is a higher priority than performance (though these formats are fast as well). They are developed slowly, via consensus, as they should be.
Nimble and Lance are designed for "search nodes" / "scan nodes", which are meant to sit in front of a large stockpile of data and access it efficiently. There are typically only a few such components (usually just a single one) in a solution (e.g. the database). Performance is the primary goal (though we do attempt to document things clearly, should others wish to learn from or build upon them). I'd advise anyone building a search node or scan node to make the file format a configurable choice hidden behind some kind of interface.
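A minimal sketch of that "file format behind an interface" advice, assuming a Python scan node; the class names, registry, and method signature are my own invention, not any real Lance or Nimble API.

```python
from abc import ABC, abstractmethod

class ColumnarReader(ABC):
    """The scan node programs against this, never a concrete format."""

    @abstractmethod
    def read_columns(self, path, columns):
        """Return the requested columns as a dict of name -> list."""

class ParquetReader(ColumnarReader):
    def read_columns(self, path, columns):
        # A real implementation would wrap e.g. pyarrow.parquet here.
        raise NotImplementedError("wrap a Parquet library here")

class InMemoryReader(ColumnarReader):
    """Trivial stand-in used for tests: 'reads' from a dict of tables."""

    def __init__(self, tables):
        self.tables = tables

    def read_columns(self, path, columns):
        table = self.tables[path]
        return {c: table[c] for c in columns}

def make_reader(fmt, **kwargs):
    # The scan node picks the implementation from configuration,
    # so swapping Parquet for Nimble or Lance is a config change.
    registry = {"parquet": ParquetReader, "memory": InMemoryReader}
    return registry[fmt](**kwargs)
```

With this shape, benchmarking Nimble vs. Lance vs. Parquet for your workload becomes a matter of adding one adapter class each.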
Yes, I'd be curious to know how much better it is than them. From my limited understanding, they share many of the advantages that Nimble boasts of, so while I can appreciate that both would beat legacy formats, it's not clear how close these two are.
But I wonder: if I were choosing a new file format today, what would I choose? Nimble is maybe too new, and there is too little experience with it (outside Meta).
Is there a good overview of all the available options anywhere, with a fair comparison?
Well, Parquet seems to be so widely supported that it's my default pick, unless you can explain why it's not the right fit.
Though I'll say: if your primary use case is "higher-dimensional arrays", none of Parquet etc. is likely to be a good fit -- these are columnar formats where each column has a separate name, datatype, etc., not formats for multi-dimensional arrays of numbers. That's a different problem. A Parquet column can be a list of arrays, but there's no special handling of matrices.
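The usual workaround when you must put matrices in a plain columnar format is to flatten each array into a variable-length list column and carry the shape alongside it (in a sibling column or in file metadata). A pure-Python sketch of that round trip, with illustrative names:

```python
# Columnar formats like Parquet have no native matrix type, so a
# 2-D array is commonly stored as (flat values, shape) and rebuilt
# on read. The format then sees only a list column plus two ints.

def flatten_matrix(matrix):
    """Flatten a 2-D list-of-lists into (flat_values, (rows, cols))."""
    rows = len(matrix)
    cols = len(matrix[0]) if matrix else 0
    flat = [v for row in matrix for v in row]
    return flat, (rows, cols)

def unflatten_matrix(flat, shape):
    """Rebuild the 2-D list-of-lists from the stored column + shape."""
    rows, cols = shape
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]
```

It works, but you lose any format-level smarts about the array structure, which is why array-native formats (HDF, Zarr, etc.) exist for that problem.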
I would prefer to write a parser with zero dependencies.
Call me a greybeard; I want multiple implementations and a spec.
On the other hand, it's an open-source project; I hope someone can contribute a PR with the spec.
Also, are there any preliminary benchmarks?
[1] https://blog.lancedb.com/lance-v2/
We still use HDF (https://en.wikipedia.org/wiki/Hierarchical_Data_Format). Some overviews I found, but older:
https://www.hopsworks.ai/post/guide-to-file-formats-for-mach...
https://iopscience.iop.org/article/10.1088/1742-6596/1085/3/...
https://github.com/pangeo-data/pangeo/issues/285