I'm done being excited by anything out of Intel that isn't a desktop/laptop CPU.
I've been burned personally and professionally with every single Intel IoT device I've touched.
Remember the Edison platform? Huge promises and possibilities that turned into fatally flawed silicon that took Intel 2 years to admit.
The Compute Stick?
The whole Atom ecosystem?
I was so excited by the RealSense cameras that we got a bunch of them, and at first we thought we must have gotten a bad batch. The hardware was so bad compared to similar-cost machine vision cameras, it was astounding.
The SDK was great at first glance: a really easy OOBE for multi-camera setup and PCL processing. Then you discover over a few weeks how flaky everything is, how brittle the SDK and drivers are (like every other Intel dev platform, it seems), and after spending thousands of dollars on hardware and hundreds of hours of dev time, you finally chuck it all in a bin and say "I will never buy Intel crap again" for the third time.
Hopefully this lidar device will buck that trend, but I doubt it. They keep making random IoT hardware platforms with seemingly no long-term strategy and no path to commercial implementation.
Similar experience with RealSense, except that I need to run them in USB 2 mode due to GPS interference from USB 3. Their USB 2 interface isn't just USB 3 at lower bandwidth; no, it has a bunch of completely unrelated bugs that only appear on USB 2. Moreover, even if you turn down the framerate to make sure it isn't a bandwidth issue, the point cloud quality on USB 2 is worse than on USB 3. No idea why, but we've binned our RealSense cameras too.
My main problem with non-"classic" Intel products is the shit user experience that comes with them: I don't want to use Ubuntu 16.04, just package software in a maintainable way, ffs.
When Intel touts a device "On Linux!(tm)", I have to lower my already meager expectations. So long as you expect the drivers to be very thin open-source wrappers around very brittle proprietary blobs, you won't be unpleasantly surprised.
I feel you. If it makes you feel any better, they don't only take a dump on small customers. Their Axxia business also screws its customers badly. Worst vendor in the industry. We are talking about multi-million-dollar businesses.
I’ve met someone who is constantly asking me “why haven’t you tried RealSense?” and you just confirmed my suspicions. When the first RealSense products came out, they only supported Windows. This is madness for a robotics-focused product. Finally my friend tells me they now support Linux. But for me the damage has already been done. They have proven that they don’t understand me as a robotics engineer, and you’ve just confirmed that for me. So I stick with trying to use high-resolution cameras and structure-from-motion algorithms to understand the world. No need for a specific proprietary piece of hardware. Since I’m mostly doing research into what is possible, I prefer this non-proprietary approach.
This little lidar looks nice but the last thing I need is another weird kernel module and some closed source library to support my hardware. No thanks.
I mean... your friend's not wrong, if by "Linux" she/he means "Ubuntu 16.04 LTS" with the caveats:
* disable Secure Boot, xor create your own EFI signing key pair, get friendly with `mokutil`, and pray your firmware's UEFI implementation supports that complicated custom KEK
* Ubuntu 18.04 is nominally supported, but you must forcibly install at least one 16.04 package they couldn't be bothered to rebuild for the latest stable release of their chosen distro, or `patchelf` the shared object and, again, pray
* accept that the debugging symbols they provide still bear the source paths from the Jenkins instance that packaged them
* Oh, yeah: sometimes the device is detected as USB 2.1. That's fun when it happens two hours into a calibration run
They're good if all you need is a flaky proof of concept. It sounds to me like you require something better.
Damn that sounds awful. Yeah, I don’t need another headache. I can imagine times where the hardware is the right tool for the job, but with all those hoops you have to jump through to make it work I’d avoid that at all costs.
Intel is a big company, but this also has the background context of their fierce competition (which, arguably, they are losing) with other chip makers in the enthusiast space. Maybe that would explain a push for more green-field R&D projects.
Why wouldn't they stick with this business? It's a complementary good to their vision processing products. Some developers may be attracted to the option of buying it together with something like the Neural Compute Stick.
I bought one. Bad thermals and weirdly flaky wifi, thanks to the overheating. To get it to run the fan fast enough you had to manually edit the startup scripts. Intel quietly killed it a year later, and it appears to be totally unsupported.
Previously, the RealSense stereo depth cameras suffered from a lot of depth noise compared to ToF cameras like the Kinect. I had to use a lot of filtering, which limited the usable frame rate. Hopefully this new lidar cam has less noise.
The RealSense API is pretty good; I found it much easier to use than the Kinect API.
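For what it's worth, the kind of filtering I mean can be sketched in a few lines of plain Python, nothing SDK-specific: a per-pixel median over a short window of depth frames. The window length below is a made-up example; the latency it adds is exactly why the usable frame rate suffers.

```python
def temporal_median(frames):
    """Per-pixel median over a window of depth frames.

    frames: list of equally sized 2D arrays (lists of rows).
    A 5-frame window at 30 fps adds ~150 ms of latency, which is
    why heavy temporal filtering limits the usable frame rate.
    """
    rows, cols = len(frames[0]), len(frames[0][0])
    filtered = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            samples = sorted(frame[r][c] for frame in frames)
            filtered[r][c] = samples[len(samples) // 2]
    return filtered
```

A median knocks out single-frame dropouts and speckle, which stereo depth is prone to, at the cost of smearing anything that moves.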
There have been lots of little indoor LIDAR units. The SwissRanger, around 2005, was one of the early ones.
The Kinect, version 2, is one. The Kinect, version 1, was a random dot pattern projector and two cameras for triangulation. Intel made something similar, the RealSense.
So far, the most popular use for these things is video background removal, allowing "green screen" type effects without needing an actual green screen.
I really wonder how safe lidar really is for humans. Our retinas are sensitive enough to detect single photons (when healthy), and lidar is known to damage digital camera sensors.
It really depends on the amount and duration of exposure. E.g., lasers that are pulsed won't warm or irritate tissue as much as continuous radiation. Some energy is also absorbed by the eyeball before it reaches the retina. The wavelength also plays a part, with some wavelengths penetrating water (= tissue) better than others.
Laser light is more potent than, e.g., ordinary LED light because the emitted photons are coherent (in both the spatial and temporal dimensions), so they are more efficient at heating things up or reacting with chromophores in cells. But if the energy arriving at the retina is low enough, this is a non-issue.
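To put a number on the pulsed-vs-continuous point: what matters for heating is average power, which for a pulsed source is peak power times duty cycle. All figures below are invented for illustration and are not safety limits:

```python
# Hypothetical pulsed-lidar figures, for illustration only.
peak_power_mw = 50.0   # peak optical power during a pulse
pulse_ns = 10.0        # pulse width
rep_rate_khz = 100.0   # pulse repetition rate

# Fraction of the time the laser is actually emitting.
duty_cycle = (pulse_ns * 1e-9) * (rep_rate_khz * 1e3)

# Average power is what dominates tissue heating.
avg_power_mw = peak_power_mw * duty_cycle
```

With these numbers the duty cycle is 0.1%, so a 50 mW peak averages out to 0.05 mW, which is why pulsed designs can run high peak powers without the thermal load of a continuous beam.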
In addition to safety, I wonder about interference. Wouldn't lidar become ineffective if there's so much lidar around that all the sources start interfering with each other and effectively blinding all receivers with noise? I really wonder how lidar-based autonomous agents plan to deal with this problem. It seems fundamental.
Usually not. They require resilience against ambient light already, so they are either very dim and use coding gains or they use short pulses which only yield a short time window for valid returns. You basically don't get non-malicious interference issues, except for e.g. the dot projector systems.
Real ToF sensors can easily filter any accidental noise. You can often spoof them, however, and there's not much one can do against that, considering a blinding DoS is often technically easier (track the lidar with a camera to keep the laser pointer on target).
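The coding-gain idea mentioned upthread is easy to demonstrate: if each unit modulates its pulses with its own pseudorandom code and correlates against that code on receive, an independent unit's emissions barely register. A toy simulation (idealized ±1 chips, no noise, made-up code length):

```python
import random

def correlate(code, signal):
    return sum(c * s for c, s in zip(code, signal))

random.seed(0)
N = 1024  # code length in chips; longer codes reject more interference
own = [random.choice((-1, 1)) for _ in range(N)]
other = [random.choice((-1, 1)) for _ in range(N)]

# What our receiver sees: our own echo plus another unit's emission.
received = [a + b for a, b in zip(own, other)]

own_peak = correlate(own, received)   # ~N: our echo stands out
cross = correlate(own, other)         # ~sqrt(N): the interferer washes out
```

With independent codes the cross term grows like sqrt(N) while the matched term grows like N, so non-malicious interference just raises the noise floor slightly instead of blinding the receiver.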
I'm working on a RealSense project at the moment. You won't be able to do it out of the box, but their SDK does come with a lot of sample code, one piece of which makes use of the RGB sensor on the D400 series to calibrate the cameras in world space. With just depth data it's a bit trickier.
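The depth-only case isn't magic, either: given the camera intrinsics, each depth pixel back-projects to a 3D point with the pinhole model (roughly what the SDK's deprojection helper does, if memory serves). A minimal sketch, lens distortion ignored:

```python
def deproject(pixel, depth, intrinsics):
    """Back-project a pixel plus its depth into a 3D point in the
    camera frame. Pinhole model, no lens distortion.

    intrinsics: (fx, fy, ppx, ppy) -- focal lengths in pixels and
    the principal point. depth and the result share units.
    """
    u, v = pixel
    fx, fy, ppx, ppy = intrinsics
    x = (u - ppx) / fx * depth
    y = (v - ppy) / fy * depth
    return (x, y, depth)
```

Calibrating multiple cameras into world space then reduces to estimating each camera's extrinsic pose and transforming these points, which is the part the RGB-based sample helps with.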
Am I cynical to expect one of these in every Echo/Home/Portal "assistant" within a decade?
You know, strictly for 3D-avatar VR communication purposes only.
~~Ooh, it's the solid-state LIDAR tech I heard about a couple of years ago! They must have bought the company that invented it.~~
~~The price is also just around where they expected it to be. They talked about going down to 100 eurodollars per unit when they hit mass manufacturing.~~
EDIT: No, this is a MEMS device. The device I'm talking about is actually solid-state, scanning the laser by way of, IIRC, acousto-optic modulation. Car companies were interested in it.
Wish I had the spare time to try hooking some of these into some kind of machine vision system for automatically verifying that an object being created (3D printer / CNC) was created as intended.
It'd help with automating production, but I'm not sure it'd be worth the effort.
I don’t know why they are in this business, but a cheap lidar camera is very interesting to me from a computer vision / home robotics standpoint. Here’s hoping for a long life for this product line.
The only real problem is that I have no clue why Intel is in this business, and I suspect they won't be for much longer.
The worst part of my dayjob is wrangling the Realsense software suite:
* DKMS kernel module for what should be a plain vanilla USB 3.0 device
* firmware updates require closed-source libraries
* breaking API and ABI changes that do not respect semver or SOVERSION
Here is the actual press release instead of the ZDNet rehash.
https://www.intelrealsense.com/lidar-camera-l515/
This is the actual page for this camera.
Kinect for Azure purports to be able to support overlapping FOV (whereas Kinect 1 for Xbox did not).
Lidar instead of green screen is not for professional-grade background removal like you see on TV [1].
[1] https://youtu.be/RoeXGiWO9dU
>> Can multiple L515 cameras be used simultaneously?
> Multiple cameras can share the same field of view utilizing our hardware sync feature.
I really want to get accurate 3D spherical volumes in real time. (30fps is sufficient, 60fps would be ideal)
I've thought about using Kinect "for Azure", because I think it satisfies this use case and does hardware clock syncing between devices:
https://azure.microsoft.com/en-us/services/kinect-dk/
Edit: It looks like their RealSense cameras can be set up in an inward-facing configuration:
https://dev.intelrealsense.com/docs/multiple-depth-cameras-c...
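Assuming the hardware sync gives time-aligned captures, merging inward-facing cameras into one volume is then just geometry: each camera's cloud goes through its calibrated extrinsic pose into a shared world frame. A minimal sketch with 4x4 row-major poses (the poses themselves are assumed known from a prior calibration step):

```python
def transform(points, pose):
    """Apply a 4x4 rigid transform (camera-to-world extrinsics)
    to a list of (x, y, z) points."""
    return [tuple(pose[r][0] * x + pose[r][1] * y +
                  pose[r][2] * z + pose[r][3] for r in range(3))
            for x, y, z in points]

def merge(clouds_with_poses):
    """Fuse per-camera clouds into one cloud in the world frame."""
    merged = []
    for points, pose in clouds_with_poses:
        merged.extend(transform(points, pose))
    return merged
```

At 30 fps the transform-and-concatenate step is cheap; the real-time cost lives in the calibration quality and whatever downstream meshing you do.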
https://newsroom.intel.com/news/intel-realsense-lidar-camera...
The camera is the size of a tennis ball.
There are probably lots of industrial uses.