SLAMcore has announced SLAMcore Labs, an interactive showcase that allows robot developers to test cutting-edge visual SLAM capabilities with their own datasets.
SLAMcore’s Spatial Intelligence SDK, which launched earlier this year, allows robot developers to quickly and easily add location and mapping functions to their designs. Downloadable and implemented with a few clicks, the SDK provides robust location and 2.5D maps for any robot using Intel® RealSense™ Depth Cameras. Designers can quickly move concepts from the lab to the real world with SLAM that copes with a wide range of environments.
Now SLAMcore Labs gives developers a sneak peek at next-generation features including depth completion, 3D mapping and semantic labelling. It is currently showcasing the following capabilities.
AI Enhanced Depth Maps
Using a dedicated neural network running on a GPU at the edge, SLAMcore’s algorithms combine the raw infrared images, depth map and IMU data from Intel® RealSense™ depth cameras to provide a more accurate and smoother disparity or depth map.
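To make this concrete, here is a minimal sketch of how the raw inputs named above can be captured from an Intel® RealSense™ camera and passed to a fusion step. The fusion network itself is proprietary and not described in this announcement, so it is represented by a hypothetical `refine_depth` placeholder (a simple hole-fill so the sketch runs); the IMU stream is omitted for brevity.

```python
import numpy as np
import pyrealsense2 as rs

def refine_depth(ir, depth_m):
    """Placeholder for the learned fusion network (assumption, not SLAMcore's API).
    Here it only fills zero-depth holes with the median valid depth."""
    filled = depth_m.copy()
    valid = depth_m > 0
    if valid.any():
        filled[~valid] = np.median(depth_m[valid])
    return filled

# Stream the raw inputs described above: infrared image and depth map.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)
profile = pipeline.start(config)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    frames = pipeline.wait_for_frames()
    ir = np.asanyarray(frames.get_infrared_frame(1).get_data())   # raw IR image
    depth_raw = np.asanyarray(frames.get_depth_frame().get_data())
    depth_m = depth_raw.astype(np.float32) * depth_scale          # depth in metres

    refined = refine_depth(ir, depth_m)  # stand-in for the GPU fusion network
finally:
    pipeline.stop()
```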
Intelligent Position
By placing a neural network in front of raw sparse map point clouds, SLAMcore can create better, faster and more resource-efficient position estimates. The AI selectively chooses which features are best for positioning. SLAMcore SDK customers can send their own data, which we will process in SLAMcore Labs, returning a sparse map with specific objects labelled or removed.
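SLAMcore's actual selection network and training are not public, but the underlying idea, scoring sparse map points with a learned model and keeping only the most useful ones for pose estimation, can be sketched as below. The `PointScorer` model, its input features and the top-k threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointScorer(nn.Module):
    """Toy MLP that assigns a usefulness score to each sparse map point
    from a small per-point descriptor (illustrative only)."""
    def __init__(self, in_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, feats):               # feats: (N, in_dim)
        return self.net(feats).squeeze(-1)  # (N,) scores

def select_points(points, feats, scorer, keep=500):
    """Keep the `keep` highest-scoring points before running pose estimation."""
    with torch.no_grad():
        scores = scorer(feats)
    idx = torch.topk(scores, k=min(keep, points.shape[0])).indices
    return points[idx], idx

# Random data standing in for a sparse SLAM map and its point descriptors.
points = torch.rand(5000, 3)
feats = torch.rand(5000, 8)
kept_points, kept_idx = select_points(points, feats, PointScorer())
print(kept_points.shape)  # torch.Size([500, 3])
```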
3D RGB Rendered Maps
Customers can already build cm-accurate 2.5D maps using the SLAMcore SDK. These add height to ‘flat’ 2D maps. Now, using the same data, SLAMcore Labs will create full 3D maps with RGB rendering. Customers uploading data from the SDK will receive a machine-readable mesh file that can be implemented on their platform.
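The exact format of the returned mesh file is not specified in this announcement; assuming a standard format such as PLY or OBJ, it can be loaded and inspected with the open-source Open3D library as in this short sketch ("map_mesh.ply" is a hypothetical filename).

```python
import open3d as o3d

# Load the mesh returned by SLAMcore Labs (hypothetical filename and format).
mesh = o3d.io.read_triangle_mesh("map_mesh.ply")
mesh.compute_vertex_normals()

print(f"{len(mesh.vertices)} vertices, {len(mesh.triangles)} triangles")

# Visual check of the RGB-rendered reconstruction.
o3d.visualization.draw_geometries([mesh])
```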
Semantic 3D Mapping
SLAMcore Labs can identify and label the different objects in a map. Using semantic recognition algorithms, an entire map can be colour-coded to distinguish the surfaces and objects it contains.
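A minimal sketch of what such colour coding looks like in practice: each map point carries a semantic class ID and is drawn in that class's colour. The labels here are random stand-ins rather than the output of SLAMcore's pipeline, and the class palette is purely illustrative.

```python
import numpy as np
import open3d as o3d

points = np.random.rand(10000, 3)                    # stand-in for map points
labels = np.random.randint(0, 4, size=len(points))   # stand-in class IDs

palette = np.array([
    [0.6, 0.6, 0.6],   # floor: grey
    [0.9, 0.9, 0.5],   # wall: yellow
    [0.2, 0.4, 0.9],   # chair: blue
    [0.8, 0.3, 0.3],   # table: red
])

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(palette[labels])  # one colour per class
o3d.visualization.draw_geometries([pcd])
```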
Semantic 3D Mapping with Instance Segmentation
Once maps have been semantically labelled, this next-generation feature counts individual instances of each semantic object. So rather than simply knowing that the map contains ‘chairs’, it can identify three chairs and treat them as individual objects. This level of sophistication has many real-world benefits, as our visual inertial SLAM system passes individual objects to other parts of the control stack where they can support wide-ranging functions.
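To illustrate the step from semantic labels to instances, the sketch below spatially clusters the points of one class ("chair") so each chair becomes its own object. It uses plain DBSCAN clustering from Open3D for illustration, not SLAMcore's actual instance segmentation, and the three synthetic chairs are generated data.

```python
import numpy as np
import open3d as o3d

# Three synthetic "chairs": small point clusters at different locations.
chairs = [np.random.normal(loc=c, scale=0.1, size=(300, 3))
          for c in ([0, 0, 0], [2, 0, 0], [0, 2, 0])]
chair_points = np.vstack(chairs)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(chair_points)

# Cluster the points of the "chair" class into individual instances.
instance_ids = np.array(pcd.cluster_dbscan(eps=0.5, min_points=20))
n_chairs = instance_ids.max() + 1
print(f"Found {n_chairs} chair instances")   # expected: 3
```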
Customers can test all these emerging solutions with their own data. From the SLAMcore Labs page they simply upload datasets and specify which features they want to explore. Our engineers will then apply these algorithms to the data and send back either mesh maps or dedicated videos showing the resulting 3D maps.
New capabilities are being added all the time as SLAMcore engineers innovate and expand the power of the SLAMcore algorithms. Those wanting to embark on more complex demonstrations or dedicated projects that leverage these innovations should get in touch via the SLAMcore Labs page.
Announcing the Labs, SLAMcore Founder and CEO Owen Nicholson said: “At SLAMcore we are focused on democratizing access to fast, accurate and robust SLAM that is viable at commercial scope and scale. Our SDK provides an effective solution for many as they make the journey from lab to real-world deployment. But there are many other aspects and opportunities for Spatial Intelligence to add value to robots. SLAMcore Labs provides a new route to explore emerging possibilities and to experiment with leading-edge features.”