

Next-Gen Navigation Systems Change the Scope of Robot Deployment

Robots are all around us every day. In automated gas pumps, bank ATMs, and self-service checkout lanes, machines are already automating our world.

They are also working behind the scenes, and new navigation systems have made them smarter and more mobile.

Mobility presents a major safety challenge: operating in dynamic, unstructured environments where objects, people, and even physical infrastructure such as walls move at different frequencies and in different ways.

With many robotic systems, magnetic tracks, lines, mirrors, or beacons must be installed to create an infrastructure within which the robot can be piloted. Installing that infrastructure, however, is costly and time-consuming.

Even for the more sophisticated robots that do not require these lines, tracks, or beacons, there is an expensive mapping phase involved in acclimating a robot to its environment. This process uses SLAM (simultaneous localization and mapping) – the academic blue-ribbon algorithm for world mapping – which creates a static map at the time of deployment.

However, as the robot navigates its world, moving objects open a gap between the true world state and the original map, and the robot’s performance can slowly degrade over time.

The method for deploying a robot is changing. The next-generation navigation system requires zero infrastructure and practically eliminates the need for the extensive mapping that has become so commonplace in the robotics industry. No longer will a human have to do a tremendous amount of work to integrate a robot into its work environment.

Robots will dynamically learn about the environment they are in, removing the concept of the static map and eliminating the challenge of keeping it up to date. The robot applies what it learns and continually updates its model of the world to improve its performance.

For instance, if a couch in a hospital waiting room is moved, the robot will sense the change and, noticing that the couch has stayed in its new position for some time, deduce that it will likely remain there and account for that in future trips.

Some obstacles in the environment are very dynamic, like a human walking down a hallway; some are semi-dynamic, like a chair; and others are static, like a wall or door. It is important that the robotic system understands the differences between those types of objects and does not treat them all the same way.
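As a rough illustration of that distinction, here is a minimal Python sketch. The names (`ObstacleClass`, `TrackedObstacle`) and the one-hour threshold are assumptions for illustration only, not anyone's actual implementation: it tags each obstacle with a class and promotes a semi-dynamic object, like the relocated couch above, to effectively static once it has stayed put long enough.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ObstacleClass(Enum):
    """Illustrative obstacle categories mirroring the distinction above."""
    DYNAMIC = auto()       # e.g. a person walking down a hallway
    SEMI_DYNAMIC = auto()  # e.g. a chair or couch that moves occasionally
    STATIC = auto()        # e.g. a wall or a door frame


@dataclass
class TrackedObstacle:
    """One observed obstacle, plus bookkeeping for how long it has stayed put."""
    obstacle_id: int
    obstacle_class: ObstacleClass
    position: tuple      # (x, y) in map coordinates
    last_moved: float    # timestamp of the last observed motion, in seconds

    def promote_if_stable(self, now: float, threshold_s: float = 3600.0) -> None:
        """If a semi-dynamic object (say, the relocated couch) has not moved
        for a while, treat it as effectively static so that future trips are
        planned around its new location."""
        if (self.obstacle_class is ObstacleClass.SEMI_DYNAMIC
                and now - self.last_moved > threshold_s):
            self.obstacle_class = ObstacleClass.STATIC
```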

2D Points vs. a More Dynamic Approach

Most robotic systems see the world as a series of 2D points in which items on the floor are mapped and treated as obstacles. The robot has no concept of whether the item is a wall, a chair, or a door.

The biggest problem with this way of thinking about navigation is that each 2D point is treated the same whether it represents a baby crawling around the waiting room or a wall. Either way, it is a point the robot will not drive through; the system does not account for the fact that the wall is never going to move, or that the baby will not stay in the same place for long as it crawls across the floor. This is what makes the difference for the next-generation robot, which classifies obstacles as dynamic, semi-dynamic, and static.
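Continuing the illustration above, the sketch below shows one way a planner could let occupancy evidence fade at different rates per class rather than treating every 2D point alike. The class names and decay values are placeholders, not real parameters from any product.

```python
# Cost decay per planning cycle, keyed by obstacle class.
DECAY_PER_CYCLE = {
    "dynamic": 0.5,        # a passing person: forget the cell quickly
    "semi_dynamic": 0.05,  # a chair: fade slowly, it may simply have been moved
    "static": 0.0,         # a wall or door: never forget
}


def decay_costmap(costmap: dict, cell_class: dict) -> None:
    """costmap: (x, y) cell -> occupancy cost in [0, 1]
    cell_class: (x, y) cell -> one of the class strings above"""
    for cell in list(costmap):
        cost = costmap[cell] - DECAY_PER_CYCLE[cell_class[cell]]
        if cost <= 0.0:
            del costmap[cell]   # the cell is treated as free again
        else:
            costmap[cell] = cost
```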

Improved Navigation Increases Performance

Robots that learn are able to deliver items significantly faster and achieve better performance. The next-generation robot can vary its speed based on the current situation. If the hallway is clear, the robot will go faster, and if it is not, then the robot will slow down to a safe speed. If the facility changes, the robot is able to learn those changes dynamically rather than having a human address them.
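One simple way to picture that speed behavior is a function that maps how busy the corridor ahead looks to a travel speed for the next control cycle. The thresholds and speed limits below are illustrative assumptions only.

```python
def select_speed(dynamic_obstacles_ahead: int,
                 max_speed_mps: float = 2.0,
                 cautious_speed_mps: float = 0.5) -> float:
    """Pick a travel speed for the next control cycle.

    A clear hallway allows full speed, a busy one drops the robot to a
    cautious crawl, and anything in between scales linearly.
    """
    if dynamic_obstacles_ahead == 0:
        return max_speed_mps
    if dynamic_obstacles_ahead >= 5:
        return cautious_speed_mps
    # Linear blend between the two limits for moderately busy corridors.
    fraction = dynamic_obstacles_ahead / 5.0
    return max_speed_mps - fraction * (max_speed_mps - cautious_speed_mps)
```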

When a traditional robot comes upon an obstacle blocking a path it usually takes, it stops, unable to proceed. The next-generation robot can update its internal representation of the world and re-plan to find an alternate route to the destination. The difference is that these robots navigate a facility more like a human would, and less like a machine.
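A minimal sketch of that re-planning step, assuming the facility is represented as a hypothetical waypoint graph, is a shortest-path search that simply skips the edges the robot has just observed to be blocked:

```python
import heapq


def replan(graph: dict, start, goal, blocked_edges: set):
    """Dijkstra search over a waypoint graph, skipping blocked edges.

    graph: node -> {neighbor: traversal cost}
    blocked_edges: set of (node, neighbor) pairs observed to be impassable
    Returns the new route as a list of nodes, or None if no route exists.
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, {}).items():
            if (node, neighbor) in blocked_edges or neighbor in visited:
                continue
            heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return None  # no alternate route exists


# Example: the usual corridor A->B is blocked, so the search detours via C.
hallways = {"A": {"B": 1.0, "C": 2.0}, "C": {"B": 1.0}, "B": {}}
print(replan(hallways, "A", "B", blocked_edges={("A", "B")}))  # ['A', 'C', 'B']
```

If the search returns None, there is genuinely no alternate route and the robot must wait or ask for help, which is the only case where it falls back to the traditional robot's behavior.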

The range of applications for these next-generation robots is very diverse. Customers are asking for a general-purpose machine, which is different from earlier generations of robots that may have had one clearly defined job. Today, we are asked to develop robots with specific capabilities, but we do not know the exact job these next-generation robots are going to do.

Robots today are asked to move materials from point A to point B within a facility. That can be anything from medicines, trash, linen, food, and endoscopes in a healthcare setting to small parts in manufacturing facilities, guest amenities in the hospitality industry, and small and large objects in warehousing. There is a chain of custody, and in some cases the product transfer needs to be secured.

The correct drawer containing medication in a hospital, for example, can remain locked until a biometric palm-vein scanner authenticates the delivery. The movement of product can also be tracked through a chain of custody, much as FedEx or UPS track packages, and valuable product with an expiration date can be monitored.
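As a rough sketch of that workflow (the `DeliveryRecord` class and its fields are hypothetical; the real authentication hardware and tracking systems sit outside this example), a secured compartment might keep an event log and only unlock once the recipient authenticates:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class DeliveryRecord:
    """Chain-of-custody log for one secured compartment, in the spirit of a
    parcel-tracking history."""
    item: str
    drawer_locked: bool = True
    events: List[str] = field(default_factory=list)

    def log(self, event: str) -> None:
        self.events.append(f"{datetime.now().isoformat()} {event}")

    def attempt_unlock(self, authenticated: bool, recipient: str) -> bool:
        """The palm-vein check itself is represented only by the
        `authenticated` flag passed in by the caller."""
        if authenticated:
            self.drawer_locked = False
            self.log(f"drawer unlocked for {recipient}")
        else:
            self.log(f"unlock denied for {recipient}")
        return not self.drawer_locked
```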

The next-generation navigation system puts safety at the forefront. The ability to operate in peopled environments and to distinguish between objects and humans will allow us to achieve higher levels of safety than ever before.


Details

  • 36 Cambridge Park Drive, Cambridge, MA 02140, United States
  • Daniel Theobald, Vecna Co-Founder & CEO