Good morning, everybody, and thanks for joining today. I'd like to welcome you to the Sensing in Robotics webinar. My name is Giovanni Campanella, and I am a systems manager at Texas Instruments, part of our systems and marketing organization, focusing on robotics applications.

Before we jump into the webinar content, I think it's important to understand why we are all here today and why we are talking about sensors. A simple way to do that is to think about ourselves, about humans: imagine that we cannot see or hear anything while walking through a street or a factory. As you can imagine, at some point we would crash into something, and the same applies to robots. Now, imagine a robot without sensors or cameras. Most likely, at some point, it will also crash into something. So I think it's important for everybody to understand that robots need sensors, and that's what we are going to talk about today.

After a very short intro about where robots can be found and the different types of robots, we'll dive into the technical details about how sensors can solve some of the many challenges that robots face in the factory. We will divide this into two main sections: one where I'm going to talk about autonomous mobile robots, and another where I'm going to talk about collaborative robots. And of course, I will talk about how Texas Instruments sensor technologies will enable you to solve some of the challenges you have when building your robot.

So let's start. I think everybody knows that industry is moving from traditional fixed-function systems to flexible systems, where you get a stream of data coming from production operations and you adapt and learn based on the demands coming from customers. Robots play an important role in enabling this flexibility, and we see different types of robots in these factories and fulfillment centers, starting with the big ones called industrial robots. Those are the ones that are able to lift very big and heavy payloads, like cars, and they are the ones you need to program in advance. They will always perform the function you have programmed and are usually surrounded by fences or safety scanners to prevent humans from getting close to the robot and getting hurt. And those robots, as I guess many of you are aware, have existed for ages.

In the last decades, however, we've seen the rise of a new type of robot called the collaborative robot. These are the robots that collaborate, coexist, and co-work with humans. They are no longer fixed like industrial robots, but also need to adapt to a changing environment. Because of that, they need sensors around them that enable them to perform these types of functions.

Then we have two types of robots that are basically on wheels, if you want to say so: the autonomous guided vehicle (AGV) category and the autonomous mobile robot (AMR) category. The first ones follow predefined paths and require a track for guided navigation, as the name already says, and when they detect an obstacle, they simply stop. Autonomous mobile robots, on the other hand, are equipped with many more sensors, like LiDAR, cameras, and radar, because they need to navigate autonomously through a factory, warehouse, or fulfillment center. So it's important that they detect, for example, humans that are around, and also obstacles, which can be boxes, walls, or aisles, and make sure they don't crash into them, as I showed on the very first slide.

Another one that we see is actually a combination of an autonomous mobile robot or an autonomous guided vehicle with a collaborative robot on top of it, and that's what's commonly called an industrial mobile robot. You can find those, for example, in wafer fabs or, again, in fulfillment centers, where they are used, let's say, to pick boxes or wafer lots from shelves and move them from place A to place B in the factory to be further processed or shipped to the customer.

If you think it's difficult to figure out which robot you're building, I've tried to make it easier for you with a decision tree. I think we can all agree that we are talking about industrial applications and robots that fit in this space. The next question you're going to ask yourself is: am I working on a mobile platform or not? If not, and you have a manipulator, most likely you are designing an industrial robot, and collaborative robots also fit in here.

Instead, if it's a mobile platform, then you need to ask yourself how the navigation is performed. Here, you can have two types of navigation.

You can have guided navigation, in which case we are talking about an AGV, or you can have autonomous navigation, in which case we are talking about an AMR. Then, according to the attachment type of your robot, you can have different types of AMRs: type A, type B with an attachment, and type C with a mobile manipulator, as I showed before. But all of them will be called industrial mobile robots.

Now, if we all agree and have a better picture of the type of robot we are designing, let's start to dive into the technical details about sensing for autonomous mobile robots. I picked autonomous mobile robots because, as I showed before, they are the ones where you see a lot of sensors fitting into this type of application. The three main challenges that an autonomous mobile robot needs to solve are safe human presence detection, mapping and localization, and collision avoidance. I think it's clear to everybody that, since these robots work with humans, they must not harm them, so it's important to define a safety area around the robot so that it avoids a human, or stops, when a human is in proximity of it.

The other challenge that we are trying to solve is mapping and localization. It's important that the robot knows the map of the environment where it's going to operate, whether that's an indoor or an outdoor environment.

It's also important that the robot knows where it is, because at some point it will need to go back to its initial position, either to charge or to pick another object, and so on and so forth. And then finally, it's important that it avoids not only humans, but also objects that are in its way, and is able to navigate through, let's say, aisles or doors or any kind of obstacle that you can find in a warehouse. Of course, there are different types of sensors and cameras that will allow you to solve these challenges, and I'm going to show you how in the next slides.

Again, I've used a decision tree to help you decide which sensor to pick to solve a specific challenge. If we look now at indoor mapping and localization, you need to start by thinking about the environment where your robot is going to operate, and here you have two options.

You're either operating in a known or an unknown environment. If it's known, it's pretty easy: the robot already has all the information. If it's an unknown environment, you need to create a dynamic map. The way you're going to do that is, first of all, by measuring the position of the nearest objects or structures around the robot, and then also with odometry, so basically measuring the distance that the robot has traveled from its original location.

You can do the odometry with gyroscopes, accelerometers, and encoders; for the sake of time, we are not going to talk about that today. Measuring the position of objects, and the distance from them, can instead be done in different ways: with ultrasonic, LiDAR, or radar sensors, with a stereo camera, or with 3D time of flight. And it will not be only one sensor, but really a combination of all of them, as in the sketch below.
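To make the dynamic-map idea concrete, here is a minimal sketch, in Python, of turning range measurements into an occupancy map of nearby objects. It assumes the robot pose and a set of (bearing, range) readings are already available from whatever sensor you choose; all names, grid sizes, and parameters are illustrative and not taken from any TI software.

```python
import math

import numpy as np

# Minimal sketch of building a dynamic occupancy map from range measurements.
# Assumes the robot pose (x, y, heading) and a list of (bearing, range) readings
# from any distance sensor (ultrasonic, LiDAR, radar, 3D ToF) are already available.
CELL_SIZE = 0.05          # meters per grid cell
GRID_DIM = 400            # 400 x 400 cells -> 20 m x 20 m map
grid = np.zeros((GRID_DIM, GRID_DIM), dtype=np.uint8)  # 0 = free/unknown, 1 = occupied


def world_to_cell(x_m, y_m):
    """Convert a world coordinate (meters) to a grid index, map centered at (0, 0)."""
    col = int(x_m / CELL_SIZE) + GRID_DIM // 2
    row = int(y_m / CELL_SIZE) + GRID_DIM // 2
    return row, col


def update_map(robot_x, robot_y, robot_heading, scan):
    """Mark the cells hit by each (bearing, range) measurement as occupied."""
    for bearing, rng in scan:
        angle = robot_heading + bearing
        hit_x = robot_x + rng * math.cos(angle)
        hit_y = robot_y + rng * math.sin(angle)
        row, col = world_to_cell(hit_x, hit_y)
        if 0 <= row < GRID_DIM and 0 <= col < GRID_DIM:
            grid[row, col] = 1


# Example: robot at the origin facing +x, two detections 2 m ahead and 1.5 m to the left.
update_map(0.0, 0.0, 0.0, [(0.0, 2.0), (math.pi / 2, 1.5)])
```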

The way you're going to solve collision avoidance-- here I picked an example based on an indoor application-- is the following. It's not about the environment anymore; here, it's about the objects. I've just listed some, and I'm sure you have many more in mind: walls, gates that you have in the factory or your fulfillment center, shelves, pallets.

First of all, you need to decide: is this a static object, so I know all the information a priori, I know where the object is placed, and I can plan in advance so that my robot is not going to crash into it? Or is this a dynamic object, meaning, for example, boxes around the fulfillment center that may have fallen, so you have not planned for them in advance? In that case, you need to perform active collision avoidance, and the way you're going to do that is really to fuse the data coming from different sensors, which allows you to detect the distance from the objects, the dimension of the objects, and also the speed of the objects, as in the sketch below. And again, it's not only one sensor, but really a combination of many sensors.
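As a rough illustration of what fusing distance, dimension, and speed can feed into, here is a small sketch of a per-object decision based on time to collision. The thresholds, data structure, and helper names are my own assumptions, not part of any TI reference design.

```python
# Minimal sketch of an active collision-avoidance decision for one fused detection.
# Each detection is assumed to already combine distance, size, and closing speed
# from several sensors; the threshold values are illustrative only.
from dataclasses import dataclass


@dataclass
class FusedObject:
    distance_m: float         # distance to the object in meters
    size_m: float             # estimated largest dimension in meters
    closing_speed_mps: float  # positive when the object and robot approach each other


def avoidance_action(obj: FusedObject, stop_distance=0.5, slow_ttc=3.0):
    """Return 'stop', 'slow_down', or 'continue' for one fused detection."""
    if obj.distance_m <= stop_distance:
        return "stop"
    if obj.closing_speed_mps > 0:
        time_to_collision = obj.distance_m / obj.closing_speed_mps
        if time_to_collision < slow_ttc:
            return "slow_down"
    return "continue"


# Example: a 0.4 m box, 2 m away, approaching at 1 m/s -> time to collision of 2 s.
print(avoidance_action(FusedObject(distance_m=2.0, size_m=0.4, closing_speed_mps=1.0)))
```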

I guess now you're asking yourself: you're telling me what I need to consider in order to decide which sensors to use for my robot, based on the objects, the environment, and also the type of robot I want to build, but at the end of the day, which sensors make sense for my application? I've tried to make it easy for you with this table, and the short answer is: it really depends.

Also, it's not only one sensor, but two or even three sensors. Because, like sensors, we humans have weak spots. For example, we are uncomfortable walking through, let's say, a dark area because we are not able to see anything, and the same goes for estimating the distance of an object: we can say, of course, that we are around two or three meters from a table, but we cannot say precisely how far the table is from us. The same is true for sensors: each sensor has its pros and cons.

For example, if we pick a radar, you can estimate the exact distance from an object, but you cannot, for example, precisely detect the edges of the object or classify it. LiDAR, for example, has problems detecting transparent surfaces. Ultrasonic sensors have a reduced detection range. And cameras, similar to humans, have problems working in environments with poor lighting and, for example, detecting transparent surfaces. So the answer to which sensor to pick is really not only one, not only two, but maybe three or even more. It's really about fusing those sensors based on the objects you need to detect, the environment where your robot is going to operate, the type of robot, and also the functions that your robot needs to perform.

Now, I would like to show you how TI sensors can enable you to solve some of these challenges for autonomous mobile robots, and it's mainly about radar, optical, and ultrasonic sensors. We have a broad portfolio of radar sensors with our mmWave devices, both for 60 and 77 gigahertz applications, which can measure long range with very high accuracy. We also have optical solutions: discrete solutions with high-speed amplifiers, so transimpedance amplifiers and comparators, together with very fast gate drivers, and also highly integrated solutions, like our OPT3xx devices, that basically reduce the complexity of the system. The same is true for ultrasonic.

When you want to use a sensor that is maybe lower cost than the others, we also have either discrete solutions, based again on high-speed amplifiers and drivers, or very integrated ones where the driver and the receiving chain are in the [INAUDIBLE] chip, which again makes your sensor development easier. I'm going to go into more detail in the next slides, starting with the mmWave devices and, specifically, the challenges they solve for mobile robots: the sense and avoid challenge, and also safe human detection, by creating a safety bubble around the robot, so defining different areas like a stop zone or a slow-down zone.

The benefits it can bring are many. Compared to the existing LiDAR solutions that are out there on the market, it can be a lower-cost solution while still achieving very good performance. On top of that, it solves some of the challenges like detecting transparent surfaces and working in very rough environments, where you have a lot of dust or smoke, or, let's say, when you're operating your robot outside in rainy conditions. And on top of that, our solution is SIL-2 capable, which will also help you certify your robot when you talk with safety entities and safety third parties around the globe.

So let's talk now about these two use cases and how we solve them with our mmWave solution. If you, for example, take a robot, we have put four mmWave devices around it. With that, we are able to cover a 360-degree horizontal field of view and also a 300-degree vertical field of view, and we are able to see objects up to 10 meters away. But mmWave is also able to detect ranges farther than 10 meters; we have actually tested up to 35 meters in our lab.

It then defines zones: a warning zone, where your robot is going to slow down when a human gets close or enters the zone, and a danger zone, where the robot is going to stop immediately, meaning the human is in proximity of it. The great news here is that we have done that in a ROS environment. ROS, as I guess all of you are aware, is the Robot Operating System, and it's used by many robotics developers. It should enable you to use our mmWave devices as nodes in your system and test them in your application.
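As a simplified view of that zone logic, here is a small sketch that classifies the robot's state from a list of detected points. The zone radii and point format are assumptions for illustration; the actual TI mmWave ROS lab may implement this differently.

```python
# Minimal sketch of the warning/danger "safety bubble" logic around an AMR.
# It only shows the zone decision from a list of detected points; the zone radii
# are assumptions, not the values used in the TI mmWave ROS lab.
import math

DANGER_RADIUS_M = 1.0    # stop immediately inside this radius
WARNING_RADIUS_M = 3.0   # slow down inside this radius


def classify_zone(points_xy):
    """points_xy: iterable of (x, y) detections in meters, robot at the origin."""
    if not points_xy:
        return "clear"
    nearest = min(math.hypot(x, y) for x, y in points_xy)
    if nearest <= DANGER_RADIUS_M:
        return "stop"          # human or object in the danger zone
    if nearest <= WARNING_RADIUS_M:
        return "slow_down"     # human or object in the warning zone
    return "clear"


# Example: one detection about 2.2 m away -> the robot should slow down.
print(classify_zone([(2.0, 0.9)]))
```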

We have done the same thing for the sense and avoid application, so here too we have a lab in our TI Resource Explorer (TIREX) portal. In this lab, we have again used our 60-gigahertz and 77-gigahertz devices. In this case, we have an mmWave device on the front of the robot with a field of view of 120 degrees horizontal and 30 degrees vertical, and again we are operating in a range of 10 meters. You can see in the picture on the right that we have different objects, and there is also a video available that shows how the robot is able to detect and avoid all these objects. Again, we have done all of this in a ROS environment, so it would be easy for you to get those EVMs, evaluation modules, plug them into your system, use them as nodes, and test the functionality.

Now, we went one step further and added a camera on top of our mmWave device, and in this case we are basically adding additional information that comes from the camera. The first challenge here is really to mechanically align the radar chip with the camera, which, as you can imagine, is extremely important in order to achieve spatial data alignment.

The other step you need to take is to calibrate your sensors-- your radar sensor and your camera. You do that with the OpenCV calibration routines and the chessboard capture algorithm, and with corner reflectors when it comes to the radar sensor. Then it is also extremely important to synchronize the data coming from the radar and from the camera.
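For the camera side, here is a minimal sketch of the OpenCV chessboard calibration step mentioned above. The image folder and the 9x6 board size are assumptions; the radar calibration with corner reflectors and the radar/camera time synchronization are separate steps not shown here.

```python
# Minimal sketch of intrinsic camera calibration with OpenCV and a chessboard target.
import glob

import cv2
import numpy as np

BOARD_COLS, BOARD_ROWS = 9, 6   # inner corners of the chessboard pattern (assumed)

# 3D coordinates of the board corners in the board's own frame (z = 0 plane).
objp = np.zeros((BOARD_ROWS * BOARD_COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_COLS, 0:BOARD_ROWS].T.reshape(-1, 2)

obj_points, img_points = [], []
gray = None

for fname in glob.glob("calib_images/*.png"):   # hypothetical capture folder
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (BOARD_COLS, BOARD_ROWS))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

if obj_points:
    # Camera matrix and distortion coefficients, later used to undistort frames
    # and to project radar detections into the image for spatial alignment.
    ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None
    )
    print(camera_matrix)
```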

Once you do all of that, you can start to think about clustering and classifying the data, performing object detection and recognition, and classifying what your sensors are capturing in the environment. Once you have done that, you can accelerate the sensor fusion with our Jacinto 7 processor portfolio. This basically takes the inputs of different types of sensors, not only cameras, LiDARs, and radars, but also GPS and IMU-type sensors, captures the data, processes it, and then interprets those measurements in order to plan and control your robot.
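As one common way to cluster sparse radar detections before trying to classify them, here is a small sketch using DBSCAN. The parameters and the synthetic points are illustrative only and are not taken from the TI demo.

```python
# Minimal sketch of grouping radar detections into object clusters before classification.
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic (x, y, z) detections in meters: two tight groups plus one stray point.
points = np.array([
    [2.0, 0.1, 0.3], [2.1, 0.0, 0.3], [2.0, -0.1, 0.4],   # object A
    [5.0, 1.0, 0.2], [5.1, 1.1, 0.2],                      # object B
    [9.0, -3.0, 0.1],                                      # isolated noise point
])

labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)

for label in set(labels):
    if label == -1:
        continue  # -1 marks points DBSCAN considers noise
    cluster = points[labels == label]
    centroid = cluster.mean(axis=0)
    print(f"cluster {label}: {len(cluster)} points, centroid {centroid.round(2)}")
```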

Also in this case, we have a Processor Software Development Kit that has tools ranging from open-source deep learning frameworks like TensorFlow Lite up to computer vision and perception toolkits, which allow you to solve some of the challenges I mentioned in one of my first slides, like image classification, object detection, and semantic segmentation. We are also working on a multi-sensor safety bubble that we will show next year. If you look at the architecture of this processor, it has DSPs to perform image signal processing, it has hardware accelerators to do deep learning, and finally it also has safety and security features, which will enable you to ensure that your robot can work in a factory environment with humans around it.
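To show what running TensorFlow Lite means at the code level, here is a minimal, generic sketch of classifying one frame with a TFLite model on the CPU. The model file is a placeholder, and the Processor SDK's own demos use TI's accelerated runtime and tooling rather than this plain path.

```python
# Minimal sketch of TensorFlow Lite image classification on a single frame.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="classifier.tflite")  # hypothetical model
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Placeholder frame already resized to the model's expected input shape.
frame = np.zeros(input_details["shape"], dtype=input_details["dtype"])

interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])[0]

print("top class index:", int(np.argmax(scores)))
```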

Now let's jump to the challenges that we see for collaborative robots. Collaborative robots have challenges similar to AMRs, but of course here the robot is fixed. You still need to detect when a human is approaching the robot, because most likely the robot is operating at high speed, so it will need to slow down once the human gets close in order not to hurt him or her.

Then, once the human is close to the robot and working with it, the robot also needs to safely detect when the human is in proximity of it or touching it. You do that with many sensors: the ones used most today are torque sensors; you might also have pressure sensors, like pads surrounding the robot; and we now see more and more capacitive sensors used to detect when a person is getting in proximity of the robot.

And then finally, most of these collaborative robots will have an end-of-arm tool, which is used to pick objects that are around the robot. So it's important that your robot senses where those objects are and how they are aligned, and then picks them.

You can do that, again, with different types of sensors, like ultrasonic and optical sensors, and of course you will need a 3D camera in order to classify and recognize those types of objects. And here, it's no longer only the radar, optical, and ultrasonic sensors that I mentioned for AMRs, but also force sensors, as I just mentioned.

And also capacitive sensors. Here, of course, we again have many solutions, both discrete and integrated-- specifically, when we talk about capacitive sensing, with our FDC portfolio and our CapTIvate MCUs, which will enable you to solve the challenges mentioned above.

I'd also like to show you a solution that can be used both for collaborative robots and for autonomous mobile robots when it comes to safely detecting that a human is in proximity of the robot. This is a reference design that uses a pulsed time-of-flight approach.

Basically, it counts the time that passes from when an optical pulse is emitted until it hits an object and comes back to the sensor. So you will need a very fast laser driver to fire the laser.

Here, we have recently released the LMG1020 GaN driver, which is able to achieve pulses as short as one nanosecond, which will enhance the precision and accuracy of your system. Then, once you have received the optical pulse with the photodiode, you will need to amplify the current with a very fast transimpedance amplifier.

Again, we have recently released two very fast transimpedance amplifiers, the OPA855 and OPA858, that will allow you to achieve those fast responses. You will also need high-speed comparators that compare the signal coming from the transimpedance amplifier with the threshold that you have defined in advance. And of course, you will then need time-to-digital converters that count the time that has passed between the emitted optical pulse and the received optical pulse, which translates directly into distance, as in the sketch below.
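The distance itself follows from the elapsed time measured by the time-to-digital converter: the light travels to the target and back, so the one-way distance is half the speed of light times the round-trip time. A tiny sketch of that arithmetic:

```python
# Minimal sketch of turning a time-to-digital converter reading into a distance.
# The pulse makes a round trip, so the one-way distance is half of
# (speed of light x elapsed time). The sample value below is made up.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0


def tof_distance_m(elapsed_s: float) -> float:
    """Distance to the target for a measured round-trip time in seconds."""
    return SPEED_OF_LIGHT_M_PER_S * elapsed_s / 2.0


# Example: a 20 ns round trip corresponds to roughly 3 meters,
# which is why nanosecond-class laser pulses and fast front ends matter.
print(round(tof_distance_m(20e-9), 2))
```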

We have also done an ultrasonic reference design here, which can be used to detect the distance of the objects that a collaborative robot would like to pick. We have used the PGA460 device, which has an integrated half-bridge driver to drive the transducer. The transducer generates an ultrasonic burst, which hits the target and comes back as an echo. This echo is then processed by the PGA460, which has an integrated time-varying gain amplifier, the ADC, and all the processing capabilities needed to give you the distance of the target via UART. In this case, we have used an IO-Link interface, which we see more and more in the field, and more and more for end-of-arm tools.
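The same half-round-trip relationship applies to the ultrasonic echo, just with the speed of sound instead of the speed of light. The PGA460 reports the distance itself over UART; this small sketch only illustrates the underlying math, assuming room-temperature air:

```python
# Minimal sketch of the distance math behind the ultrasonic echo measurement.
SPEED_OF_SOUND_M_PER_S = 343.0   # roughly 20 degrees C in air


def echo_distance_m(echo_time_s: float) -> float:
    """One-way distance for a measured round-trip echo time in seconds."""
    return SPEED_OF_SOUND_M_PER_S * echo_time_s / 2.0


# Example: an echo arriving 5.8 ms after the burst -> target about 1 meter away.
print(round(echo_distance_m(5.8e-3), 2))
```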

Again, we have different options to implement this function: not only the PGA460, but also newly released devices like the TUSS44x0 family, which gives you an analog output and has an integrated logarithmic amplifier, giving you a very wide range in order to detect different types of objects that are in the way of the robot. And here, you can also see that, based on the transducer you use, you can detect different distances with different types of resolution, down to the 1-millimeter range.

Now, I'd like to summarize what we have discussed today. I think it's clear to everybody that it's important for a robot to have sensors in order to collaborate, cooperate, and coexist with humans. And of course, the sensors you are going to use really depend on the objects and the environment where your robot is going to operate, but also on the type of robot that you're planning to build.

As you could see, Texas Instruments offers various types of technologies, starting with millimeter-wave sensors, and also ultrasonic and optical sensors, as well as system solutions like the two reference designs and the millimeter-wave examples I've shown today, and finally processors like our Jacinto processors that will accelerate the sensor fusion among the different types of sensors I've just mentioned.

So I would really suggest you take a look at the key technologies that can be used in the different types of robots. This year, we have released an industrial robotics e-book that talks about sensors, but also about motor drives and the industrial communications used in robots, which are also important, so I'd really suggest you give that a look. And I'd really like to thank you for your patience during this webinar.