Research and Publications
Conference Talks
Robotics, AI, and the Future of Theatre
Hersch Nathan [1], Michael F. Bergmann [2], Jada Vercosa [3]
[1] Department of Electrical and Computer Engineering and Department of Theatre and Dance, University of Kentucky
[2] The Creative School, Toronto Metropolitan University
[3] School of Engineering, School of Business, and Krenicki Arts and Engineering Institute, University of Connecticut
Presented at the 64th Annual United States Institute for Theatre Technology Conference and Stage Expo, 2024
Abstract:
Let's dive in and take a look at the future of theatre. Robots and AI are coming, but not for your job. These emerging technologies can be leveraged to produce safer, more intricate, and more immersive shows. However, we need to ask ourselves whether a show truly needs the technology or whether we are using it just because we can. Presented by the Engineering Commission.
Contact Information:
Jada Vercosa
[email protected]
Poster Presentations
Automation in Material Inspection and Handling
Hersch Nathan [1], Maggie Fontaine [1], Adam Garsha [1]
[1] Automation, Robotics, Controls (E-3), Los Alamos National Laboratory
This picture has been publicly released under a Los Alamos National Lab unlimited release: LA-UR-24-28431
Presented at Los Alamos National Laboratory's Student Symposium.
This work was the result of a summer internship with the Automation, Robotics, Controls (E-3) group at Los Alamos National Laboratory. It was mentored by Dr. Beth Boardman, John Robert Schloen, and Javier Ruiz, staff R&D engineers at the laboratory. This poster has been publicly released under a Los Alamos National Lab unlimited release: LA-UR-24-27726
Abstract:
The mission of Los Alamos National Laboratory (LANL) is to ensure national security through deterrence. In the course of meeting LANL's mission, workers handle hazardous materials, which raises safety concerns. Improving worker safety requires advancing the use of automation and robotics technology at LANL. This project adapts two commercial-off-the-shelf (COTS) robots for material handling.
Specifically, this project implements two different types of COTS robots: the Clearpath Ridgeback mobile platform robot and the Universal Robots UR5 six-axis robotic arm. These systems each provide a unique set of industrial capabilities that complement each other. First, this paper describes the mechanical and electrical integration of both robots. Then, a material handling demonstration is used to show how automation can improve processes.
Electrical and mechanical integration included mounting the UR5 control-box electronics inside the Ridgeback. This required disassembling the control box and designing custom mounting brackets for each of the electronic components. The UR5 was originally powered from a standard 120 V outlet; to power its electronics on the mobile platform, a battery tap was implemented to supply power directly from the Ridgeback. A new wireless emergency-stop (e-stop) system was implemented that shuts down both robots immediately. The Ridgeback has four additional e-stop buttons, one on each corner, each of which shuts down both robots. The UR5 has an independent e-stop on its pendant that shuts down only the arm. The UR5 arm is mounted on top of and toward the front of the Ridgeback.
In the project demonstration, the Ridgeback with UR5 explores an unknown environment while avoiding obstacles until it locates a metal sphere. Spheres are inspected for damage and then sorted. Ridgeback exploration is facilitated by lidar and depth cameras. AprilTags are used to identify key items, i.e., the spheres, to facilitate dynamic interaction with an unknown environment. For inspection, the UR5 picks up a sphere using an OnRobot electric vacuum gripper and rotates it in front of an Intel RealSense D435i depth camera. The spheres are then placed in different locations based on the inspection results. Sorting uses a machine learning model built with transfer learning from a ResNet-18 convolutional neural network image classifier. Once a sphere is sorted, the Ridgeback continues exploring the environment, looking for more spheres.
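The inspect-and-sort loop described above can be sketched in plain Python. Note that `classify_sphere` here is only a stub standing in for the ResNet-18 classifier, and the labels and drop-off coordinates are hypothetical placeholders, not values from the project:

```python
# Sketch of the inspect-and-sort control flow. `classify_sphere` is a stub
# standing in for the ResNet-18 transfer-learning classifier; the labels and
# drop-off locations below are illustrative assumptions.

DROP_OFF = {"undamaged": (0.0, 1.0), "damaged": (2.0, 1.0)}  # hypothetical sort locations

def classify_sphere(image):
    """Stub classifier: returns a damage label for an inspected sphere image."""
    return "damaged" if image.get("surface_defects", 0) > 0 else "undamaged"

def inspect_and_sort(sphere_image):
    """Classify a sphere image and return the label plus its drop-off location."""
    label = classify_sphere(sphere_image)
    return label, DROP_OFF[label]

label, location = inspect_and_sort({"surface_defects": 2})
```

In the real system the stub would be replaced by a forward pass through the fine-tuned network, with the rest of the control flow unchanged.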
A software system was developed to run the autonomous demonstration. The primary tool used to achieve this is the Robot Operating System (ROS), an open-source framework. Communication between the different subsystems is done using nodes, topics, and other ROS communication mechanisms. Each subsystem is a separate node written in Python or C++, and topics are the channels nodes use to communicate with each other. While ROS is not commonly used for industrial applications, it provides a rapid development environment that aids in quickly testing new capabilities. ROS also allows for the integration of AprilTags and an OctoMap created using the lidar. The integrated system is simulated in RViz for testing prior to physical deployment.
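The node-and-topic pattern ROS provides can be illustrated with a minimal in-process sketch. This is not ROS itself, just a pure-Python illustration of publishers routing messages to subscriber callbacks by topic name; the topic name and message fields are hypothetical:

```python
# Minimal sketch of the publish/subscribe pattern that ROS topics provide.
# Not ROS itself -- just an in-process illustration of how nodes exchange
# messages over named topics. Topic and field names are hypothetical.

from collections import defaultdict
from typing import Any, Callable

class TopicBus:
    """Routes published messages to every callback subscribed to that topic."""

    def __init__(self) -> None:
        self._subscribers: dict = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(message)

# Example: an "inspection" node listens for detections from a "perception" node.
bus = TopicBus()
inspection_queue = []
bus.subscribe("/sphere_detections", inspection_queue.append)
bus.publish("/sphere_detections", {"tag_id": 7, "position": (1.2, 0.4, 0.1)})
```

In actual ROS the bus is distributed (nodes may run on different machines), messages are typed, and delivery is asynchronous, but the topology is the same: nodes never call each other directly, only publish and subscribe.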
The efforts of this project show that the integrated Ridgeback with UR5 can successfully navigate and interact with its environment. Mobile material handling and inspection can be done autonomously, which will reduce worker hazards.
This abstract has been publicly released under a Los Alamos National Lab unlimited release: LA-UR-24-28748.
Performance Analysis of Trajectory Planning Algorithms for Mobile Robotic Navigation
Hersch Nathan [1], Biyun Xie [1]
[1] Intelligent Robotic Arm Lab, Department of Electrical and Computer Engineering, University of Kentucky
Presented at the University of Kentucky's Department of Electrical and Computer Engineering Spring Research Symposium and at the University of Kentucky Office of Undergraduate Research's 18th Showcase of Undergraduate Scholars
This work was the result of a multiple-semester Undergraduate Research Fellowship partly funded by the University of Kentucky's Department of Electrical and Computer Engineering
Abstract:
When major disasters happen, there is a need for robots to traverse complex environments. Those environments have many challenging characteristics, from impassable obstacles to varying obstacle density and even dynamic elements. The traditional approach to solving this problem is using trajectory planning algorithms. Broadly speaking, there are two approaches: graph-based and sampling-based algorithms. Graph-based algorithms belong to the family of Dijkstra's algorithm, which traditionally finds the shortest path between nodes of a weighted graph in discrete space. Sampling-based algorithms find a path through weighted random sampling of points in a continuous space.
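As a concrete reference point for the graph-based family, Dijkstra's algorithm can be sketched in a few lines of Python. The toy graph at the bottom is an illustrative example, not data from the study:

```python
# Dijkstra's algorithm: shortest path in a weighted graph represented as
# a dict of dicts, node -> {neighbor: edge_cost}.
import heapq

def dijkstra(graph, start, goal):
    """Return (path, cost) from start to goal, or (None, inf) if unreachable."""
    dist = {start: 0.0}           # best known cost to each node
    prev = {}                     # predecessor on the best path
    heap = [(0.0, start)]         # priority queue ordered by cost
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for neighbor, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    if goal not in dist:
        return None, float("inf")
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Toy example: A -> B -> D (cost 3) beats A -> C -> D (cost 5).
graph = {"A": {"B": 1, "C": 4}, "B": {"D": 2}, "C": {"D": 1}}
path, cost = dijkstra(graph, "A", "D")
```

Sampling-based planners such as RRT and PRM avoid building this discrete graph up front; instead they draw random points from the continuous space and connect the collision-free ones, which is what makes them attractive in high-dimensional or cluttered environments.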
In this project we explore a static environment full of obstacles in different configurations. We conduct a performance analysis of a collection of trajectory planning algorithms. The configurations we are interested in are ones with transitional areas of differing density. Primarily, we examine path length, computation time, and failure rate.
This research was partly supported by NSF Grant 2205292 and by the University of Kentucky Department of Electrical and Computer Engineering's Undergraduate Research Fellow Program.
Performance Analysis of Heterogeneous Networks for Robotic Navigation
Hersch Nathan [1], Md. Saeid Anwar [2], Anuradha Ravi [2], Nirmalya Roy [2]
[1] Department of Electrical and Computer Engineering, University of Kentucky, 512 Administration Drive, KY 40506
[2] Department of Information Systems, University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250
Presented at the 26th Annual Summer Undergraduate Research Fest
Hosted by the University of Maryland, Baltimore County's College of Natural and Mathematical Sciences. This work was the result of a Research Experience for Undergraduates in Smart Computing and Communications hosted by the Mobile, Pervasive and Sensor Computing Lab (MPSC Lab).
Abstract:
During disaster recovery, it is imperative to enlist the assistance of robots to navigate hostile terrain. Robots can autonomously make application-oriented decisions and send data (such as images) to human personnel for decision-making. Communication in a disaster-struck environment can be challenging given the destruction of communication infrastructure, or the lack thereof. Establishing satellite-based communication can be a costly affair. The requirement for wireless networks in remote areas led to the inception of LoRa (Long-Range) networks, which leverage Chirp Spread Spectrum (CSS) technology for long-range communication over low bandwidth. Thus, devices equipped with LoRa can communicate small chirps of data over long ranges, making them power-efficient enough to sustain their battery life for longer durations. Per regulations, LoRa in the United States operates in the 902–928 MHz band with power restrictions. LoRaWAN is a WAN protocol built on top of LoRa, which has typically been used to transmit small amounts of data from low-power sensor networks.
In this project, we first set up a LoRaWAN network to interface and interact with UAVs and UGVs. We then analyze the performance of the LoRaWAN network under varying workloads and monitor the computation and communication power consumption of a bot while employing the LoRa network. We further explore the possibility of transmitting image data over the LoRaWAN network. We leverage the low bandwidth of LoRaWAN to send feature representations of the images (rather than raw image data) that can be processed at an edge node for object classification applications. To lay down a path for decision-making (selecting the best possible network) in a heterogeneous network environment, we compare sending images and feature representations of the raw images over WiFi via MQTT (as proposed by previous works) and over LoRaWAN. We analyze the performance (delay and power consumption) of WiFi and LoRaWAN under varying workloads.
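A back-of-envelope calculation shows why sending feature representations instead of raw images matters on LoRaWAN. All numbers below are illustrative assumptions (a 224x224 RGB image, a 512-dimensional float32 feature vector, and the roughly 222-byte maximum application payload available at the highest US915 data rates), not measurements from the project:

```python
# Back-of-envelope comparison: raw image vs. feature-vector payloads on LoRaWAN.
# All numbers are illustrative assumptions, not measurements from this work.

RAW_IMAGE_BYTES = 224 * 224 * 3    # uncompressed 224x224 RGB image: 150,528 bytes
FEATURE_BYTES = 512 * 4            # a 512-dim float32 feature vector: 2,048 bytes
LORAWAN_MAX_PAYLOAD = 222          # ~max application payload at high US915 data rates

def packets_needed(payload_bytes, max_payload=LORAWAN_MAX_PAYLOAD):
    """Number of LoRaWAN uplinks needed to move payload_bytes of data."""
    return -(-payload_bytes // max_payload)   # ceiling division

raw_packets = packets_needed(RAW_IMAGE_BYTES)      # hundreds of uplinks
feature_packets = packets_needed(FEATURE_BYTES)    # a handful of uplinks
```

Even before accounting for duty-cycle limits and per-packet airtime, the feature vector fits in roughly two orders of magnitude fewer uplinks than the raw image, which is the core argument for edge-side feature extraction in this setting.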
This research was partly supported through a Research Experience for Undergraduates (REU) funded by NSF Grant #2050999, ANL Grant #W911NF2120076, ONR Grant #N00014-23-1-2119, and NSF CNS EAGER Grant #2233879.