<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"><head>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<link rel="stylesheet" type="text/css" href="style.css" /><title>Robust Field Autonomy Lab</title>
</head>
<body>
<div id="wrap">
<div style="font-family: Calibri;" id="header">
<h1><span style="color: black;">The Robust
Field Autonomy Lab at Stevens Institute
of Technology</span> </h1>
<h1 style="color: black;"><small>Research Group of
Brendan Englot, Ph.D. </small></h1>
</div>
<div id="menu">
<ul>
<li><a href="index.html">Home</a></li>
<li><a href="people.html">People</a></li>
<li><br />
</li>
<li><a href="publications.html">Publications</a></li>
<li><br />
</li>
</ul>
</div>
<div id="contentwrap">
<div style="width: 745px;" id="content">
<h2>Robust Autonomy in Complex Environments</h2>
<div style="padding: 10px 0pt; float: left;"><img style="width: 285px; height: 194px;" src="./index_files/Hull_Inspection.png" alt="image" /> <br />
</div>
<p style="text-align: justify;">We design<span>
perception, navigation and decision-making algorithms that help mobile
robots achieve robust autonomy in complex physical
environments. Specific goals of our research include improving the reliability of
autonomous navigation for unmanned underwater, surface, ground and aerial
vehicles subjected to noise-corrupted and drifting sensors, bandwidth-limited communications, incomplete knowledge
of the environment, and tasks that require interaction with surrounding
objects and structures. Our recent work uses artificial intelligence to
improve the situational awareness of mobile robots operating in degraded conditions,
and to enable intelligent robot decision-making under uncertainty.</span></p>
<br />
<div style="text-align: center;"> <img style="width: 251px; height: 131px;" alt="" src="./index_files/VideoRay1.jpg" /> <img style="width: 258px; height: 131px;" alt="" src="./index_files/VideoRay2.jpg" /> <img style="width: 210px; height: 131px;" alt="" src="./jackal-kinova.png" /><br />
</div>
<div style="text-align: center;"> <img style="width: 155px; height: 163px;" alt="" src="./index_files/BlueROV.jpg" /> <img style="width: 262px; height: 163px;" alt="" src="./index_files/rov_pier_test.jpg" /> <img style="width: 197px; height: 163px;" alt="" src="./index_files/sonar_point_cloud.png" /><br />
</div>
<div style="text-align: center;"> <img style="width: 168px; height: 133px;" alt="" src="./index_files/jackal_hilltop_2.jpg" /> <img style="width: 238px; height: 133px;" alt="" src="./index_files/Jackal_Photo_2.jpg" /> <img style="width: 237px; height: 133px;" alt="" src="./index_files/Jackal_Photo_3.png" /><br />
</div>
<p style="text-align: justify;"><span style="font-weight: bold;">Top:</span> Testing an ROV in Stevens'
Davidson Laboratory tank; testing a custom-built mobile manipulator in our lab. <br />
<span style="font-weight: bold;">Center:</span>
Acoustically mapping the pilings of Stevens' Hudson River pier with our custom-built BlueROV. <br />
<span style="font-weight: bold;">Bottom:</span>
Bench-testing our Clearpath Jackal unmanned ground vehicle after a
field experiment in Hoboken's Pier A Park.</p>
<h2>Recent News:</h2>
<div style="text-align: justify;">
<br />
<h2>Low-Fidelity-Sim 2 High-Fidelity-Sim: Autonomous Navigation in Congested Maritime Environments via Distributional Reinforcement Learning
</h2><br />
<div style="text-align: center;"><video width="560" height="315" controls> <source src="./Lin_RA-L_2024_10-FinalVideo.mp4" type=video/mp4> </video><br />
</div><br />
<p style="text-align: justify;">We are pleased to announce that our paper "Distributional Reinforcement Learning based Integrated Decision Making and Control for Autonomous Surface Vehicles" will appear in the February 2025 issue of IEEE Robotics and Automation Letters <a style="font-weight: bold;" href="https://ieeexplore.ieee.org/document/10804093">(Link to IEEE Xplore)</a>. A preprint of our paper is available on <a style="font-weight: bold;" href="https://arxiv.org/abs/2412.09466">arXiv</a>, and our code is available on <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/Distributional_RL_Decision_and_Control">GitHub</a>. This work was led by Xi Lin.
</p>
<h2>Large-Scale Underwater 3D Mapping with a Stereo Pair of Imaging Sonars
</h2><br />
<div style="padding: 10px 0pt; float: left; text-align: center;"><img style="width: 604px; height: 395px;" alt="" src="./JEB_photo_reduced.jpg" /> <img style="width: 603px; height: 399px;" alt="" src="./JEB_map.png" />
</div><br />
<p style="text-align: justify;">We are excited to announce that a paper telling the complete story of our lab's work on dense 3D underwater mapping using a stereo pair of (orthogonally oriented) imaging sonars has been published in the <a style="font-weight: bold;" href="https://ieeexplore.ieee.org/document/10742648">IEEE Journal of Oceanic Engineering</a> (a preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/abs/2412.03760">arXiv</a>). This work,
led by John McConnell, was demonstrated via field experiments conducted at Joint Expeditionary Base Little Creek in VA (pictured above, alongside a sonar-derived 3D map of the structures visible in the photo), Penn's Landing in Philadelphia, PA, and SUNY Maritime College in Bronx, NY, and builds on our earlier papers published at <a style="font-weight: bold;" href="https://ieeexplore.ieee.org/document/9340995">IROS 2020</a> (McConnell, Martin and Englot) and <a style="font-weight: bold;" href="https://ieeexplore.ieee.org/document/9560737">ICRA 2021</a> (McConnell and Englot).
The custom-instrumented BlueROV underwater robot used in this work, and its mapping capabilities, were recently highlighted both in a <a style="font-weight: bold;" href="https://www.youtube.com/watch?v=IovJYX44URs">video filmed by ASME</a> and in a <a style="font-weight: bold;" href="https://www.stevens.edu/news/these-underwater-robots-are-mapping-where-you-dont-want-to">Stevens news article</a>.
</p>
<h2>Mobile Manipulation for Inspecting Electric Substations
</h2><br />
<div style="text-align: center;"><video width="560" height="315" controls> <source src="./Pearson_ASME_2024.mp4" type=video/mp4> </video><br />
</div><br />
We are pleased to announce that our paper "Robust Autonomous Mobile Manipulation for Substation Inspection" has been published in the ASME Journal of Mechanisms and Robotics,
in its special issue on Selected Papers from IDETC-CIE. The paper can be accessed in <a style="font-weight: bold;" href="https://asmedigitalcollection.asme.org/mechanismsrobotics/article-abstract/16/11/115001/1200571/Robust-Autonomous-Mobile-Manipulation-for?redirectedFrom=fulltext">ASME's Digital Collection</a>,
and more details are illustrated in the accompanying <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/Pearson_ASME_2024.mp4">video attachment</a> (shown above). This research was
led by Erik Pearson.
</p>
<h2>ICRA 2024 Papers on Autonomous Navigation under Uncertainty
</h2><br />
<div style="text-align: center;"><video width="560" height="315" controls> <source src="./Lin_ICRA24_Video.mp4" type=video/mp4> </video><br />
</div><br />
<p style="text-align: justify;">We recently presented three papers at ICRA 2024 addressing autonomous navigation under different types of uncertainty. The first paper, "Decentralized Multi-Robot Navigation for Autonomous Surface Vehicles with Distributional Reinforcement Learning,"
addresses ASV navigation in congested and disturbance-filled environments. A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/abs/2402.11799">arXiv</a>, the corresponding code is available on <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/Multi_Robot_Distributional_RL_Navigation">GitHub</a>,
and more details are illustrated in the accompanying <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/Lin_ICRA24_Video.mp4">video attachment</a> (shown above). This research was
led by Xi Lin.
The second paper, "Multi-Robot Autonomous Exploration and Mapping Under Localization Uncertainty with Expectation-Maximization,"
uses virtual maps to support high-performance multi-robot exploration of unknown environments. A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/abs/2403.04021">arXiv</a>, the corresponding code is available on <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/Multi-Robot-EM-Exploration">GitHub</a>,
and more details are illustrated in the accompanying <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/Huang_ICRA24_Video.mp4">video attachment</a>. This research was
led by Yewei Huang.
The third paper, "Real-Time Planning Under Uncertainty for AUVs Using Virtual Maps,"
also uses virtual maps, as a tool to support planning under localization uncertainty across long distances underwater. A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/abs/2403.04936">arXiv</a>
and more details are illustrated in the accompanying <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/Collado_ICRA24_Video.mp4">video attachment</a>. This research was
led by Ivana Collado-Gonzalez.
</p>
<h2>New Papers and Code on Distributional Reinforcement Learning</h2>
<br />
<div style="text-align: center;"><img style="width: 675px; height: 300px;" alt="" src="./Lin_IROS_2023_CoverImage.png" /><br />
</div>
<p style="text-align: justify;">We have released two new code repositories with tools that have supported our research on Distributional Reinforcement Learning.
Our paper on Robust Unmanned Surface Vehicle (USV) Navigation with Distributional RL will be appearing at IROS 2023 in October.
A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/abs/2307.16240">arXiv</a>, the corresponding code is available on <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/Distributional_RL_Navigation">GitHub</a>,
and more details are illustrated in the accompanying <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/Lin_IROS_2023_video.mp4">video attachment</a>. This research was
led by Xi Lin.
</p>
<br />
<div style="text-align: center;"><img style="width: 654px; height: 300px;" alt="" src="./Lin-Szenher_UR_2023_CoverImage.png" /><br />
</div>
<p style="text-align: justify;">A second paper on Robust Route Planning with Distributional RL in a Stochastic Road Network Environment appeared earlier this summer at Ubiquitous Robots 2023.
A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/abs/2304.09996">arXiv</a>, and the accompanying code, which provides Stochastic Road Networks derived
from maps in the <a style="font-weight: bold;" href="https://carla.org">CARLA Simulator</a>, is available on <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/Stochastic_Road_Network">GitHub</a>.
This research was led by Xi Lin, Paul Szenher, and John D. Martin.
</p>
<h2>Underwater Robotics Research Highlighted by The American Society of Mechanical Engineers (ASME)
</h2><br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/IovJYX44URs" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
</div><br />
<p style="text-align: justify;">Our underwater robotics research was recently highlighted in ASME's new series on "What's in your lab?". ASME filmed a 3D sonar mapping experiment performed with one of our customized
BlueROV robots, which was led by John McConnell and Ivana Collado-Gonzalez. The experiment is part of an effort to enhance the capabilities described in an earlier paper published at IROS (McConnell, Martin and Englot, IROS 2020), and will be documented in a new paper that is currently in preparation.
</p>
<h2>Active Perception with the BlueROV Underwater Robot
</h2><br />
<div style="text-align: center;"><video width="560" height="315" controls> <source src="./index_files/em_3.mp4" type=video/mp4> </video><br />
</div><br />
<p style="text-align: justify;">We are excited to share some news about our "most autonomous" robot deployed in the field to date, which used sonar-based active SLAM to autonomously explore and map an obstacle-filled harbor environment with high accuracy.
To achieve this, we adapted our algorithms for Expectation-Maximization based autonomous mobile robot exploration published at ISRR (Wang and Englot, ISRR 2017)
and IROS (Wang, Shan and Englot, IROS 2019) to run on our BlueROV underwater robot, which uses its imaging sonar for SLAM. This work was performed by Jinkun Wang
with the help of Fanfei Chen, Yewei Huang, John McConnell, and Tixiao Shan, and it was recently published in the October 2022 issue of the <a style="font-weight: bold;" href="https://ieeexplore.ieee.org/document/9806387">IEEE Journal of Oceanic Engineering</a>. A preprint of our paper can be found on <a style="font-weight: bold;" href="https://arxiv.org/abs/2202.08359">arXiv</a>, and a recent seminar discussing our work on this topic can be viewed <a style="font-weight: bold;" href="https://kaltura.stevens.edu/media/%22Virtual+Maps+for+High-Performance+Mobile+Robot+Exploration+Under+Uncertainty%22+with+Professor+Brendan+Englot/1_ijlmuwgi">here</a>.
Our BlueROV SLAM code used to support this work, along with sample data, is available on <a style="font-weight: bold;" href="https://github.com/jake3991/sonar-SLAM">GitHub</a>.
</p>
<h2>Introducing DRACo-SLAM: Distributed, Multi-Robot Sonar-based SLAM Intended for use with Wireless Acoustic Comms
</h2><br />
<div style="text-align: center;"><video width="560" height="315" controls> <source src="./DRACo-SLAM_VideoAttachment.mp4" type=video/mp4> </video><br />
</div><br />
<p style="text-align: justify;">We are excited to announce that our recent work on DRACo-SLAM (Distributed Robust Acoustic Communication-efficient SLAM for Imaging Sonar Equipped Underwater Robot Teams) will be presented at IROS 2022. A preprint of our paper is available on <a style="font-weight: bold;" href="https://arxiv.org/abs/2210.00867">arXiv</a>, and the DRACo-SLAM library is available on <a style="font-weight: bold;" href="https://github.com/jake3991/DRACo-SLAM">GitHub</a>. This work was led by John McConnell.
</p>
<h2>Using Overhead Imagery of Ports and Harbors to Aid Underwater Sonar-based SLAM
</h2><br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/_uWljtp58ks" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
</div><br />
<p style="text-align: justify;">We are excited to announce that our recent work proposing Overhead Image Factors for underwater sonar-based SLAM has been accepted for publication in IEEE Robotics and Automation Letters, and for presentation at ICRA 2022. Our paper is available on <a style="font-weight: bold;" href="https://ieeexplore.ieee.org/document/9721066">IEEE Xplore</a>.
This work uses deep learning to predict the above-surface appearance of underwater objects observed by sonar, which is registered against the contents of overhead imagery to provide an absolute position reference for underwater robots operating in coastal areas. This research was led by John McConnell.
</p>
<h2>Introducing DiSCo-SLAM: A Distributed, Multi-Robot LiDAR SLAM Code/Data Release
</h2><br />
<div style="text-align: center;"><video width="560" height="315" controls> <source src="./DiSCo-SLAM.mp4" type=video/mp4> </video><br />
</div><br />
<p style="text-align: justify;">We are excited to announce that our recent work on DiSCo-SLAM (Distributed Scan Context-Enabled Multi-Robot LiDAR SLAM with Two-Stage Global-Local Graph Optimization) has been accepted for publication in IEEE Robotics and Automation Letters, and for presentation at ICRA 2022. Our paper is available on <a style="font-weight: bold;" href="https://ieeexplore.ieee.org/document/9662965">IEEE Xplore</a>.
The DiSCo-SLAM library, along with two new multi-robot SLAM datasets intended for use with it, is available on <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/DiSCo-SLAM">GitHub</a>. This work, which was also featured recently on the <a style="font-weight: bold;" href="https://clearpathrobotics.com/blog/2022/03/stevens-institute-of-technology-develops-slam-framework-for-efficient-wireless-robot-communication/">Clearpath Robotics Blog</a>, was led by Yewei Huang.
</p>
<h2>Zero-Shot Reinforcement Learning on Graphs for Autonomous Exploration
</h2><br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/62phOSf2HEg" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
</div>
<p style="text-align: justify;">We are excited that our paper "Zero-Shot Reinforcement Learning on Graphs for Autonomous Exploration Under Uncertainty" has been accepted for presentation at ICRA 2021.
In the video above, which assumes a lidar-equipped mobile robot depends on segmentation-based SLAM for localization, we show the exploration policy learned
by training in a single Gazebo environment, and its successful transfer both to other virtual environments and to robot hardware.
A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/pdf/2105.04758.pdf">arXiv</a>, and our presentation
of the paper can be viewed <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/Chen_ICRA_2021_Presentation.mp4">here</a>. This work was led by Fanfei Chen.
</p>
<h2>Predictive Large-Scale 3D Underwater Mapping with Sonar
</h2><br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/WouCrY9eK4o" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
</div>
<p style="text-align: justify;">We are pleased to announce that our paper on predictive large-scale 3D underwater mapping using a pair of wide-aperture imaging sonars has been accepted for presentation at ICRA 2021.
This work features our custom-built heavy configuration BlueROV underwater robot, which is equipped with two orthogonally oriented Oculus multibeam sonars (the software packages
for our BlueROV can be found on <a style="font-weight: bold;" href="https://github.com/jake3991/Argonaut">GitHub</a>).
A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/pdf/2104.03203.pdf">arXiv</a>, and our presentation
of the paper can be viewed <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/McConnell_ICRA_2021_Presentation.mp4">here</a>. This work was led by John McConnell.
</p>
<h2>Lidar-Visual-Inertial Navigation, and Imaging Lidar Place Recognition</h2>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/8CTl07D6Ibc" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
</div>
<p style="text-align: justify;">Two collaborative works with MIT, led by lab alumnus Dr. Tixiao Shan and featuring data gathered with our Jackal UGV, will be appearing at ICRA 2021. The first, shown above, is LVI-SAM, a new
framework for lidar-visual-inertial navigation. A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/pdf/2104.10831.pdf">arXiv</a>, a presentation
of the paper can be viewed <a style="font-weight: bold;" href="https://www.youtube.com/watch?v=RfCEBiovx8M">here</a>, and the LVI-SAM library is available on <a style="font-weight: bold;" href="https://github.com/TixiaoShan/LVI-SAM">GitHub</a>.
</p>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/yH1hLBFaNoI" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
</div>
<p style="text-align: justify;">The second work, shown above, proposes a new framework for place recognition using imaging lidar, which is implemented with the Ouster OS1-128 lidar, operated both in a hand-held mode
and aboard Stevens' Jackal UGV. A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/pdf/2103.02111.pdf">arXiv</a>, a presentation
of the paper can be viewed <a style="font-weight: bold;" href="https://www.youtube.com/watch?v=yJUY8IwZT_M">here</a>, and we encourage you to download
the library from <a style="font-weight: bold;" href="https://github.com/TixiaoShan/imaging_lidar_place_recognition">GitHub</a>.
</p>
<h2>Lidar Super-resolution Paper and Code Release</h2>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/rNVTpkz2ggY" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
<br />
</div>
<p style="text-align: justify;">We have developed a
framework for lidar super-resolution that is trained completely using synthetic data from the <a style="font-weight: bold;" href="https://www.carla.org">CARLA Urban Driving Simulator</a>. It is capable of accurately enhancing
the apparent resolution of a physical lidar across a wide variety of real-world environments. Our paper on this work was recently published in <a style="font-weight: bold;" href="https://www.sciencedirect.com/science/article/pii/S0921889020304875">Robotics and Autonomous Systems</a>, and we encourage you to download
our library from <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/lidar_super_resolution">GitHub</a>.
The author and maintainer of this library is Tixiao Shan.
</p>
<h2>Copula Models for Capturing Probabilistic Dependencies in SLAM</h2>
<br />
<div style="text-align: center;"><img style="width: 400px; height: 400px;" alt="" src="./index_files/factor_graph_copulas.png" /><br />
</div>
<p style="text-align: justify;">We are happy to announce that our paper on using copulas for modeling the probabilistic dependencies in simultaneous localization and mapping (SLAM) with landmarks has been accepted for
presentation at IROS 2020. A preprint of the paper "Variational Filtering with Copula Models for SLAM" is available on <a style="font-weight: bold;" href="https://arxiv.org/pdf/2008.00504.pdf">arXiv</a> and our presentation
of the paper can be viewed <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/index_files/Martin_IROS_2020_Presentation.mp4">here</a>.
This collaborative work with MIT was led jointly by John Martin and lab alumnus Kevin Doherty.
</p>
<h2>Autonomous Exploration using Deep Reinforcement Learning on Graphs
</h2><br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/e7uM03hMZRo" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
</div>
<p style="text-align: justify;">We are pleased to announce that our paper "Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning on Graphs" has been accepted for presentation at IROS 2020.
In the video above, which assumes a range-sensing mobile robot depends on the observation of point landmarks for localization, we show the performance of several
competing architectures that combine deep RL with graph neural networks to learn how to efficiently explore
unknown environments, while building accurate maps.
A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/pdf/2007.12640.pdf">arXiv</a>, our presentation
of the paper can be viewed <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/index_files/Chen_IROS_2020_Presentation.mp4">here</a>, and we encourage you to download
our code from <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/DRL_graph_exploration">GitHub</a>. This work was led by Fanfei Chen, who is
the author and maintainer of the "DRL Graph Exploration" library.
</p>
<h2>Dense Underwater 3D Reconstruction with a Pair of Wide-aperture Imaging Sonars
</h2><br />
<div style="text-align: center;"><video width="560" height="315" controls> <source src="./McConnell_IROS_2020_video.mp4" type=video/mp4> </video><br />
</div><br />
<p style="text-align: justify;">We are pleased to announce that our paper on dense underwater 3D reconstruction using a pair of wide-aperture imaging sonars has been accepted for presentation at IROS 2020.
This work features our custom-built heavy configuration BlueROV underwater robot, which is equipped with two orthogonally oriented Oculus multibeam sonars (the software packages
for our BlueROV can be found on <a style="font-weight: bold;" href="https://github.com/jake3991/Argonaut">GitHub</a>).
A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/pdf/2007.10407.pdf">arXiv</a>, and our presentation
of the paper can be viewed <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/index_files/McConnell_IROS_2020_Presentation.mp4">here</a>. This work was led by John McConnell.
</p>
<h2>Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM)</h2>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/OF_wOgPTNhs" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
</div>
<p style="text-align: justify;">We recently brought our Jackal UGV to a nearby park to perform some additional validation of LIO-SAM, a framework for tightly-coupled lidar inertial odometry which will be presented at
IROS 2020. A preprint of the paper is available on <a style="font-weight: bold;" href="https://arxiv.org/pdf/2007.00258v3.pdf">arXiv</a>, a presentation
of the paper can be viewed <a style="font-weight: bold;" href="https://robustfieldautonomylab.github.io/index_files/Shan_IROS_2020_Presentation.mp4">here</a>, and we encourage you to download
the library from <a style="font-weight: bold;" href="https://github.com/TixiaoShan/LIO-SAM">GitHub</a>.
This collaborative work with MIT was led by lab alumnus Dr. Tixiao Shan, who is the author and maintainer of the LIO-SAM library.
</p>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/lheEmUZwBzU" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
</div>
<p style="text-align: justify;">We also recently mounted the new 128-beam Ouster OS1-128 lidar on our Jackal UGV and performed some additional LIO-SAM mapping on the Stevens campus (all earlier results
had been gathered using the 16-beam Velodyne VLP-16). It was encouraging to see LIO-SAM support real-time operation despite the greatly increased sensor resolution.
</p>
<h2>Stochastically Dominant Distributional Reinforcement Learning</h2>
<br />
<div style="text-align: center;"><img style="width: 654px; height: 300px;" alt="" src="./index_files/ssd_drl_overview_image.png" /><br />
</div>
<p style="text-align: justify;">We are happy to announce that our paper on risk-aware action selection in distributional reinforcement learning has been accepted for
presentation at the 2020 International Conference on Machine Learning (ICML). A preprint of the paper "Stochastically Dominant Distributional Reinforcement Learning" is available on <a style="font-weight: bold;" href="https://arxiv.org/pdf/1905.07318.pdf">arXiv</a>, and our presentation
of the paper can be viewed <a style="font-weight: bold;" href="https://icml.cc/virtual/2020/poster/6410">here</a>.
This work was led by John Martin.
</p>
<h2>Sonar-Based Detection and Tracking of Underwater Pipelines</h2>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/oL45QrxqbYI" allowfullscreen="" frameborder="0" height="315" width="560"></iframe> <br />
</div>
<p style="text-align: justify;">At ICRA 2019's <a style="font-weight: bold;" href="http://icra-2019-uwroboticsperception.ge.issia.cnr.it/2019-04-17-acceptedpapers/">Underwater Robotics Perception Workshop</a>,
we recently presented our work
on deep learning-enabled detection and tracking of underwater pipelines
using multibeam imaging sonar, which is collaborative
research with our colleagues at Schlumberger. In the above video, our
BlueROV performs an automated flyover of a pipeline placed in Stevens'
Davidson Laboratory towing tank. Our paper describing this work is available <a style="font-weight: bold;" href="http://personal.stevens.edu/~benglot/Wang_ICRA_2019_UWPerceptionWorkshop.pdf">here</a>.</p>
<h2>Learning-Aided Terrain Mapping Code Release</h2>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/4pdBpeRGXmw" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
<br />
</div>
<p style="text-align: justify;">We have developed a
terrain mapping
algorithm that uses Bayesian generalized kernel (BGK) inference for
accurate traversability
mapping from sparse Lidar data. The BGK terrain mapping
algorithm was presented at the <a style="font-weight: bold;" href="http://proceedings.mlr.press/v87/shan18a">2nd Annual
Conference on Robot Learning</a>. We encourage you to download
our
library from <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/BGK_traversability_mapping">GitHub</a>.
A specialized version for ROS-supported unmanned ground vehicles, which
includes Lidar odometry and motion planning, is also available on <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/traversability_mapping">GitHub</a>.
The author and maintainer of both libraries is Tixiao Shan.
</p>
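<p style="text-align: justify;">For readers curious about the idea behind BGK inference, the sketch below illustrates it in miniature: kernel weights around each sparse, labeled Lidar hit act as pseudo-counts on a Beta distribution over traversability at every query cell. This is an illustrative sketch only, not the released library; the sparse kernel, length scale, and Beta prior used here are assumptions.</p>
<pre style="text-align: left;">
# Minimal sketch of kernel-weighted (BGK-style) traversability inference.
# Not the released library; kernel choice, length scale, and prior are
# illustrative assumptions.
import numpy as np

def sparse_kernel(d, length_scale=0.3):
    """Kernel weight that decays to exactly zero beyond the length scale."""
    r = np.minimum(d / length_scale, 1.0)
    return ((2.0 + np.cos(2.0 * np.pi * r)) / 3.0) * (1.0 - r) \
        + np.sin(2.0 * np.pi * r) / (2.0 * np.pi)

def bgk_traversability(query_xy, hit_xy, hit_label, alpha0=1.0, beta0=1.0):
    """Posterior mean traversability at each query cell.

    hit_label[i] is 1.0 if Lidar point i was judged traversable, else 0.0.
    Kernel weights accumulate as pseudo-counts on a Beta(alpha, beta) prior.
    """
    d = np.linalg.norm(query_xy[:, None, :] - hit_xy[None, :, :], axis=2)
    k = sparse_kernel(d)                   # (num_queries, num_hits)
    alpha = alpha0 + k @ hit_label         # evidence for "traversable"
    beta = beta0 + k @ (1.0 - hit_label)   # evidence for "not traversable"
    return alpha / (alpha + beta)
</pre>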
<h2>Marine Robotics Research Profiled by NJTV News</h2>
<br />
<div style="text-align: center;"> <iframe src="https://player.pbs.org/viralplayer/3015116434/" marginwidth="0" marginheight="0" seamless="" allowfullscreen="" frameborder="0" height="332" scrolling="no" width="512"></iframe>
<br />
</div>
<p style="text-align: justify;">NJTV News recently joined
us for a
laboratory experiment with our BlueROV underwater robot where we tested
its ability to autonomously track an underwater pipeline using deep
learning-enabled segmentation of its sonar imagery. The full article
describing how this work may aid the inspection of New Jersey's
infrastructure is available at <a style="font-weight: bold;" href="https://www.njtvonline.org/news/video/how-machine-learning-can-help-support-new-jerseys-infrastructure/">NJTV
News</a>. </p>
<h2>LeGO-LOAM: Lightweight, Ground-Optimized Lidar Odometry and
Mapping</h2>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/O3tz_ftHV48" allowfullscreen="" frameborder="0" height="315" width="560"></iframe>
<br />
</div>
<p style="text-align: justify;">We have developed a new
Lidar odometry
and mapping algorithm intended for ground vehicles, which uses a small
number of features and is suitable for computationally lightweight,
embedded applications. Ground-based and
above-ground features are used to solve different components of the six
degree-of-freedom transformation between consecutive Lidar frames. The
algorithm was presented earlier this year at the University of
Minnesota's <a style="font-weight: bold;" href="http://www.roadwaysafety.umn.edu/events/seminars/2018/012518/">Roadway
Safety Institute Seminar Series</a>. We are excited that
LeGO-LOAM will appear at IROS 2018! We encourage you to download our
library from <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/LeGO-LOAM">GitHub</a>.
The author and maintainer of this library is Tixiao Shan. </p>
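<p style="text-align: justify;">The sketch below illustrates the two-stage idea described above in simplified form: planar ground features constrain [t<sub>z</sub>, roll, pitch], and above-ground edge features then constrain [t<sub>x</sub>, t<sub>y</sub>, yaw] with the first estimate held fixed. It is an illustrative sketch only, not the LeGO-LOAM implementation; feature extraction, correspondence search, and the cost terms are simplified placeholders.</p>
<pre style="text-align: left;">
# Minimal sketch of a LeGO-LOAM-style two-stage 6-DOF estimate (not the library).
# Assumes ground/edge features and their correspondences are already given.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def pose_matrix(tx, ty, tz, roll, pitch, yaw):
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [tx, ty, tz]
    return T

def transform(points, T):
    return points @ T[:3, :3].T + T[:3, 3]

def two_stage_odometry(ground_curr, ground_prev_plane, edge_curr, edge_prev):
    """ground_prev_plane = (n, d): plane fit to the previous scan's ground.
    edge_prev holds previous-scan points matched one-to-one with edge_curr."""
    n, d = ground_prev_plane

    # Stage 1: ground features constrain tz, roll, pitch (point-to-plane
    # residuals are insensitive to tx, ty, yaw over locally flat ground).
    def ground_residual(x):
        tz, roll, pitch = x
        p = transform(ground_curr, pose_matrix(0.0, 0.0, tz, roll, pitch, 0.0))
        return p @ n + d

    tz, roll, pitch = least_squares(ground_residual, np.zeros(3)).x

    # Stage 2: above-ground edge features constrain tx, ty, yaw,
    # with the stage-1 estimate held fixed.
    def edge_residual(x):
        tx, ty, yaw = x
        p = transform(edge_curr, pose_matrix(tx, ty, tz, roll, pitch, yaw))
        return (p - edge_prev).ravel()

    tx, ty, yaw = least_squares(edge_residual, np.zeros(3)).x
    return pose_matrix(tx, ty, tz, roll, pitch, yaw)
</pre>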
<h2>3D Mapping Code Release - The Learning-Aided 3D Mapping
Library (LA3DM)</h2>
<div style="text-align: center;"><img style="width: 654px; height: 367px;" alt="" src="./index_files/Mapping_Overview.jpg" /><br />
</div>
We have released our <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/la3dm">Learning-Aided
3D Mapping (LA3DM) Library</a>,
which includes our implementations of Gaussian process occupancy
mapping (GPOctoMap - Wang and Englot, ICRA 2016) and Bayesian
generalized kernel occupancy mapping (BGKOctoMap - Doherty, Wang and
Englot, ICRA 2017). We encourage you to download our library from <a style="font-weight: bold;" href="https://github.com/RobustFieldAutonomyLab/la3dm">GitHub</a>.
The authors and maintainers of this library are Jinkun Wang and Kevin
Doherty.
<h2><br />
</h2>
<h2>Autonomous Navigation with Jackal UGV</h2>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/B6lrbAEhEnE" allowfullscreen="" frameborder="0" height="315" width="560"></iframe> <br />
</div>
<p style="text-align: justify;">We have developed terrain
traversability mapping and autonomous navigation capability for our
LIDAR-equipped Clearpath Jackal Unmanned Ground Vehicle (UGV). This
work by Tixiao Shan was recently highlighted on the <a style="font-weight: bold;" href="https://www.clearpathrobotics.com/sit-advances-autonomous-mapping-navigation-research-using-jackal-ugv/">Clearpath
Robotics Blog</a>.</p>
<!--<h2>ROS Package for 3D Mapping with a Hokuyo UTM-30LX Laser
Rangefinder</h2>
<br />
<div style="text-align: center;"><img style="width: 654px; height: 275px;" alt="" src="./index_files/rotating_hokuyo_composite.jpg" /><br />
</div>
<br />
We have released a new ROS package to produce 3D point clouds using a
Hokuyo UTM-30LX scanning laser rangefinder and a Dynamixel MX-28 servo.
Please visit the ROS wiki page for the package <a style="font-weight: bold;" href="http://wiki.ros.org/spin_hokuyo">spin_hokuyo</a>
for more information on how to download, install, and run our software.
The authors and maintainers of this package are Sarah Bertussi and Paul
Szenher.
<h2><br />
</h2>
<h2>3D Exploration ROS Package for Turtlebot</h2>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/ocZOaySmwHU" allowfullscreen="" frameborder="0" height="315" width="560"></iframe> <br />
</div>
<p style="text-align: justify;">We have released a 3D
autonomous exploration ROS package for the TurtleBot! Please visit the
ROS wiki page for the package <a style="font-weight: bold;" href="http://wiki.ros.org/turtlebot_exploration_3d">turtlebot_exploration_3d</a>
for more information on how to download, install, and run our software.
The authors and maintainers of this package are Xiangyu Xu and Shi
Bai. </p>
<h2>Recent Underwater Localization and 3D Mapping Results</h2>
<br />
<div style="text-align: center;"> <iframe src="https://www.youtube.com/embed/XdkxnGSEufw" allowfullscreen="" frameborder="0" height="315" width="560"></iframe> <br />
</div>
<p style="text-align: justify;">We recently visited Pier
84 in
Manhattan to test our algorithms for underwater localization and 3D
mapping, supported by a single-beam scanning sonar. See above for a
summary of our results from this field experiment, which is detailed in
the ICRA 2017 paper "Underwater Localization and 3D Mapping of
Submerged Structures with a Single-Beam Scanning Sonar," by Jinkun
Wang, Shi Bai, and Brendan Englot.<span></span></p>
<h2>Moved to a New Laboratory Facility</h2>
<div style="padding: 10px 0pt; float: left; text-align: center;"><img style="width: 604px; height: 395px;" alt="" src="./index_files/ABS_lab_photo.png" /> <img style="width: 603px; height: 399px;" alt="" src="./index_files/ABS_grand_opening.png" />
</div>
<p style="text-align: justify;">Our lab recently relocated
to the ABS Engineering Center, a newly renovated facility at Stevens
that will support interdisciplinary research and education in civil,
mechanical, and naval engineering. Our lab sits in the former location
of Tank 2, a 75' square rotating arm basin that was built in
1942,
whose walls still form the perimeter of the facility. </p>
<span style="font-weight: bold;">At top:</span> A
photo of the ABS Engineering Center, with the entrance to the Robust
Field Autonomy Lab at bottom center. The former rotating arm
of Tank
2 is visible at top.
<p style="text-align: justify;"><span style="font-weight: bold;">At bottom:</span> Members
of the lab at the ABS Engineering Center's grand opening in November
2016.</p>-->
</div>
<br />
</div>
<div style="clear: both; text-align: center;"><img alt="" style="width: 299px; height: 200px;" src="./index_files/StevensNewLogo.svg" /></div>
</div>
<div id="footer">
<p>© Copyright 2025 <a href="http://www.stevens.edu">Stevens
Institute of Technology</a> | <a href="http://www.stevens.edu/ses">School of Engineering and
Science Home</a> | <a href="http://www.stevens.edu/schaefer-school-engineering-science/departments/mechanical-engineering">Mechanical
Engineering Home</a></p>
</div>
</div>
</body></html>