Bob Iannucci: CROSSMobile: Using Drones on Networks to Improve Infrastructure

So we launched this effort a few years ago to really ask and answer the question: what will networks be like in the future? That led us to a project that we call CROSSMobile, where we run an open, software-defined LTE network on campus. We wanted to measure our signal strength, so that led us to the project of building some instrumentation that we could walk around with. Well, we didn’t like to walk around too much, and we decided it would be interesting to strap those sensors onto drones. That opened up a whole range of research questions about the relationship between drones as things that measure a network’s performance, drones as elements of the network, and drones used in swarms to track targets, to participate in humanitarian assistance and disaster relief, and to do pipeline inspection. And that led us to using neural networks on drones.

One of the projects that we’re working on right now is aerial reinforcement learning. We built technology that lets the drone essentially take off with the expertise of a four-year-old at recognizing physical objects in the world, but the four-year-old doesn’t know exactly what you’re looking for. What we then created was the ability to provide, in effect, flash cards to the four-year-old. As the drone is flying, it’s sending back still images of what it sees, and an operator on the ground can say, “Yes, the object of interest is in the image,” or “No, it’s not.” Within about a minute or two, the drone very quickly gets it and understands what object we’re looking for.

We recently did an experiment where we sent some drones up, one at first, to recognize light poles, to simulate a city infrastructure inspection kind of task. The drone went up knowing nothing about light poles. So the drone is scanning around, the operator on the ground sees the images, there’s a light pole in the picture, and the operator pushes a button that says, “Yes, this is an interesting image.” Then, when the camera pans to some other area where there isn’t a light pole, we say, “No, no light pole there,” and pretty quickly the drone recognizes light poles.
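The talk doesn’t describe the implementation, but the flash-card idea maps naturally onto updating a small classifier head on top of a frozen, pretrained vision backbone as operator labels arrive. The sketch below is a hypothetical illustration in PyTorch; the class name, the stand-in backbone, and all parameters are assumptions, not the CROSSMobile code.

    import torch
    import torch.nn as nn

    class FlashCardLearner:
        """Hypothetical sketch: learn a yes/no concept from operator 'flash cards'."""

        def __init__(self, backbone: nn.Module, feature_dim: int):
            self.backbone = backbone.eval()          # frozen, pretrained features
            for p in self.backbone.parameters():
                p.requires_grad_(False)
            self.head = nn.Linear(feature_dim, 2)    # object of interest vs. not
            self.opt = torch.optim.SGD(self.head.parameters(), lr=1e-2)
            self.loss_fn = nn.CrossEntropyLoss()

        def label(self, frame: torch.Tensor, is_interesting: bool) -> float:
            """One operator button press on a downlinked still image."""
            with torch.no_grad():
                feat = self.backbone(frame.unsqueeze(0))
            logits = self.head(feat)
            target = torch.tensor([1 if is_interesting else 0])
            loss = self.loss_fn(logits, target)
            self.opt.zero_grad()
            loss.backward()
            self.opt.step()
            return loss.item()

        def predict(self, frame: torch.Tensor) -> bool:
            """Does the drone now think the object of interest is in this frame?"""
            with torch.no_grad():
                return self.head(self.backbone(frame.unsqueeze(0))).argmax(1).item() == 1

    # Stand-in backbone; the real "four-year-old" would be a pretrained vision model.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU())
    learner = FlashCardLearner(backbone, feature_dim=128)

    # Simulated flash cards: frames plus the operator's yes/no answer.
    for _ in range(20):
        frame = torch.rand(3, 64, 64)
        learner.label(frame, is_interesting=bool(torch.rand(1).item() > 0.5))
    print(learner.predict(torch.rand(3, 64, 64)))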
Well, that’s great, except for the fact that the drone’s battery only lasts about 15 minutes. So as the first drone is landing, we’ve now developed the capability to take a second drone off the ground and transfer the knowledge that the first one gained about how to recognize light poles into the second one. The first drone can land, and the second one is now beyond the four-year-old’s knowledge: it has the four-year-old’s knowledge, plus light poles.
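The handoff mechanism isn’t detailed in the talk. Continuing the hypothetical FlashCardLearner sketch above, one simple way to picture it is that only the small learned head (the light-pole knowledge) is serialized and sent to the second drone, which already carries the same pretrained backbone. The function names below are illustrative assumptions.

    import io
    import torch

    def export_knowledge(learner) -> bytes:
        """On the landing drone: serialize what was learned in flight."""
        buf = io.BytesIO()
        torch.save(learner.head.state_dict(), buf)
        return buf.getvalue()

    def import_knowledge(learner, blob: bytes) -> None:
        """On the launching drone: adopt the transferred knowledge before takeoff."""
        learner.head.load_state_dict(torch.load(io.BytesIO(blob)))

    # blob = export_knowledge(first_drone_learner)    # first drone, while landing
    # import_knowledge(second_drone_learner, blob)    # second drone, before takeoff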
The ultimate impact of this research comes when we realize that what we’re creating is a new kind of computing platform. Doing this work on drones today, where we’re pushing very hard to put all the computation on the drones (and drones, by the way, have big batteries, so we don’t have to worry much about power), teaches us how to distribute computing in a way that can be energy-efficient in the future, and eventually that will make its way into all of the tiny sensors scattered throughout the environment. Above all of it is some way of programming these intelligent sensing devices: whether they’re camera sensors on drones or sensors in the pavement, we’d like to be able to program them in the same way.
