How a Flock of Drones Developed Collective Intelligence

The drones rise all at once, 30 strong, the domes of light on their undercarriages glowing 30 different hues—like luminescent candy sprinkles against the gray, dusky sky. Then they pause, suspended in the air. And after a couple seconds of hovering, they begin to move as one.

As the newly-formed flock migrates, its members’ luminous underbellies all change to the same color: green. They’ve decided to head east. The drones at the front approach a barrier, and their tummies turn teal as they veer south. Soon, the trailing members’ lights follow suit.

Zsolt Bézsenyi

It’s beautiful. It’s also kind of amazing: These drones have self-organized into a coherent swarm, flying in synchrony without colliding, and—this is the impressive bit—without a central control unit telling them what to do.

That makes them utterly different from the drone-hordes you’ve seen deployed at places like the Super Bowl and the Olympics. Sure, those quadcopter fleets can number more than a thousand, but the movements and positions of each unit are all programmed ahead of time. In contrast, each of these 30 drones is tracking its own position and velocity, and simultaneously sharing that information with other members of the flock. There is no leader among them; they decide together where to go—a decision they make on the literal, honest-to-goodness fly.
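To get a feel for what that decentralized setup means in code, here is a minimal Python sketch of the idea: each drone keeps track of only its own state and whatever its flockmates broadcast, with no central controller anywhere. The class and field names are hypothetical, invented for illustration rather than taken from the team's actual firmware.

```python
from dataclasses import dataclass

@dataclass
class DroneState:
    """The state each drone tracks for itself and broadcasts to the flock."""
    drone_id: int
    x: float    # position east, meters
    y: float    # position north, meters
    vx: float   # velocity east, m/s
    vy: float   # velocity north, m/s

class Drone:
    def __init__(self, drone_id: int, x: float, y: float):
        self.state = DroneState(drone_id, x, y, 0.0, 0.0)
        self.neighbors = {}  # latest state heard from each other drone

    def broadcast(self) -> DroneState:
        """Share our own position and velocity with the rest of the flock."""
        return self.state

    def receive(self, other: DroneState) -> None:
        """Store the most recent state received from another flock member."""
        if other.drone_id != self.state.drone_id:
            self.neighbors[other.drone_id] = other
```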

Video by Balazs Tisza

They’re like birds in that way. Or bees, or locusts. Or any number of creatures capable of organizing themselves majestically and somewhat mysteriously into cohesive groups—a so-called emergent property of their individual actions. A few years ago, the researchers behind these drones managed to pull it off with 10 of them. Now they’ve done it with three times that many.

But pulling it off was more than three times as difficult. The drones owe their formation to a highly realistic flocking model described in the latest issue of Science Robotics. “The numbers themselves don’t express how much harder it is,” says Gabor Vásárhelyi, director of the Robotic Lab in the Department of Biological Physics at Eötvös University in Budapest and the study’s first author. “I mean, parents with three kids know how much tougher they can be to manage than just one kid. And if you have 20 or 30 to look after, that’s orders of magnitude more difficult. Believe me. I have three sons. I know what I’m talking about.”

Animation by Vásárhelyi et al.

Vásárhelyi’s team developed the model by running thousands of simulations and mimicking hundreds of generations of evolution. “The fact that they’ve done this in a decentralized fashion is quite cool,” says SUNY Buffalo roboticist Karthik Dantu, an expert in multi-robot coordination who was unaffiliated with the study. “Each agent is doing its own thing, and yet some mass behavior emerges.”

In coordinated systems, more members usually means more opportunities for error. A gust of wind might throw a single drone off course, causing others to follow it. A quadcopter might misidentify its position, or lose communication with its neighbors. Those mistakes have a way of cascading through the system; one drone’s split-second delay can be quickly amplified by those flying behind it, like a traffic jam that starts with a single tap of the brakes. A hiccup can quickly give rise to chaos.

But Vásárhelyi’s team designed their flocking model to anticipate as many of those hiccups as possible. It’s why their drones can swarm not just in simulation, but in the real world. “That’s really impressive,” says roboticist Tønnes Nygaard, who was unaffiliated with the study. A researcher at the Engineering Predictability With Embodied Cognition project at the University of Oslo, Nygaard is working to bridge the gap between simulations of walking robots and actual, non-biological quadrupeds. “Of course simulations are great,” he says, “because they make it easy to simplify your conditions to isolate and investigate problems.” The problem is that researchers can quickly oversimplify, stripping their simulations of the real-world conditions that can dictate whether a design succeeds or fails.

Instead of subtracting complexity from their flocking model, Vásárhelyi’s team added it. Where other models might dictate two or three restrictions on a drone’s operation, theirs imposes 11. Together, they dictate things like how quickly a drone should align with other members of the fleet, how much distance it should keep between itself and its neighbors, and how aggressively it should maintain that distance.
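The published model has 11 such parameters; as a rough illustration, here is a toy Python update rule built around just three of them: an alignment gain, a desired spacing, and a repulsion gain. The parameter names and values are invented for this sketch, not the ones in the paper, and the drone objects are assumed to carry x, y, vx, vy attributes like the earlier sketch.

```python
import math

# Three illustrative parameters standing in for the paper's eleven
# (names and values are hypothetical, not the published settings).
ALIGN_GAIN = 0.5        # how quickly to match neighbors' velocity
DESIRED_SPACING = 10.0  # meters to keep from each neighbor
REPULSION_GAIN = 2.0    # how aggressively to restore that spacing

def desired_velocity(me, neighbors):
    """Blend alignment and separation terms from nearby flockmates."""
    vx, vy = me.vx, me.vy
    for other in neighbors:
        # Alignment: nudge our velocity toward the neighbor's.
        vx += ALIGN_GAIN * (other.vx - me.vx) / len(neighbors)
        vy += ALIGN_GAIN * (other.vy - me.vy) / len(neighbors)
        # Separation: push away when closer than the desired spacing.
        dx, dy = me.x - other.x, me.y - other.y
        dist = math.hypot(dx, dy)
        if 0 < dist < DESIRED_SPACING:
            push = REPULSION_GAIN * (DESIRED_SPACING - dist) / dist
            vx += push * dx
            vy += push * dy
    return vx, vy
```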

To find the best settings for all 11 parameters, Vásárhelyi and his team used an evolutionary strategy. The researchers generated random variations of their 11-parameter model, using a supercomputer to simulate how 100 flocks of drones would perform under each set of rules. Then they took the models associated with the most successful swarms, tweaked their parameters, and ran the simulations again.

Sometimes a promising set of parameters led to a dead end. So they’d backtrack, perhaps combining the traits of two promising sets of rules, and run more simulations. Several years, 150 generations, and 15,000 simulations later, they’d arrived at a set of parameters they were confident would work with actual drones.
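In outline, that evolutionary loop looks something like the Python sketch below. The fitness function here is a trivial stand-in for the team's supercomputer flock simulations, and the population size, mutation scale, and selection rules are illustrative guesses rather than the study's actual setup.

```python
import random

N_PARAMS = 11      # parameters in the flocking model
POP_SIZE = 100     # candidate parameter sets per generation (illustrative)
GENERATIONS = 150  # roughly the number of generations the team ran

def simulate_flock(params):
    """Stand-in for the real simulation, which scored each candidate on
    cohesion, speed, and collision avoidance. Here we simply reward
    parameters near an arbitrary target so the loop runs end to end."""
    return -sum((p - 0.5) ** 2 for p in params)

def mutate(params, scale=0.1):
    """Tweak each parameter a little."""
    return [p + random.gauss(0.0, scale) for p in params]

def crossover(a, b):
    """Combine traits of two promising parameter sets."""
    return [random.choice(pair) for pair in zip(a, b)]

def evolve():
    population = [[random.uniform(0.0, 1.0) for _ in range(N_PARAMS)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=simulate_flock, reverse=True)
        elite = ranked[: POP_SIZE // 5]  # keep the best fifth
        mutants = [mutate(random.choice(elite)) for _ in range(POP_SIZE // 2)]
        mixed = [crossover(random.choice(elite), random.choice(elite))
                 for _ in range(POP_SIZE - len(elite) - len(mutants))]
        population = elite + mutants + mixed
    return max(population, key=simulate_flock)

best_params = evolve()  # the winning 11-parameter set under this toy fitness
```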

And so far those drones have performed with flying colors; real-world tests of their model have resulted in zero collisions. Then there’s the literal flying colors: the lights on the quadcopters’ undercarriages. They’re color-mapped to the direction of each drone’s travel. They were originally developed for multi-drone light shows—you know, Super Bowl type stuff—but the researchers decided at the last minute to add them to their test units. Vásárhelyi says they’ve made it much easier to visualize the drones’ status, spot bugs, and fix errors in the system.
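One simple way to wire up that kind of status light, purely as an illustration, is to map each drone's heading onto the color wheel. The snippet below does that in Python; the article doesn't specify the exact mapping the team's show lights use, so treat this as one plausible choice rather than theirs.

```python
import colorsys
import math

def heading_to_rgb(vx, vy):
    """Map a drone's direction of travel onto the color wheel so the
    flock's intent is readable at a glance. This particular hue mapping
    is an assumption, not the one used on the team's drones."""
    heading = math.atan2(vy, vx)                # radians in [-pi, pi]
    hue = (heading + math.pi) / (2 * math.pi)   # normalize to [0, 1]
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

# Example: the light color for a drone flying due east
print(heading_to_rgb(1.0, 0.0))
```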

They’re also beautiful, and straightforwardly so—a simple, roboluminescent representation of complex coordination.

