These are the 5 best Amazon deals right now

— Our editors review and recommend products to help you buy the stuff you need. If you make a purchase by clicking one of our links, we may earn a small share of the revenue. However, our picks and opinions are independent from USA Today’s newsroom and any business incentives.

While Amazon hasn’t yet announced the official date of Prime Day, rumors are swirling that it’ll fall on or near July 17. As such, there’s been a significant decline in deals on worthwhile items. While this bodes well for anyone waiting to see what discounts Prime Day will offer, it’s not so great if you’re looking to save money on something you need sooner rather than later.

Thankfully, there are still a handful of deals that are good enough to impress us, and they’re on useful products. But if you don’t need to buy something right away, we do recommend adding what you need or want to your Amazon cart or wish list and waiting until Prime Day for the best shot at the best deals. In the meantime, here are the top deals we found on Amazon today.

1. Tools to make yard work something you look forward to

Yard work can seem like a chore, but it can also be fun, especially if you’ve got a good set of power tools and equipment to get the job done faster. Right now, Greenworks is running a Deal of the Day on a whole arsenal of lawn care tools, including a mower, a pole saw, a blower, a string trimmer, and a hedge trimmer.

They’re all cordless and run on the same 80V battery (also on sale today), which can be used interchangeably, so you don’t need one battery for each tool. Some include the battery and some don’t, so pay attention to what you’re getting. Greenworks’ outdoor lawn equipment is always popular among our readers when it goes on sale, and these are the best prices we’ve seen on these products in about a year.

2. Extra-long lightning cables for convenient charging

A charging cable that’s just too short to use comfortably while charging your phone is not a world-ender, but it is annoying. The best solution, aside from using your phone less and conserving the battery, is to get a cable that gives you more flexibility. Right now, Anker’s discounting its 10-foot lightning cables in red, black, gold, and silver, but you’ll need to use the code “ANKER454” at checkout.

I’ve been using these cables for the last few years and I love them. They’re powerful enough to fast-charge (with the right adapter), and the braided nylon prevents fraying and splitting. I personally opted for the red cables and couldn’t be happier, but the silver, gold, and black are just as nice.

Get the Anker Powerline+ II 10-Foot Lightning Cable for $13.99 (Save $6) with the code “ANKER454”

3. A new set of sheets with loads of positive reviews

Sleeping on a fresh, new set of sheets is hard to beat, and right now you can pick up a highly rated set in one of Amazon’s Deals of the Day. These sheets are made of 100% Egyptian cotton and tout a 1,000 thread count.

We haven’t seen the price drop this low for this sheet set since last September, and while $86 might seem like a lot to shell out for sheets, nearly 2,000 reviewers claim it’s well worth the investment. They’re soft, well made, and machine washable, and they come in a variety of colors. One buyer even said, “these are some of the nicest sheets I’ve ever seen, let alone slept in.”

Get the Thread Spread True Luxury Egyptian Cotton Sheet Set for $86.24 (Save $28.75)

4. The classic KitchenAid stand mixer

Whether you’ve got a wedding coming up or you’re just ready to upgrade your baking game, you’ll be delighted to know that the ever-popular KitchenAid stand mixer is on sale right now. You can get it for just $210 in a handful of fun colors, including almond, majestic yellow, cobalt blue, tangerine, pistachio, and persimmon. Considering the Artisan mixer usually sells for $300-$400 (depending on the color), this discount has us excited.

Get the KitchenAid Artisan 5-Qt. Stand Mixer for $209.99 (Save $80)

5. Cult-favorite water bottles

Drinking enough water every day can do wonders for your health. Staying hydrated is great for your energy levels, your skin, and your appetite. The easiest way to ensure you’re on top of your intake is to keep a water bottle with you. Right now, there’s a Deal of the Day on six different Contigo water bottles, many of which are down to their lowest prices in months. I had an older version of the Addison bottle a couple years ago and loved it until I lost it on the train. They’re well-made, they don’t leak, and they’re large enough that you’re not constantly refilling them.

Prices are accurate at the time this article was published, but may change over time.


Facebook has 'no plans' to listen in on your conversations (for now), but the creepy stories mount


Facebook has ‘no plans’ to listen to your conversations for now. But the social media giant does not appear to be ruling out the possibility that at some point it might.

SAN FRANCISCO – California technology analyst Brian Solis was having a conversation with a friend while the two were driving through Texas. His friend was buying a ranch in Texas but was having trouble with the financing because it was considered a “barndominium.” Solis had never heard the term before nor had he ever researched it online.

But as soon as he hopped out of his friend’s car and checked Facebook, up popped an ad for barndominiums in Texas.

“How is that possible?” he wrote on Facebook.

His friends piled on with stories. Facebook began showing one friend ads from a bathing suit company after her daughter showed her, in person, something she had bought there. Another said she was talking about Lexus with a friend in the car, and then the friend started getting Facebook ads from Lexus.

People were split on whether Facebook was listening in. Maybe it was tracking people when they were together and showing them ads of interest to their friends, Solis speculated. But, he said, “the general consensus was that something was happening that is creepy. It was just too specific to be a coincidence.”

No matter how many times Facebook denies it and media outlets like this one write articles debunking it, people keep getting freaked out that the social media giant is eavesdropping on their conversations.

The unfounded theory goes like this: Facebook records audio over smartphone microphones and then uses voice recognition software to show relevant ads in people’s News Feeds. Facebook says it only accesses users’ microphones if they have given the Facebook app permission and if they are actively using a specific feature that requires audio such as voice messaging.

But the theory keeps floating around the Internet like a bad cold. In April, Mark Zuckerberg was asked about audio surveillance while testifying before Congress.

“Yes or no, does Facebook use audio obtained from mobile devices to enrich personal information about users?” asked Sen. Gary Peters, a Michigan Democrat. “No,” the Facebook CEO replied.


Facebook still flat-out denies it’s bugging your conversations like Gene Hackman in Francis Ford Coppola’s 1974 film “The Conversation.” The reason to believe Facebook? It collects so much information on its 2.2 billion users, it really doesn’t need to. But its latest remarks on the subject have set off a new wave of concerns.

The subject came up again last week when Facebook responded in writing to questions posed by lawmakers.

Facebook reassured Congress several times it’s not listening in on people’s conversations. But in a response to Sen. Ted Cruz, the Texas Republican who had asked if Facebook would commit to never surreptitiously gathering audio or visual information on its users, Facebook repeated that it does not currently listen to users, but did not rule out eavesdropping in the future.


“NOT EVEN A PINKY PROMISE?” asked online publication Quartz. Then Slate pressed Facebook on the subject. Facebook replied: We don’t eavesdrop on people and “we have no plans to change this.”

Facebook told USA TODAY the same. “Facebook has never used your phone’s microphone to inform ads or to change what you see in News Feed, and we have no plans to do so in the future,” Facebook’s vice president of ads Rob Goldman said.

That may be reassuring to some, says Slate, but it’s still different from saying no way will this ever happen.

“The takeaway at this point seems to be: Facebook isn’t spying on us via our phones, and it doesn’t have immediate plans to do so in the future,” wrote Will Oremus, “but you never know.”

But “you never know” is what keeps these suspicions alive and kicking. That and the fact that the tech industry is very much getting into the listening business, with digital helpers on smartphones and smart speakers that exist to listen for our voice commands and respond to our every want and need.

“We are living in an era where active listening is now becoming an extension of our connectivity,” Solis says.

These new devices sitting on bedside tables and kitchen counters are a big hit – but there have been some mishaps, too.

The Google Home and Amazon Echo are always listening, but they are only supposed to start recording when they hear a voice command. With Google Home, that’s “OK Google” or “Hey Google.”

Last October, a tech blogger discovered the Google Home was recording audio in his home even when he didn’t give voice commands. And a Portland, Oregon, family’s private conversations were recorded by their Amazon Echo smart speaker and emailed to a random phone contact.

These experiences may be anomalies, but Solis says consumers are becoming wary. And on the heels of Russian interference and Cambridge Analytica, Facebook isn’t one of America’s most trusted companies right now.

Facebook executives seem to know that audio surveillance is a touchy subject these days. The Silicon Valley company shelved plans to introduce Facebook’s new home product – connected speakers with digital-assistant and video-chat capabilities to compete with Google’s Home and Amazon’s Echo – until the company could review how the device handles people’s personal information and how people respond to it.

For Facebook and the industry, Solis suggests this rule of thumb. Don’t listen in on conversations unless people know you are listening and have given you permission to listen.

As a consumer, Solis says, “I am not comfortable with active listening on devices that I haven’t explicitly authorized to listen.”

If you’re worried about Facebook surveillance, here’s what you can do: Go into your phone settings, choose Facebook and turn off the permissions that allow the Facebook app to access the microphone. But be forewarned: You will need to turn those permissions back on to record live videos with sound.

More: How to stop your devices from listening to (and saving) what you say

More: How Facebook tracks your every move: Fact vs. fiction

More: How to listen to what Amazon’s Alexa has recorded in your home



VR Pilot Training Now Comes With a Sense of Touch

Aviation simulators—the most valuable training tool pilots have—have to get things right. The instrument panel. The wind and the rain. The response of the aircraft when you flip a switch or pull on the yoke. It all must be as high fidelity, as true to life, as possible. Otherwise, pilots risk uncertainty or disorientation when transferring their simulated experience to the real world.

With the rise of virtual reality-based simulation, in which users wear headsets instead of sitting in a cockpit where everything is real but the view out the windshield, the challenge of maintaining that verisimilitude has really taken off. These systems cost just a few thousand dollars, instead of the tens or hundreds of thousands you pay for a full-size cockpit mockup. They’re smaller and more portable too, a plus for clients like militaries who like the option of training pilots in remote locations.

The downside is that in today’s systems, besides the joystick, rudder pedals, and maybe a throttle lever, all the controls are digital renderings. You “activate” the switches and dials by poking and jabbing into thin air. That amplifies the challenge of VR-based training, where the nuances of touch and movement are essential to programming the pilot’s brain.

One solution—long pursued across many virtual-reality applications, from gaming and design to sex—is haptic feedback. Mechanical actuators placed in contact with different areas of the user’s body, most notably the hands and fingertips, add the sensation of touch to these computer-generated worlds. Now, a French company called Go Touch VR is putting it into action.

Working with US virtual-reality simulation software developer FlyInside, Go Touch VR has adapted its fingertip-mounted technology for aviation. The goal is to give pilots using virtual-reality flight simulators that touch-based confirmation with every switch and dial used on their flights, just as they would experience in the kind of full-sized cockpit mockups found in large, commercial multimillion-dollar motion simulators.

“You should only have to give a glance to the button that you need to press during an operation, while all the rest of the action is confirmed by the touch sensation—the ‘click’ that you have from the virtual switch,” says Eric Vezzoli, Go Touch’s co-founder and CEO. “Without that fundamental confirmation, you must look back and check if the action was performed, and spend precious time and attention that you need to dedicate to flying operations.”


In Go Touch VR’s new system, derived from its engineers’ expertise in haptic feedback, the user wears three sensors on each hand, which look like the blood-pressure sensors doctors place on your fingertips. By applying pressure to your fingertips, the actuators can replicate object stiffness, coarse textures, and the sensation of holding physical objects in your hands. The devices contain numerous actuators beneath a flexible rubber cover, and they can be individually controlled and varied in pressure to simulate everything from light touches to more pronounced contact. Though clunky in appearance, they’re lightweight and designed not to interfere with natural hand and finger movements. (The company is working to miniaturize them further before starting production.)
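
To make the idea of "skin indentation" feedback a little more concrete, here is a minimal sketch of how a driver might turn how far a virtual fingertip has pressed into an object into a pressure command for one actuator. The function names, the linear stiffness model, and the constants are all hypothetical illustrations, not Go Touch VR's actual control scheme.

```python
# Hypothetical sketch: convert virtual "skin indentation" into a fingertip
# actuator command. Nothing here is Go Touch VR's real API; it only
# illustrates the idea of cutaneous force feedback described above.

def actuator_pressure(indentation_mm: float, stiffness: float = 0.8,
                      max_pressure: float = 1.0) -> float:
    """Return a normalized pressure command (0..1) for one fingertip actuator."""
    if indentation_mm <= 0:                  # fingertip not touching the virtual object
        return 0.0
    pressure = stiffness * indentation_mm    # simple Hooke-like (linear) model
    return min(pressure, max_pressure)       # clamp to what the hardware can apply

# A stiff virtual switch ramps pressure up faster than a soft button, which is
# how one actuator can suggest different materials and the "click" of a toggle.
print(actuator_pressure(0.5, stiffness=0.8))  # soft contact  -> 0.4
print(actuator_pressure(0.5, stiffness=2.0))  # stiff switch  -> clamped at 1.0
```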

The view from the headset omits the attachments from its representation of the user’s hands, so they’re easy to forget about. All the pilot in training knows is that when she flips a switch with her finger, she can feel it move as well as see it. Part of the effectiveness, the company explains, comes from the user’s brain amplifying the sensors’ work by merely anticipating and recognizing the physical contact.


The trick is fine-tuning the subtle mechanical interactions—what Vezzoli calls the “cutaneous force feedback through skin indentation”—so they feel natural. “The technology reproduces the exact skin stimulation that you perceive when you are interacting with real objects,” he says. “We are concentrating in the area that you use to interact most, the fingertips. When we couple it with a visual rendering in virtual or augmented reality, you reach out your hand toward an object, activating the skin pressure, the brain ‘clicks’ and let you perceive the virtual object in front of your eyes as real, because it is feeling a sensation that it is expecting.”

The system has potential far beyond aviation. According to the company, the technology can improve a wide variety of VR interactions, including, for example, catching and throwing balls with greater accuracy than other control systems.

The company exhibited the product, which is still in the development kit phase, at the European defense and security conference Eurosatory in mid-June. It says pilots and engineers who tried it out affirmed its effectiveness, and that some noted the portability benefits for military personnel and others the ease of use.

In addition, the technology has potential benefits beyond aviation, including retail contexts, allowing consumers to “touch” products remotely before buying them, and manufacturing training roles, where manual skills need to be taught and practiced before being applied in the real world. Beyond that, the sky is pretty clearly the limit.


Denmark’s Carbon Footprint Is Set to Balloon—Blame Big Tech

This story originally appeared on CityLab and is part of the Climate Desk collaboration.

Denmark’s reputation as one of the most proactive countries in the world in the fight against climate change took a heavy knock this week. Despite its reputation as a green energy pioneer, a Danish government memorandum obtained by newspaper Politiken suggests that the country’s carbon emissions are due to rise sharply, by as much as 10 percent between now and 2030. The potential culprits: Apple, Facebook, and Google, among others.

That’s because these tech companies are all either planning or building major data centers in the country. It’s estimated (if not confirmed) that Denmark will host six new data centers within 12 years, and the three companies mentioned above are all either in the process of constructing or scouting sites for major facilities. According to analysis by the Danish Energy Authority, just one data center could push up the country’s electricity consumption by 4 percent—that’s more than is used in an entire year in Denmark’s third city, Odense.

Those presumptions are indeed broadly borne out by figures from elsewhere in the world. In the US alone, data centers consume 90 billion kilowatt-hours of electricity a year—enough to meet 40 percent of the United Kingdom’s annual energy needs. This consumption isn’t declining any time soon, either. By 2025, the communications industry could consume one-fifth of all the world’s electricity. The increasing use of the cloud for video streaming is a major factor driving the need for ever more capacious data centers, as is the rise of bitcoin mining.

The growth in use is large and alarming, and provides its own explanation as to why Denmark has been chosen as a site for the expansion of such facilities. Communications companies are well aware of their heavy carbon footprint and are seeking out places with large renewable energy sectors. The same impulse drove Google this year to buy up all the energy from the Netherlands’ largest solar park, while it has set up data centers in Sweden and (along with Microsoft) in Finland.

Certainly, Denmark can supply more green energy than most. In 2017, the country generated 43 percent of its power from wind turbines, a level it hopes to increase to 50 percent by 2030. Green energy is not its sole selling point, either. Add to that an existing role as a global data-cable hub, a stable, corruption-free government, and a location where disruptive major natural disasters are extremely rare, and you have an ideal-looking place for communications companies looking to expand.


The problem is that Denmark’s green energy has its limits. There is no way that existing renewables facilities could hope to meet the prospective demand from data centers. That means the country will have to generate power using more conventional, polluting sources, including coal. Although it aims to phase them out by 2030, Denmark in fact retains three coal-fired power stations, which growing energy demand will make it harder to dispense with. Just as the country has succeeded in reducing its emissions, it now looks set to push them up again.

So what can be done? There are some ways to offset and mitigate data centers’ energy consumption. One could be to channel the heat they produce into district heating systems, a process already in place in Sweden that could at least reduce the need to generate electricity for nearby towns and cities. A new design proposal called The Spark from a Norwegian real-estate developer and the architecture firm Snøhetta takes this idea to its logical conclusion, depicting a city whose very core is built around data centers, using their excess heat to warm the city and pumping the city’s cool air back into the centers to create a virtuous circle. (This makes sense as a model, even if suitable real-life settings might be hard to come by.)

Another long-discussed option is locating data centers underwater on the seabed, where water can provide natural cooling for the heat-generating computer servers that removes the need for electric-powered air conditioning—although it’s still unclear what negative effect heating nearby waters might have.

Microsoft has in fact just submerged one of these not all that far from Denmark, sinking a 40-foot-long container filled with data drives off the coast of Scotland’s Orkney Islands. Solutions like this could help drive down data centers’ energy consumption—but Denmark is going to need something altogether more comprehensive if its long-standing push to cut emissions isn’t to come to nought.

For the rest of the world, however, Denmark still doesn’t represent the worst situation. Data centers located in a country with a cool climate and high renewable energy capacity are less environmentally damaging than their counterparts in hotter regions where coal is still widely used for power generation—such as much of China or the Southern United States. All the same, the fact that Denmark’s green reputation is now attracting industries that may increase the country’s carbon emissions substantially seems like something of an own goal.

Bose SoundSport Free Review: Amazing Sound, No Strings Attached

In the past, other WIRED writers have loved real wireless buds, but I’ve been skeptical. In fact, I viscerally dislike them. They hit a specific Uncanny Valley of technology that threatens to cross the line between implant and implement.

I’m not the only one. While testing the Bose SoundSport Free, my toddler daughter repeatedly asked me to take them out. And I can’t blame her. I’ve been known to pick up my infant before I hug my father-in-law, just so that the baby will snatch his wireless bud out of his ear (sorry!).

Moreover, the design doesn’t seem to make sense, especially for workout headphones. It’s one thing if a bud is dislodged when you’re sitting at your desk, but when you’re running and sweating? What if they fall in a thorny bush? Or a pile of dirt, or into a puddle of MRSA sweat at your gym? No thanks.

After wearing them, though, I’ve been reluctantly converted by the Bose SoundSport Free. They fit securely and comfortably; they’re convenient and easy to use. And they sound so friggin’ good! I’ve written that sound quality might matter less with workout headphones, but when you can get it, you might as well enjoy it.

Like many of Bose’s products, these ‘buds are outrageously expensive. Even writing that a pair of workout headphones costs $200 makes me gag a little bit. But in nearly every way but price, they performed spectacularly.

This Case is the Place


The headphones come with a clamshell charging case, three sets of winged sport tips, and a USB cable. It took two hours to charge the first time, and Bose states that the earbuds have a five-hour run time. I got down to 70 percent after three hours, which seems like a slightly longer runtime than stated. But in a week of dog walks, gym trips, and trail running wearing the buds, it was hard to run them down any farther than that.

That’s because I was terrified to leave such expensive buds out of the case, in a house filled with dogs and kids where tiny objects tend to go missing without a trace. I found diligently returning them every time I was done was the best way to ward off the same gremlins that steal half of your favorite pair of socks.

When I carefully replaced the buds in their little house, they started recharging, since the case also stores enough battery life for two full charges. You can check the case’s battery life by pushing the opening clasp to illuminate the built-in status LEDs. Bose has a companion app that lets you locate your earbuds, but sometimes it was only able to reliably locate one bud or the other.

Despite my initial skepticism, I found these Bose ‘buds comfortable to wear, even for three straight hours. Unlike in-ear buds, which often require an involved and irritating trial period wherein I poke one little rubber blueberry in my ear canal after another, the Bose ear tips are larger. They rest just outside the canal and are held in place with wings.

The rubber of the wings was much softer and more comfortable than other buds that I’ve tested, and the medium size out of the box worked well. It was so convenient, and such a relief to not have to test different bud sizes by jogging in place in my living room in front of my bewildered dogs.

They stayed in place while hiking, running, walking, and climbing. They’re IPX4-rated, so they didn’t react at all to my sweaty ear folds. They did let in ambient noise, but I appreciated that quality. I like being able to hear dogs and cyclists approaching when I’m outside.

Oh yeah, and the sound. The sound! The first time I heard the intro to Missy Elliott’s “Lose Control” come thundering in, I almost did lose control. Everything, from L7’s growling guitars to silky Celtic fiddling, sounded rich, full, and vibrant.

In particular, the bass sounded fantastic. I could almost feel Beyoncé’s “Formation” pulsing in my chest. These sounded good enough to transition from workout headphones to an everyday desk set.

And of course, they’re by Bose, so everything has a sleek, premium look and feel. The smooth black case feels great resting in your hand, and the earbuds click satisfyingly in place thanks to some magnets. The app even suggests great nicknames for your buds when you set them up. Little Miss Dynamite? Why yes! That’s me!

Buggity Buggin’

Though their rockin’ sound helped win me over, there are still a few issues worth mentioning. The on-bud controls are stiff and difficult to push. I had to take them out to turn them off and turn the volume up and down, or answer calls. If you’re anything like me (by which I mean a total klutz), this is when you’re going to lose one—while trying to answer a call, untangle two dog leashes, and walk at the same time.

Thankfully, Bluetooth never dropped out on me, even when climbing and walking around 30-foot boulders away from my phone. That said, I have seen comments that some customers have experienced lag or sound dropping out while watching videos on phones or tablets. Bose has updated the firmware on the SoundSport Free since it was released last year, so I updated my tester. I didn’t notice any lagginess while watching videos on my computer, but I did notice a barely-perceptible lag while watching videos on my iPhone 8. I don’t usually watch movies on my phone, so this didn’t bother me. But at this price point, there shouldn’t be any bugs at all.

I understand that these aren’t for everyone. They’re 1.5 inches deep and 1.25 inches tall–about as wide around as a quarter. They’re not small, and they stick out of your ears. So yes, you will look a little weird. And I know they’re rugged, high-quality buds, but they’re so expensive. I couldn’t keep myself from treating them like tiny precious objects, closing up that tiny clamshell sarcophagus with as much care as if it held the remains of a long-lost pharaoh. I felt a little bit like a pretentious dweeb when wearing them, but maybe that’s part of the package when you buy a Bose product.

You might miss noise filtering if you have a busy city commute. And if you stream movies on your phone, that lag might get annoying, if you even notice it. But if you love amazing sound quality, comfort, and convenience, the Bose SoundSport Free are worth a look. I finally understand why people can be convinced to pay so much for these things.

How Can We Make Technology Healthier for Humans?

In a well-known parable, a group of blind men encounters an elephant. Each man touches a different part of the elephant and receives very different tactile feedback. Their later descriptions of the elephant to each other disagree, though each individual’s description is accurate and captures one portion of the elephant: a tusk, a leg, an ear. Humans often have only partial information and struggle to understand the feelings and observations of others about the same problem or situation, even though those feelings and observations may be absolutely accurate and valid in that person’s context.

Our relationships with technology are similar: Each of us relates to technology in a unique, highly personal way. We lose or cede control, stability, and fulfillment in a million different ways. As Leo Tolstoy wrote in the novel Anna Karenina, “All happy families are alike; each unhappy family is unhappy in its own way.”

In the same vein, the road back from unhappiness, the path to taking control over technology, and, by extension, the path to regaining freedom of choice takes a multitude of steps that are different for each of us. The steps nonetheless carry some common characteristics that we can all use as a basis for rediscovering and reentering real life.

Excerpted from Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back by Vivek Wadhwa and Alex Salkever (Berrett-Koehler Publishers).

The refrain we commonly hear is that we need to unplug and disconnect. Conceptually, this recommendation may feel good as a way to take back total control and to put technology back in its place as a subservient, optional tool. But using technology is no longer a matter of choice.

If you were to apply for a white-collar job of any kind and inform the hiring manager that you refuse to use e-mail, you’d get a swift rejection. Our friends share pictures digitally; no longer are printed photographs of the soccer team or birthday party mailed to us. Restaurants that use the OpenTable online reservation system often will not take phone calls for reservations. Even the most basic services, such as health care and checking in for a flight, are in line for mandatory digitalization. Yes, we can opt out of those services and businesses, but if we do, we lose out.

Unplugging wholesale is not an option. Nor for most of us is it an appropriate response to life in the age of technology. The question then becomes how to selectively unplug. How can we set better limits? How can we control our environments at work and at home, and the environments our children live in, in order to make them a bulwark against assaults on our freedoms, privacy, and sociability?

Understanding Our Tech Dependence and Addiction

Vivek first visited China more than a decade ago, before the era of wireless data connections and ubiquitous broadband. He found that he could not book ordinary hotels in advance and that catching a taxi was a nightmare because no one spoke English. He needed to have the concierge write his destination on a piece of paper to hand to the taxi driver, praying that he didn’t end up in the wrong part of the city.

About the Authors

Vivek Wadhwa is a distinguished fellow at Harvard Law School’s Labor and Worklife Program and a distinguished fellow and professor at Carnegie Mellon University’s College of Engineering.

Alex Salkever is the former technology editor of BusinessWeek.com and vice president of marketing at Mozilla. He advises companies around the world on how to adapt to rapid technology changes.

When he visited again in 2016, Vivek found that the technology landscape had changed. Everyone had a smartphone with fast information transfer. Booking hotels was easy, as were finding online restaurant reviews and catching cabs. Communication was easier, not because more people spoke English but because real-time translation applications had become so good that the Chinese people could hold slow but functional conversations with Vivek by uttering a phrase into their phones and playing back the English version. This trip was less fraught with stress and uncertainty, thanks to modern technology.

The smartphone became a way to help Vivek make the most of his journey and spend less time on the drudgery of logistics and discovery. He felt more in control, better able to navigate, and more mentally free to experience and be present on the trip rather than worry about where he would stay or eat. And whereas using Google Maps in our hometown takes us away from the present and reduces us to watching the blue dot and remembering a lot less about the journey, the map and general online knowledge are an enormous help to the traveler who visits the hinterlands of China, where navigation is more challenging.

In almost every case with regard to our use of technology, the context matters. The nuances of context offer special challenges in building smart strategies for healthy technology use and in shifting our interactions with technology from toxic to measured and beneficial.

There is no defined category for technology addiction, but psychiatrists have been debating whether internet addiction is a real malady. It was not added to the latest version of the Diagnostic and Statistical Manual of Mental Disorders, the diagnostic bible of mental health professionals around the world. (Online gaming is a subsection of the gambling-addiction section in that publication.) But a working definition of internet addiction serves as a useful lens through which to view most technology pathologies. In an article on the topic, psychiatrist Jerald Block broke down internet addiction into three clear subtypes: sexual preoccupation, excessive gaming, and excessive or uncontrolled e-mail or text messaging. This article was written in 2008, so Block probably had not taken account of social media, then not yet in broad adoption. Social media, online shopping, and video watching would be additional subcategories today.

Regardless of the category, Block’s enumeration of the phenomenon’s negative influences is relevant to nearly any form of addiction or technology pathology.

The first is excessive use, sometimes associated with a loss of sense of time or an (occasionally fatal) neglect of basic needs such as food, drink, bodily evacuation, and sleep. The second is some form of withdrawal, including feelings of anger, irritability, tension, or depression when a device is not available or when there is no (or limited) internet connectivity. The third is tolerance of and willingness to make alterations or purchases to accommodate the addiction. The tolerance may be to acquiring better computer equipment or more software, to spending more hours of use, or to spending a great deal of money. The fourth is the negative psychic repercussions stemming from arguments, lying, lack of achievement, social isolation, and fatigue. According to the research cited earlier, the repercussions include depression, anxiety, and loneliness.

With these negative influences in mind, we can propose a simple set of questions to ask ourselves in deciding how to create a more mindful and conscious engagement with our technology. Does our interaction or use of the technology make us happy or unhappy? There are many derivatives of this question: Does it make us tense or relaxed? Does it make us anxious or calm? The answer may be “both,” and that is OK, but we should consider whether, on balance, an interaction leaves us with good or bad feelings.

Good Tech or Bad Tech: Engagement by Design

One way to address the overall question of how a technology affects you is to go through the following exercise. It is a classic decision-framing exercise, not magic; but being able to count, visualize, and weigh effects and considerations is immensely helpful in undertaking it.

Here is what you do. Write down a particular activity or technology at the top of a sheet of paper. (It is definitely best to do this exercise on paper.) It can be anything relating to screens and technology. Draw a line down the middle of the paper. On the left-hand side, list all the positive things and benefits that you feel this technology or technology-driven behavior brings you. On the right-hand side, list all the negatives.

Ask yourself: Should you remove Facebook or Twitter from your phone? Should you install an application such as Slack on it? Should you ban screens from your bedroom? Should you turn off the internet on Sundays and after 8 p.m.? Should you lock your phone in your car’s glove compartment? If you consume porn or online gaming, should you completely ban it from your life in order to restore balance? These are some of the decisions you will want to make.


You will also want to examine the secondary effects. For example, Alex has until recently used the music app Spotify to play tunes during his runs and workouts. On its face, this seems to make sense. Research has shown that music can positively affect motivation to work out. Alex really liked the feature on Spotify that matches his running pace with song beats of the same pace.

Then he started to pay attention to how much time it was taking for him to manage Spotify during workouts and how much time it was taking away from the workout. It wasn’t the majority of the session, but it was considerable. For example, in a standard weightlifting and calisthenics workout, Alex was spending about three minutes per session managing songs. In a thirty-minute session on a busy day, that was 10 percent of his time—for no good reason. It was dead time due to technology.

Listening to music on Spotify is surely a net positive: Providing an endless selection of tunes with infinite playlists, it opens up rich new worlds. The service also makes sharing with friends very easy. It allows Alex to expose his children to Bach, Mozart, John Coltrane, and Celia Cruz, all from one easy screen, the same screen from which they hear music by Nicki Minaj, the Gym Class Heroes, and Kendrick Lamar. But this example shows the importance of consciously designing the style of our engagement even with a technology application whose use is, by and large, positive.

We can efficiently analyze our interactions with technology, and evaluate their effects, through six questions. The answers can be as simple as a mental checklist, and they are usually obvious and intuitive. It can even be useful to list positives and negatives explicitly.

The questions to ask yourself about a technology or application are as follows: Does it make us happier or sadder? Do we need to use it as part of our lives or work? Does it warp our sense of time and place in unhealthy ways? Does it change our behavior? Is our use of it hurting those around us? If we stopped using it, would we really miss it?

In engaging with technology, we should actively and consciously lean toward the contexts and uses in which we find the technology behavior to be largely beneficial and satisfying. Though simple, it’s an approach that any of us can make work, simply by asking ourselves relevant questions—and being honest about the feelings and other effects the technology raises in us.

The Rise of DNA Data Storage

The 144 words of Robert Frost’s seminal poem “The Road Not Taken” fit neatly onto a single printed page or a 1 kilobyte data file. Or in Hyunjun Park’s hands, a few drops of water in the bottom of a pink Eppendorf tube. Well, really what’s inside the water: invisible floating strands of DNA.

Scientists have long touted DNA’s potential as an ideal storage medium; it’s dense, it’s easy to replicate, it’s stable over millennia. And in the last few years they’ve managed to encode all kinds of things in those strings of As, Ts, Cs, and Gs: War and Peace, Deep Purple’s “Smoke on the Water,” a galloping horse GIF. But in order to replace existing silicon chip or magnetic tape storage technologies, DNA is going to have to get a lot cheaper to predictably read, write, and package.
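
For a sense of how data maps onto those four letters, here is a toy sketch of the most naive possible encoding: two bits per base. Real storage schemes add addressing, error correction, and biochemical constraints (such as avoiding long runs of a single base), so treat this only as an illustration of the basic idea, not anyone's production pipeline.

```python
# Toy illustration only: pack binary data into the DNA alphabet at two bits
# per base and unpack it again. Real DNA storage systems are far more elaborate.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

line = b"Two roads diverged in a yellow wood,"
strand = encode(line)
print(len(line), "bytes ->", len(strand), "bases")  # four bases per byte
assert decode(strand) == line                       # round-trips losslessly
```

Under this toy scheme, the 1-kilobyte poem file becomes a strand of roughly 4,000 bases; the hard part, as the rest of the piece explains, is writing those bases cheaply and reliably.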

That’s where scientists like Park come in. He and the other co-founders of Catalog, an MIT DNA storage spinoff emerging out of stealth on Tuesday, have come a long way since encoding their first poetic kilobyte by hand a year and a half ago. Now they’re building a machine that will write one terabyte of data a day, using 500 trillion molecules of DNA. They plan to launch industrial scale storage services for IT companies, the entertainment industry, and the federal government within the next few years—joining several much larger tech companies like Microsoft, Intel, and Micron that are funding their own DNA storage projects.

If successful, DNA storage could be the answer to a uniquely 21st century problem: information overload. Five years ago humans had produced 4.4 zettabytes of data; that is set to explode to 160 zettabytes (each year!) by 2025. Current infrastructure can only handle a fraction of the coming data deluge, which is expected to consume all the world’s microchip-grade silicon by 2040.

Most digital archives—from music to satellite images to research files—are currently saved on magnetic tape. Tape is cheap. But it takes up space. And it has to be replaced about every 10 years. “Today’s technology is already close to the physical limits of scaling,” says Victor Zhirnov, chief scientist of the Semiconductor Research Corporation. “DNA has information storage density several orders of magnitude higher than any other known storage technology.”

How dense exactly? Imagine formatting every movie ever made into DNA; it would be smaller than the size of a sugar cube. And it would last for 10,000 years.

The trouble, of course, is cost. Sequencing—or reading—DNA has gotten far less expensive in the last few years. But the economics of writing DNA remain problematic if it’s going to become a standard archiving technology. DNA synthesis companies like Twist Bioscience charge between 7 and 9 cents per base, which means a single minute of high-quality stereo sound could cost just under $100,000 to store.

Catalog thinks it can rewrite those cost curves by decoupling the process of writing DNA from the process of encoding it. Traditional methods map the sequence of bits—zeros and ones—onto a sequence of DNA’s four bases. In 2016, when Microsoft set a record storing 200 megabytes of data in nucleotide strands, the company used 13,448,372 unique pieces of DNA. What Catalog does, instead, is cheaply generate large quantities of just a few different DNA molecules, each one no more than 30 base pairs long. Then it uses billions of enzymatic reactions to encode information into the recombination patterns of those prefab bits of DNA. Instead of mapping one bit to one base pair, bits are arranged in multidimensional matrices, and sets of molecules represent their locations in each matrix.

“If you think of information as a book, you can record that information by copying it down by hand,” says Park. But instead of transcribing letter for letter, Catalog is instead creating a printing press, where each typeface is represented by a small molecule of DNA. “By rearranging these premade molecules in different ways we can organize all the different words into the original order of the book.”
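
A rough sketch of that printing-press idea in code: instead of synthesizing a fresh strand for every message, you keep a small library of premade component sequences, and the combination you pick (one part naming a position, plus parts for whichever bits are set) carries the information. Every sequence and name below is invented for illustration; Catalog's actual chemistry and addressing scheme are not described in this article.

```python
# Toy illustration of the "library" approach: represent data by which premade
# component sequences are combined, rather than by synthesizing a unique
# strand per message. All sequences and names here are hypothetical.

ADDRESS_PARTS = ["AATTCCGG", "GGCCAATT", "TTGGAACC", "CCAAGGTT"]  # hypothetical
BIT_PARTS     = ["ACACACAC", "GTGTGTGT", "CAGTCAGT", "TGCATGCA"]  # hypothetical

def encode_word(position: int, nibble: int) -> set:
    """Represent a 4-bit value at a given position as a set of premade parts."""
    parts = {ADDRESS_PARTS[position]}            # one part names the location
    for bit in range(4):
        if nibble & (1 << bit):                  # one part per set bit
            parts.add(BIT_PARTS[bit])
    return parts

def decode_word(parts: set) -> tuple:
    position = next(i for i, p in enumerate(ADDRESS_PARTS) if p in parts)
    nibble = sum(1 << bit for bit, p in enumerate(BIT_PARTS) if p in parts)
    return position, nibble

pool = encode_word(position=2, nibble=0b1011)
print(sorted(pool))                       # a mix of reusable, premade sequences
assert decode_word(pool) == (2, 0b1011)   # the combination alone recovers the data
```

The economics follow from the library being fixed: encoding new data means mixing existing molecules rather than paying per-base synthesis costs again, which is the point the company's cost argument below turns on.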

Devin Leake, who recently left his role as head of DNA synthesis at Ginkgo Bioworks to be Catalog’s chief science officer, says this approach should bring the company’s costs close to those of tape storage within a few years, once it scales up automation. Zhirnov says that might be feasible with Catalog’s “library approach,” because the company won’t have to synthesize new DNA for every new piece of stored information; it can just remix its prefabricated DNA molecules instead.

If it achieves those economies of scale, Catalog could move beyond what most people have identified as early applications of the technology, namely storing data that needs to be archived for legal or regulatory reasons—like rarely-accessed surveillance video, medical records, or historical government documents. According to Leake and Park, the company will start commercial pilots early next year, focusing on intelligence and space agencies within the federal government as well as the IT sector and Hollywood.

Molecular data storage has become something of a pet project for the Defense Advanced Research Projects Agency. Last year it dropped $15.3 million in grants to discover new biochemical ways to store binary. And big tech companies have begun piloting their own projects as well. Microsoft plans to have an operational prototype storage system based on DNA working inside one of its data centers by 2020.

According to Doug Carmean, a partner architect at Microsoft Research, it will initially be offered to “boutique” customers, with data needs in the gigabyte to petabyte range. The long-term goal, though, is much more ambitious. “We’re going after totally replacing tape drives as an archival storage,” says Carmean. By drafting off the enormous waves of interest in consumer genetics and synthetic biology, he thinks that could actually happen sooner rather than later. “As people get better access to their own DNA, why not also give them the ability to read any kind of data written in DNA?” Data storage just might be a modern-day problem looking for a 3.8 billion-year-old solution.


How Social Networks Set the Limits of What We Can Say Online

Content moderation is hard. This should be obvious, but it’s easily forgotten. It is resource intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes. We as a society are partly to blame for having put platforms in this untenable situation. We sometimes decry the intrusions of moderators, and sometimes decry their absence.

Even so, we have handed to private companies the power to set and enforce the boundaries of appropriate public speech. That is an enormous cultural power to be held by so few, and it is largely wielded behind closed doors, making it difficult for outsiders to inspect or challenge. Platforms frequently, and conspicuously, fail to live up to our expectations—in fact, given the enormity of the undertaking, most platforms’ own definition of success includes failing users on a regular basis.

Adapted from Custodians of the Internet by Tarleton Gillespie.

The social media companies that have profited most have done so by selling back to us the promises of the web and participatory culture. But those promises have begun to sour. While we cannot hold platforms responsible for the fact that some people want to post pornography, or mislead, or be hateful to others, we are now painfully aware of the ways in which platforms invite, facilitate, amplify, and exacerbate those tendencies.

For more than a decade, social media platforms have portrayed themselves as mere conduits, obscuring and disavowing their active role in content moderation. But the platforms are now in a new position of responsibility—not only to individual users, but to the public more broadly. As their impact on public life has become more obvious and more complicated, these companies are grappling with how best to be stewards of public culture, a responsibility that was not evident to them—or us—at the start.

For all of these reasons, we need to rethink how content moderation is done, and what we expect of it. And this begins by reforming Section 230 of the Communications Decency Act—a law that gave Silicon Valley an enormous gift, but asked for nothing in return.

The Offer of Safe Harbor

The logic of content moderation, and the robust protections offered to intermediaries by US law, made sense in the context of the early ideals of the open web, fueled by naïve optimism, a pervasive faith in technology, and entrepreneurial zeal. Ironically, these protections were wrapped up in the first wave of public concern over what the web had to offer.

The CDA, approved in 1996, was Congress’s first response to online pornography. Much of the law would be deemed unconstitutional by the Supreme Court less than a year later. But one amendment survived. Designed to shield internet service providers from liability for defamation by their users, Section 230 carved out a safe harbor for ISPs, search engines, and “interactive computer service providers”: so long as they only provided access to the internet or conveyed information, they could not be held liable for the content of that speech.

About the Author

Tarleton Gillespie is a principal researcher at Microsoft Research, an affiliated associate professor at Cornell University, and the author of Wired Shut: Copyright and the Shape of Digital Culture.

The safe harbor offered by Section 230 has two parts. The first shields intermediaries from liability for anything their users say; intermediaries that merely provide access to the internet or other network services are not considered “publishers” of their users’ content in the legal sense. Like the telephone company, intermediaries do not need to police what their users say and do. The second, less familiar part adds a twist. If an intermediary does police what its users say or do, it does not lose its safe harbor protection. In other words, choosing to delete some content does not suddenly turn the intermediary into a “publisher.” Intermediaries that choose to moderate in good faith are no more liable for moderating content than if they had simply turned a blind eye to it. These competing impulses—allowing intermediaries to stay out of the way and encouraging them to intervene—continue to shape the way we think about the role and responsibility of all internet intermediaries, including how we regulate social media.

From a policy standpoint, broad and unconditional safe harbors are advantageous for internet intermediaries. Section 230 provided ISPs and search engines with the framework on which they have depended for the past two decades: intervening on the terms they choose, while proclaiming their neutrality to avoid obligations they prefer not to meet.


It is worth noting that Section 230 was not designed with social media platforms in mind, though platforms claim its protections. When Section 230 was being crafted, few such platforms existed. US lawmakers were regulating a web largely populated by ISPs and amateur web “publishers”—personal pages, companies with stand-alone websites, and online discussion communities. ISPs provided access to the network; the only content intermediaries at the time were “portals” like AOL and Prodigy, the earliest search engines like AltaVista and Yahoo, and operators of BBSes, chatrooms, and newsgroups. Blogging was in its infancy, well before the invention of large-scale blog-hosting services like Blogspot and WordPress. Craigslist, eBay, and Match.com were less than a year old. The ability to comment on a web page had not yet been simplified as a plug-in. The law predates not just Facebook but also MySpace, Friendster, and LiveJournal. It even predates Google.

Section 230 does shield what it then awkwardly called “access software providers,” early sites that hosted content provided by users. But contemporary social media platforms profoundly exceed that description. While it might capture YouTube’s ability to host, sort, and queue up user-submitted videos, it is an ill fit for YouTube’s ContentID techniques for identifying and monetizing copyrighted material. While it may approximate some of Facebook’s more basic features, it certainly didn’t anticipate the intricacy of the News Feed algorithm.

The World Has Turned

Social media platforms are eager to retain the safe harbor protections enshrined in Section 230. But a slow reconsideration of platform responsibility is underway. Public and policy concerns around illicit content, initially focused on sexually explicit and graphically violent images, have expanded to include hate speech, self-harm, propaganda, and extremism; platforms have to deal with the enormous problem of users targeting other users, including misogynistic, racist, and homophobic attacks, trolling, harassment, and threats of violence.

In the US, growing concerns about extremist content, harassment and cyberbullying, and the distribution of nonconsensual pornography (commonly known as “revenge porn”) have tested this commitment to Section 230. Many users, particularly women and racial minorities, are so fed up with the toxic culture of harassment and abuse that they believe platforms should be obligated to intervene. In early 2016, the Obama administration urged US tech companies to develop new strategies for identifying extremist content, either to remove it or to report it to national security authorities. The controversial “Allow States and Victims To Fight Online Sex Trafficking Act” (FOSTA), signed into law in April, penalizes sites that allow advertising that facilitates sex trafficking cloaked as escort services. These calls to hold platforms liable for specific kinds of abhorrent content or behavior are undercutting the once-sturdy safe harbor principle of Section 230.

These hesitations are growing in every corner of the world, particularly around terrorism and hate speech. As ISIS and other extremist groups turn to social media to spread fear with shocking images of violence, Western governments have pressured social media companies to crack down on terrorist organizations. In 2016, European lawmakers persuaded the four largest tech companies to commit to a “code of conduct” regarding hate speech, promising to develop more rigorous review and to respond to takedown requests within 24 hours. Most recently, the European Commission delivered expanded (non-binding) guidelines requiring social media platforms to be prepared to remove terrorist and illegal content within one hour of notification.

Neither Conduit nor Content

Even in the face of longstanding and growing recognition of such problems, the logic underlying Section 230 persists. The promise made by social media platforms—of openness, neutrality, meritocracy, and community—remains powerful and seductive, resonating deeply with the ideals of network culture and a truly democratic information society. But as social media platforms multiply in form and purpose, become more central to how and where users encounter one another online, and involve themselves in the circulation not just of words and images but of goods, money, services, and labor, the safe harbor afforded them seems more and more problematic.

Social media platforms are intermediaries, in the sense that they mediate between users who speak and users who might want to hear them. This makes them similar not only to search engines and ISPs but also to traditional media and telecommunications companies. Media of all kinds face some sort of regulatory framework to oversee how they mediate between producers and audiences, speakers and listeners, the individual and the collective.


Social media violate the century-old distinction embedded in how we think about media and communication. Social media platforms promise to connect users person to person, “conduits” entrusted with messages to be delivered to a select audience (one person, or a friend list, or all users who might want to find it). But as a part of their service, these platforms not only host that content, they organize it, make it searchable, and often algorithmically select some of it to deliver as front-page offerings, newsfeeds, trends, subscribed channels, or personalized recommendations. In a way, those choices are the product, meant to draw in users and keep them on the platform, paid for with attention to advertising and ever more personal data.

The moment that social media platforms added ways to tag or sort or search or categorize what users posted, personalized content, or indicated what was trending or popular or featured—the moment they did anything other than list users’ contributions in reverse chronological order—they moved from delivering content for the person posting it to packaging it for the person accessing it. This makes them distinctly neither conduit nor content, not only network nor only media, but a hybrid not anticipated by current law.

It is not surprising that users mistakenly expect them to be one or the other, and are taken aback when they find they are something altogether different. Social media platforms have been complicit in this confusion, as they often present themselves as trusted information conduits, and have been oblique about the way they shape our contributions into their offerings. And as law scholar Frank Pasquale has noted, “policymakers could refuse to allow intermediaries to have it both ways, forcing them to assume the rights and responsibilities of content or conduit. Such a development would be fairer than current trends, which allow many intermediaries to enjoy the rights of each and responsibilities of neither.”

Reforming Section 230

There are many who, even now, strongly defend Section 230. The “permissionless innovation” it provides arguably made the development of the web, and contemporary Silicon Valley, possible; some see it as essential for that to continue. As legal scholar David Post remarked, “No other sentence in the US Code… has been responsible for the creation of more value than that one.” But among defenders of Section 230, there is a tendency to paint even the smallest reconsideration as if it would lead to the shuttering of the internet, the end of digital culture, and the collapse of the sharing economy. Without Section 230 in place, some say, the risk of liability will drive platforms either to remove everything that seems the slightest bit risky, or to turn a blind eye. Entrepreneurs will shy away from investing in new platform services because the legal risk would appear too costly.

I am sympathetic to this argument. But the typical defense of Section 230, even in the face of compelling concerns like harassment and terrorism, tends to adopt an all-or-nothing rhetoric. It’s absurd to suggest that there’s no room between complete legal immunity offered by a robust Section 230 without exception, and total liability for platforms as Section 230 crumbles away.

It’s time that we address a missed opportunity when Section 230 was drafted. Safe harbor, including the right to moderate in good faith and the freedom not to moderate at all, was an enormous gift to the young internet industry. Historically, gifts of this enormity were fitted with a matching obligation to serve the public in some way: the monopoly granted to the telephone company came with the obligation to serve all users; broadcasting licenses have at times been fitted with obligations to provide news, weather alerts, and educational programming.

The gift of safe harbor could finally be fitted with public obligations—not external standards for what to remove, but parameters for how moderation should be conducted fairly, publicly, and humanely. Such matching obligations might include:

  • Transparency obligations: Platforms could be required to report data on the process of moderation to the public or to a regulatory agency. Several major platforms voluntarily report takedown requests, but these typically focus on government requests. Until recently, none systematically reported data on flagging, policy changes, or removals made of their own accord. Facebook and YouTube began to do so this year, and should be encouraged to continue.

  • Minimum standards for moderation: Without requiring that moderation be handled in a particular way, minimum standards for the worst content, minimum response times, or obligatory mechanisms for redress or appeal could help establish a base level of responsibility and parity across platforms.

  • Shared best practices: A regulatory agency could provide a means for platforms to share best practices in content moderation, without raising antitrust concerns. Outside experts could be enlisted to develop best practices in consultation with industry representatives.

  • Public ombudsman: Most major platforms address the public through their corporate blogs, when announcing policy changes or responding to public controversies. But this is on their own initiative and offers little room for public response. Each platform could be required to have a public ombudsman who both responds to public concerns and translates those concerns to policy managers internally; or a single “social media council” could field public complaints and demand accountability from the platforms.

  • Financial support for organizations and digital literacy programs: Major platforms like Twitter have leaned on non-profit organizations to advise and even handle some moderation, as well as to mitigate the socio-emotional costs of the harms some users encounter. Digital-literacy programs help users navigate online harassment, hate speech, and misinformation. Enjoying the safe harbor protections of Section 230 might require that platforms help fund these non-profit efforts.

  • An expert advisory panel: Without assuming regulatory oversight by a government body, a blue-ribbon panel of regulators, experts, academics, and activists could be given access to platforms and their data to oversee content moderation, without revealing platforms’ inner workings to the public.

  • Advisory oversight from regulators: A government regulatory agency could consult on and review the content moderation procedures at major platforms. By focusing on procedures, such oversight could avoid the appearance of imposing a political viewpoint; the review would focus on the more systemic problems of content moderation.

  • Labor protections for moderators: Content moderation at large platforms depends on crowdworkers, either internal to the company or contracted through third-party temporary services. Guidelines could ensure that these workers receive basic labor protections like health insurance, safeguards against employer exploitation, and greater care for the psychological harm the work can involve.

  • Obligation to share moderation data with qualified researchers: The safe harbor privilege could come with an obligation to set up reasonable mechanisms for qualified academics to access platform moderation data, so they might investigate questions the platform might not think to or want to answer. The new partnership between Facebook and the Social Science Research Council has yet to work out details, but some version of this model could be extended to all platforms.

  • Data portability: Social media platforms have resisted making users’ profiles and preferences interoperable across platforms. But moderation data like blocked users and flagged content could be made portable so it could be applied across multiple platforms.

  • Audits: Without requiring complete transparency in the moderation process, platforms could build in mechanisms for researchers, journalists, and even users to conduct their own audits of the moderation process, to understand better the rules in practice.

  • Regular legislative review: The Digital Millennium Copyright Act stipulated that the Librarian of Congress revisit the law’s exceptions every three years, to account for changing technologies and emergent needs. Section 230, and whatever matching obligations might be fitted to it, could similarly be reexamined to account for the changing workings of social media platforms and the even more rapidly changing nature of harassment, hate, misinformation, and other harms.

We desperately need a thorough, public discussion about the social responsibility of platforms. This conversation has begun, but too often it is hamstrung between the defenders of Section 230 and those concerned by the harms it may shield. Until the law is rethought, social media platforms will continue to enjoy the right but not the responsibility to police their sites as they see fit.

This essay is excerpted from Custodians of the Internet by Tarleton Gillespie, published by Yale University Press and used by permission. Copyright © 2018 by Tarleton Gillespie.



Aclima sucks in $24M to scale its air quality mapping platform

Aclima, a San Francisco-based company which builds Internet-connected air quality sensors and runs a software platform to analyze the extracted intel, has closed a $24 million Series A to grow the business, including by expanding its headcount and securing more fleet partnerships to build out the reach and depth of its pollution maps.

The Series A is led by Social Capital which is joining the board. Also participating in the round: The Schmidt Family Foundation, Emerson Collective, Radicle Impact, Rethink Impact, Plum Alley, Kapor Capital and First Philippine Holdings.

Three years ago Aclima came out of stealth, detailing a collaboration with Google on mapping air quality in its offices and also outdoors, by putting sensors on StreetView cars.

The company has actually been working on the core problem of environmental sensing and intelligence for about a decade at this point, according to co-founder Davida Herzl.

“What we’ve really been doing over the course of the last few years is solving the really difficult technical challenges in generating this kind of data. Which is a revolution of air quality and climate change emissions data that hasn’t existed before,” she tells TechCrunch.

“Last year we announced the results of our state-wide demonstration project in California where we mapped the Bay Area, the Central Valley, Los Angeles. And really demonstrated the power of the data to drive new science, decision making across the private and public sector.”

Also last year it published a study in collaboration with the University of Texas showing that pollution is hyperlocal — thereby supporting its thesis that effective air quality mapping requires dense networks of sensors if you’re going to truly reflect the variable reality on the ground.
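
To make the hyperlocal point concrete, here is a minimal sketch of the kind of aggregation such a map implies: bin repeated mobile readings into short street segments and summarize each segment separately, so a clean block and a polluted block on the same street stay distinguishable. This is an illustration only, not Aclima's actual pipeline; the segment length, readings, and pollutant values are invented.

```python
# Minimal sketch (not Aclima's actual pipeline): bin mobile sensor readings
# into ~30 m segments along a street and take the median per segment, so
# hyperlocal differences survive instead of being averaged away citywide.
from collections import defaultdict
from statistics import median

SEGMENT_METERS = 30  # hypothetical segment length

# (distance_along_street_m, pm25_ug_m3) pairs from repeated drive-bys (invented values)
readings = [
    (5, 8.0), (12, 9.5), (18, 7.8),       # quiet end of the street
    (210, 41.0), (222, 38.5), (236, 44.2) # end near a loading dock
]

bins = defaultdict(list)
for meters, pm25 in readings:
    bins[meters // SEGMENT_METERS].append(pm25)

for segment, values in sorted(bins.items()):
    start = segment * SEGMENT_METERS
    print(f"{start}-{start + SEGMENT_METERS} m: median PM2.5 = {median(values):.1f} ug/m3")
```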

“You can have the best air quality and the worst air quality on the same street,” says Herzl. “And that really gives us a new view — a new understanding of emissions but actually demonstrated the need for hyperlocal measurement to protect human health but also to manage those emissions.

“That data set has been applied across a variety of scientific research including studies that really showed the linkages between hyperlocal data and cardiovascular risk. In LA our black carbon data was used to support increased filtration in schools to protect school children.”

“Our technology is really a proof point for emerging and new legislation in California that’s going to require community based monitoring across the entire state,” she adds. “So all of that work in California has really demonstrated the power of our platform — and that has really set us up to scale, and the funding round is going to enable us to take this to a lot more cities and regions and users.”

Asked about potential international expansion — given the presence of strategic investors from southeast Asia backing the round — Herzl says Aclima has had a “global view” for the business from the beginning, even while much of its early work has focused on California, adding: “We definitely have global ambitions and we will be making more announcements about that soon.”

Its strategy for growing the reach and depth of its air quality maps is focused on increasing its partnerships with fleets — so there’s a slight irony there, given that the vehicles being repurposed as air quality sensing nodes might themselves be contributing to the problem (Herzl sidestepped a question about whether Uber might be an interesting fleet partner, given the company’s current attempts to reinvent itself as a socially responsible corporate citizen — including encouraging its drivers to go electric).

“Our mapping capabilities are amplified through our partnerships with fleets,” she says, pointing to Google’s StreetView cars as one current example (though this is not an exclusive partnership arrangement; a London air quality mapping project involving StreetView cars which was announced earlier this month is using hardware from a rival UK air quality sensor company, called Air Monitors, for example).

But flush with fresh Series A funding Aclima will be working on getting its kit on board more fleets — relying on third parties to build out the utility of its software platform for policymakers and communities.

“There’s a number of fleets that we are going to be speaking about our partnerships with but our platform can be integrated with any fleet type and we believe that is an incredible advantage and position for the company in really achieving our vision of creating a global platform for environmental intelligence to help cities and entire countries really manage climate risk at a scale that really hasn’t been possible before,” she adds.

“Our technology provides 100,000x greater spatial resolution than existing approaches and we do it at 100-1,000x cost reduction so our vision is to be the GPS of the environment — a new layer of environmental awareness and intelligence that really informs day-to-day decisions.

“We’re really excited because it’s taken really years of work. I incorporated Aclima 10 years ago and started really working on the technology around 2010. So this has taken… a tremendous amount of technical development and scientific rigor with partners… to really have the technology at a place where it’s really set up to scale.”

It finances (or part finances) the deployment of its sensors on the vehicles of fleet partners — with Aclima’s business model focused on monetizing the interpretation of the data provided by its SaaS platform. So a chunk of the Series A will be going to help pay for more sensor rollouts.

In terms of what fleet partners get back from letting their vehicles become mobile air quality sensing nodes, Herzl says it’s dependent on the partner. And Aclima isn’t naming any additional names on that front yet.

“It’s specific to each fleet. But I can say that in the case of Google we’re working with Google Earth outreach and the team at StreetView… to really reflect their commitment to sustainability but also to expand access to this kind of information,” she says of the perks for fleets, adding: “We’ll be talking more about that as we make announcement about our other partners.”

The Series A financing will also go on funding continued product development, with Aclima hoping to keep adding to the tally of pollutants it can identify and map — building on a list which includes the likes of CO2, methane and particulate matter.

“We have a very ambitious roadmap. And our roadmap is expansive — ultimately our vision is to make the invisible visible, across all of the pollutants and factors in the invisible layer of air that supports life. We want to make all of that visible — that’s our long term vision,” she says.

“Today we’re measuring all of the core gaseous pollutants that are regulated as well as the core climate change gases… We are not only deploying and expanding our platform’s availability but in our R&D efforts investing in next generation sensing technologies, whether it’s the tiniest PM2.5 sensor in the world to on our roadmap really having the ability to speciate VOC [volatile organic compounds].

“We can’t do that today but are working on it and that is an area that is really important for specific communities but for industry and for policy makers as well.”

A key part of its ongoing engineering work is focused on shrinking certain sensing technologies — both in size and cost — as that’s the key to the sought-after ubiquity, says Herzl.

“There’s a lot of hard work happening there to shrink [sensors],” she notes. “We’re talking about sensors that are the size of a thumb tack. Traditional technologies for this are very large, very difficult to deploy… so it’s not that capabilities don’t exist today but we’re working on shrinking those capabilities down into really, really tiny components so that we can achieve ubiquity… You have to shrink down the size but also reduce the cost so that you can deploy thousands, millions of these things.”

“Of all the hard, global problems that remain unsolved, the sustainability of our planet is the most consequential,” said Jay Zaveri, partner at Social Capital. “Aclima has an unparalleled technology platform for air quality that addresses public health concerns, industrial safety and an improved quality of life for urban citizens. We’re thrilled to support Davida and partner with the Aclima team to take on air pollution worldwide.”

Apple CEO Tim Cook explains why he spoke out about immigration

SAN FRANCISCO — Apple CEO Tim Cook elaborated on earlier remarks critical of the Trump administration’s “zero tolerance” immigration policy, noting that many Apple employees likely went through a situation similar to that experienced by families trying to cross the U.S.-Mexico border.

“We have a lot of immigrants that work at Apple,” Cook said at the Fortune CEO Initiative in San Francisco Monday. More than 300 are part of DACA (Deferred Action for Childhood Arrivals), he said. “I want to stand up for them.”

Last week, on a trip to Ireland, Cook told the Irish Times that the separation of children from their families at the U.S./Mexico border was “inhumane” and that Apple would be working with the government to be a “constructive voice” on the issue.

He was among a handful of tech executives voicing concerns about the Trump administration policy that’s separated over 2,000 children from parents or guardians detained for crossing the border illegally. The response from the broader business community has been largely channeled through trade groups as company executives attempt to navigate a highly charged political climate.


Speaking to the conference of business executives, which was live-streamed, Cook said CEOs should speak out when they see something that isn’t consistent with their company’s values. “Think about if you don’t, then you’re in the ‘appalling silence of the good people’ category,” he said. “This is something I’ve never wanted to be a part of.”

“Ultimately that is what human rights is all about,” he said. “It’s about treating people with dignity and respect, at the end of the day.”


The New Satellite Arms Race Threatening to Explode in Space

Shelton, who retired in 2014 after 38 years in the Air Force, lives not far from 2Sops in Colorado; these days he chairs an educational and advocacy nonprofit called the Space Foundation. He still expends a lot of energy worrying about what is happening in the heavens. “We as a nation have been too slow to respond to this threat,” he says. He’s particularly troubled by the failure of the US to procure new space systems. Some GPS satellites are older than the people running them. “Our systems are archaic,” Shelton says. “Because space assets are so expensive, we deploy ‘just enough’; there’s no backup or excess capability.” (The Air Force noted that the GPS constellation consists of more than 30 satellites, which provides some redundancy.)

China, by contrast, is investing heavily in its space program, seeing it as a symbol of its growing prominence. As soon as this year, it could land a craft on the never-before-touched far side of the moon. And China’s global navigation satellite system, known as BeiDou, has some capabilities that outmatch even the United States’ GPS. In 2015, China created a new space-focused military service, known as the People’s Liberation Army Strategic Support Force. Meanwhile, the US relies entirely on Russian rockets to get its astronauts to the Space Station (although NASA has awarded contracts to Boeing and SpaceX to fix that). As Cheng says, “Today China is one of two countries that can put a person into space—and the other country isn’t the United States.”

Many of America’s space warriors, as they call themselves, share Shelton’s sense that the US isn’t responding nearly quickly enough to the threat of orbital war. “We needed to be marching faster,” says Deborah Lee James, who served as President Obama’s secretary of the Air Force. “Why aren’t there more space and cyber officers at the top of the Air Force?”

Deadly Debris

In orbit, trash becomes shrapnel. When objects in space collide—whether by accident or because, say, someone down on Earth has decided to launch a missile at a satellite—it sometimes creates a hail of smaller fragments that fan out across Earth’s orbit. It’s already getting difficult to operate satellites and conduct launches amid all the junk zipping around up there. That’s why, around the world, scientists and engineers are devising ways to pull space junk out of orbit. In April, a SpaceX rocket carried a collection of experimental debris-removal technologies to the International Space Station. During its time in orbit, the satellite will test out nets, harpoons, and drag sails designed to reduce detritus.

— Saraswati Rathod


  • 20,000: pieces of space debris larger than a softball
  • 500,000: pieces of debris the size of a marble or larger
  • 4,300: satellites in space
  • 72: percent of satellites that are nonfunctioning
  • $1.4 billion: cost of degradation to commercial satellites caused by debris
  • 2,000: trackable fragments created by the last major satellite collision, in 2009
  • 160 million: estimated pieces of space junk too small to be tracked

Sources: European Space Agency; NASA; Aerospace Corporation

Addressing these issues, as James’ question suggests, is not just about throwing money at the space-industrial complex. It involves organizational changes too. The Air Force is building what it calls the nation’s first Space Mission Force, made up of airmen trained to respond to the demands of an orbital war. On the same base as the 2Sops command center, the military has established the National Space Defense Center, which puts representatives from various military and intelligence offices focused on space under a single roof. And the defense authorization bill is full of upgrades to the Air Force’s space-fighting capabilities, including the creation of an additional Air Force unit responsible for space warfighting operations.

Not content to tinker with the Air Force, a growing number of people in Washington—including the commander in chief—have come to favor creating an entirely new military branch dedicated to space operations. In May, during a ceremony honoring West Point’s football team, President Trump told his audience, “We’re getting very big in space, both militarily and for other reasons, and we are seriously thinking of the Space Force.” The comment sounded to many listeners like yet another oddball Trumpian tangent.

But then, after reportedly meeting resistance from the Air Force, Trump escalated. At a mid-June meeting of the newly constituted US Space Council, he announced—much to the surprise of his own advisors and the military itself—that he was ordering the Pentagon to move forward. As he said, “I’m hereby directing the Department of Defense and Pentagon to immediately begin the process necessary to establish a Space Force as the sixth branch of the Armed Forces. That’s a big statement. We are going to have the Air Force and we are going to have the Space Force—separate but equal. It’s going to be something.”

The Space Force is, of course, not a fait accompli. Any military reorganization has to be approved by Congress—which is not necessarily an easy path. (Last year, a bill that included the creation of just such a new branch of the military passed the US House of Representatives, but that provision was taken out of the Senate version.) And the establishment of a new branch of the military involves a vast set of logistical and structural questions.

Yet Trump’s push may speed up a natural evolution toward an independent space branch by years, if not a decade. Space, the president said, was “going to be important monetarily and militarily. We don’t want China and Russia and other countries leading us. We’ve always led.”

But where—and to what—are we leading? Part of the challenge in figuring out how to think about space conflict is the sheer complexity of the orbital environment—an arena that has long belonged to nation-states, but that is increasingly becoming a domain of commerce and tourism. How do countries protect their interests up above—and down here? Right now, countries appear to be racing to build their military capabilities—but an arms race isn’t the only answer.

The last time an arms race appeared poised to overtake space, the world’s superpowers banded together to sign the 1967 Outer Space Treaty, which banned weapons of mass destruction in space and held that “the moon and other celestial bodies” should be reserved for peaceful purposes. The Outer Space Treaty is still in force, but it is by now full of holes. Legal scholars had a hard time proving that China’s 2007 anti-satellite test, for instance, violated the agreement. That’s because the missile that China fired was not technically addressed in the 50-year-old treaty.

Part of what makes space such volatile terrain right now is that it’s hard even to apply the existing laws of war to it. No country can claim sovereignty in orbit, and it’s impossible to occupy territory there. So what counts as an act of territorial aggression? What qualifies as a proportional response? It’s even difficult to say, with certainty, what the physics of war in space will look like. We don’t yet fully understand, for instance, how a kinetic attack on a satellite constellation might spill over into a spiraling Kessler effect.

Humans have “millennia of experience in blowing up things on land,” says Laurie Blank, a law professor at Emory University and a specialist in the laws of armed conflict. “We’re still learning the consequences of all these things in space.”

Blank recently joined together with an international team of legal experts to create what they’re calling the Woomera Manual on the International Law of Military Space Operations—a kind of rule book for celestial international conflict, one that will endeavor to translate the laws of terrestrial war for space. It’s a daunting task, and the resulting document will be nonbinding. But, Blank says, it’s a necessary first step for anyone who would seek to contain a conflict that has, in some senses, already begun.


Garrett M. Graff (@vermontgmg) is a WIRED contributing editor. He wrote about US special counsel Robert Mueller’s combat experience during the Vietnam War for issue 26.06.




The Real Reason You Use Closed Captions for Everything Now

In this moment, there is only one thing I wish to know, and those are the words coming out of Sylvester Stallone’s mouth—if indeed they are words. I’m watching Guardians of the Galaxy Vol. 2. Incomprehensibly, Stallone has a small part in it, speaking, as he often does, incomprehensibly. But, gosh, he looks very important. Therefore he must be saying something important. Probably the whole of this film depends on it.

So I rewind Netflix, one of life’s more torturous little rituals. Then I squeeze my eyes shut—the better, I believe, to open my ears. Don’t anyone move, I mind-command the empty room. When Stallone speaks again, I’m prepared, my breath held tight. This is what I hear: “In Santo which is warmer but I ain’t got married and I said let me oh I know the girl.”

Goddammit.

Stallone’s a special kind of mumbler, obviously. But this is not some rando-Rambo exception. I find myself rewinding constantly in the modern era, straining to hear. Auditory breakdowns repeat, loop, divide. Movies and TV are, it seems, simply harder to hear in general these days.

Part of it is relative: When you watch more TV, you miss more TV. This very second, in living rooms nationwide, innumerable couch-bound bingers are failing to synthesize a piece of dialog emanating from their new-age sound bars, and it pains them. Whether it’s Bernard in Westworld or Jon Snow in Game of Thrones, the lines are not cohering into meaningful English. “What did he say?”—already the most uttered (and annoying) question in the history of talking pictures—is by now a nightly interrogation, yanny/laurel times a million.

Some of it might be the happy result of ever-globalizing TV options. As the world shrinks, more people of every background are losing themselves, via the hottest new escapisms, in foreign dialects and cultures. Chewing Gum, the British comedy set on a council estate in East London, sparkles with slang that blows right past most Americans. Without the right context, we don’t hear it.

But that’s an issue of comprehension, of understanding. My concern here is more the failure of literal, physical hearing. (Bernard speaks very slowly in Westworld, yet I hear very little.) You sense it, don’t you? More “Huh?” in conversation, more “Say again?” and “Beg pardon?” What’s so frustrating at home, in front of the TV, is that actors won’t repeat themselves. The problem is more acute.

Maybe the problem is our ears. Maybe, jabbed and stuffed as they are with so much sleek contemporary accessory, they’re simply overburdened. Except mine, I dare say, are not. I protect them from the oontz-oontz of so-called music, along with any other unwelcome invasions; earbuds have been pressed into their softness maybe three times. (So pristine is my hearing, in fact, that I can count among my favorite sensory experiences the sound a semi-sautéed mushroom makes after it slips out of a French skillet and falls, by gravity’s good grace, to the kitchen floor. If the linoleum is just right and the room sensibly hushed, you’ll perceive a wet, perky slap—bpuhk!—as though some tiny winged creature with tinier hands has popped an interdimensional bubble. Hearing something so small enlarges your soul.)

Even aurally gifted as all that, however, I still find myself constantly asking of the television set: “Eh?”

Here’s what Stallone really says in Guardians 2: “After going around in circles with this woman I end up marrying. I said, ‘Aleta, I love you, girl.’” Of course, I only know that because I cheated. Clicked Menu, clicked Subtitles, clicked English CC. When I turn on those words, my body untenses. Not even the most inconsequential bit of throwaway dialog is safe from the rigorous, trustworthy pen of closed captioning. At last, I can hear everything.

Subtitles have been around since the early ’70s. (Julia Child was one of the first beneficiaries, her joyful warble rendered in sentences her audience of “servantless American cooks” could follow, both linguistically and culinarily, with ease.) Essential for deaf people and English language learners, and scientifically shown to promote reading comprehension and retention, subtitles have only recently become essential for many TV watchers, period. A smattering of online encomia tell you it’s the only way to watch. One Redditor asks in r/movies, “I like having subtitles with everything I watch. Anything wrong with this?” Almost everyone responds supportively, including this person: “I cannot fully enjoy any video without subtitles. At all.”

Many people I know IRL can relate, from bankers and meditators to jocks, UX designers, and writers. My anecdata turns up no gender preferences. Couples seem overrepresented, presumably because one influences the other. “Well, they insist on watching everything with subtitles,” one says of their partner. “But now I like doing it too.” Great, fine! But uh, why bother making excuses?

Because—there’s still something not quite right with the idea, is there? It doesn’t sit well, watching everything this way. Last year, Refinery29 ran a piece, “Get Over Your Fear Of Subtitles, Please,” in which the writer extols the benefits: you can appreciate the script, you know whose off-screen voice you’re hearing, you can chuckle at the poetic attempts by caption writers to convey background noises (“[bestial squall]”). To those others have added: you can watch at low volume, you can clean or eat or otherwise make a general ruckus while watching. Inside the screen, diegetic minutiae—passerby conversations, a snippet of a TV news story—take on new clarity, giving shape to the world of a story. The fuzziness solidifies, control overlaying chaos.

Thus the modern condition asserts itself. If there is something we can know, we do everything in our power to know it, regardless of our actual level of investment. When someone at the dinner table idly wonders, say, what Memorial Day memorializes, it’s a game of fastest Google-finger. Uncertainty causes gas; search is Tums. Now we can keep eating.

Except these are quick fixes. They provide only momentary relief. They also upset natural rhythms. The same is true of captions. They ruin anything dependent on timing, like jokes or moments of tension. (Imagine reading “Luke, I am your father” a half-second before hearing it.) We end up staring more at actors’ torsos than at their faces. As in life, we make less and less eye contact. Small bursts of text are how we comprehend the world now. We must see the printed words in order to believe them. Look, can you believe he said that? Yes, it’s right there!

Just as quickly, though, the words are gone, comprehensively forgotten. “After going around in circles with this woman I end up marrying. I said, ‘Aleta, I love you, girl.’” What even is that? None of that filler matters to the Guardians 2 plot (such as it is). Half of those words are spoken off-camera. In a very real way we were not meant to know them, merely to register their hum. But like Google, closed captions are there, eminently accessible, ready to clarify the unclarities, and so, desperately, we, the paranoids and obsessive-compulsives and postmodern completists, click.

No, subtitles are not the solution. They flatten our perception. Sounds are more muted these days because there are too many of them, every utterance equally weighted and demanding of us total comprehension. Look at the words themselves. All too often they are meaningless. Yet we painstakingly rewind Netflix anyway, backward, backward, backward, stuck in a garbled loop. Bpuhk, pop—get me out.



Traffic Doesn’t Hurt the Economy—But We Should Still Fix It

Behold the traffic-dammed and damned city: The very existence of gridlock would indicate that business is booming. But in the field of transportation planning, it’s well accepted that regions with persistent car congestion will lose economic steam. After all, congestion does things like slow down freight as well as stall commuters on their way to the places where they make or spend money.

The notion that congestion costs drivers money buttresses proposals to do everything from widening freeways to synchronizing traffic lights. But you’d expect these costs to manifest in region-wide, economy-leaking wounds. A new study, published last month in the aptly named journal Transportation, challenges this assumption.

By comparing historic traffic data against several economic markers, the authors found virtually no indication that gridlock stalled commerce. In fact, it looked like the economy had its own HOV lane. Region by region, GDP and jobs grew, even as traffic increased. This does not mean speed bumps should come standard on every new highway. Traffic still sucks, and things that suck should be fixed. What this study does is acknowledge that economically vibrant cities will always have congestion. So transportation planners should instead focus on ways to alleviate the misery rather than eliminate the existence of congestion.

Unfortunately, misery alone is difficult to quantify. Which is probably how some economist hit upon the idea of applying a cost-benefit analysis to sitting in traffic. The idea is fairly simple: Each driver’s time is worth some amount of money; that time is wasted if it is spent idling in a sea of taillights. One of the most public-facing cost-benefit estimates of car congestion comes from the transportation analytics firm Inrix. In 2017, the company estimated that the average US driver loses $1,642 a year sitting in traffic. The estimate varies by region. New Yorkers lose nearly $3,000 a year—can you imagine how many cartons of bootleg cigarettes you could buy with that? So you would expect to see that wasted time and money manifested as a slowdown in the economy.

The logic seems valid: Somebody forced to regularly wait in traffic might ask for a raise or take their talents to some other less-gridlocked city. The added cost of retaining and recruiting personnel might sway big companies to move operations. Car congestion also directly impacts commerce—for example, by delaying shipments. But here’s the important thing to consider: Are freight delays driving up the cost of living to untenable levels? Do demands from labor in congested cities actually force companies to take their business elsewhere? Does a region’s economy feel anything from all these ways congestion is supposed to cost drivers time and money?

That’s the thought that occurred to University of Colorado civil engineer Wes Marshall as he was reading one of those annual lists of the 10 most congested metropolitan areas in the US. Every year, the list contains the same shuffle of cities: Los Angeles, New York, Boston, Dallas, San Francisco, Atlanta—a who’s who of honking megalopoli. And wouldn’t you know, those same cities consistently rank highest for regional GDP.

So, he and coauthor Eric Dumbaugh began work on the study that they just published in Transportation. They started with data from the Texas Transportation Institute’s Urban Mobility Report, which has been tracking car congestion in 89 US cities for 30 years. They compared that with 11 years of overlapping numbers of both per capita GDP and job growth for each metropolitan area. They also had a fully overlapping data set of 30 years of per capita income.

Marshall acknowledges that no statistic can paint a perfect picture of reality, but he says he and his coauthor wrangled their analysis into coherence. Once they accounted for all the hanging chads, the overall trend was pretty clear: Traffic really didn’t do much to the economy. In fact, they found that if anything, places with higher car congestion seemed to have stronger economies. Specifically, per capita GDP and job growth both tracked upward as traffic wait times got worse.
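
As a rough illustration of the kind of comparison being described, the sketch below lines up per-commuter delay figures against per capita GDP growth and job growth for a handful of metros and computes their correlations. The column names and numbers are invented for illustration; they are not the Urban Mobility Report's schema or the study's actual data.

```python
# Illustrative only: made-up numbers showing how congestion and economic
# indicators can be compared metro by metro, as the study does at scale.
import pandas as pd

df = pd.DataFrame({
    "metro":                     ["A", "B", "C", "D", "E"],
    "delay_hours_per_commuter":  [62, 54, 35, 22, 12],    # annual hours of delay
    "gdp_per_capita_growth_pct": [2.9, 2.4, 2.1, 1.6, 1.1],
    "job_growth_pct":            [2.2, 1.9, 1.5, 1.2, 0.8],
})

# A positive correlation here would mirror the study's finding that more
# congested regions also tend to post stronger economic numbers.
print(df[["delay_hours_per_commuter",
          "gdp_per_capita_growth_pct",
          "job_growth_pct"]].corr())
```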

Marshall and his colleague aren’t the first to look into the citywide economic impacts of bad traffic. In 2013, Ryerson University transportation professor Matthias Sweet found that very high levels of car congestion did negatively impact the economy. In Urban Studies, Sweet used the same Texas Transportation Institute car-congestion data but weighed it against only job growth and productivity growth per worker, over more constrained time periods. He found that car congestion did appear to drag on a region’s job growth once it gets to around 35 to 37 hours a year per commuter. (That’s roughly 4.5 minutes of delay a day, ya babies.)

But Sweet doesn’t take any issue with Marshall’s findings. In fact, he says they complement his own: “This adds to what I would characterize as a growing body of work that questions the role of car congestion alleviation as an economic policy act.” He calls out another finding from his 2013 study, which is that before reaching the 4.5-minute per day per commuter threshold, car congestion seemed to indicate stronger economic activity. Even in places with absurd traffic delays—think Boston during the Big Dig—car congestion never kills a metro’s economy outright. “Regions appear to be fairly adaptive, and can grow even when car congestion levels are really high,” Sweet adds.

Which is not to say that everyone should buckle in and accept their daily crawl through purgatory. What Marshall is suggesting is that maybe time isn’t money—not when it comes to commuting, at least. Besides, if congestion seems to accompany a booming economy, he says planners should focus less on the costs and benefits of alleviating it. Instead, they could put their energies into improving the quality of the commute—for instance, by providing people options besides inevitably flooded freeway lanes.



Little Girl Who Misses Her Foster Kitten Gets The Best Surprise

It’s always been bittersweet for the family to part ways with their foster kittens, but this time Bela found it especially difficult to say goodbye to Helen. As they dropped the kittens off at the rescue facility, she began to weep silently over Helen, trying but failing to hide that fact from her mother. After all, she had known their time together would be brief.

“I saw her puddle of tears,” Pabon said. “She has not bonded like that to any of the others.”

After three days, Bela still just wasn’t herself, so her mother decided to ensure Helen found a home — by welcoming her back into theirs for good.

Here’s the emotional moment Bela learned she and Helen would be together forever.

Heroic Little Police Dog Performs CPR To ‘Save’ His Fallen Partner

The very talented little dog is the star of a recent video shared by the Municipal Police of Madrid, Spain. In it, Poncho’s human partner is seen simulating cardiac arrest during a training exhibition — prompting the sweet pup to show off how he’s been taught to “resuscitate” him.

“‘Heroic’ performance of our four-pawed companion, Poncho, who did not hesitate a moment in ‘saving the life’ of the agent, practicing CPR in a masterful way,” the police department wrote.

Here’s Poncho in action, complete with a tiny blue siren on his back for good measure:

WPA3 Wi-Fi Security Will Save You From Yourself

There are more Wi-Fi devices in active use around the world—roughly 9 billion—than there are human beings. That ubiquity makes protecting Wi-Fi from hackers one of the most important tasks in cybersecurity. Which is why the arrival of next-generation wireless security protocol WPA3 deserves your attention: Not only is it going to keep Wi-Fi connections safer, but it will also help save you from your own security shortcomings.

The Wi-Fi Alliance, a trade group that oversees WPA3, is releasing full details today, after announcing the broad outlines in January. Still, it’ll be some time before you can fully enjoy its benefits; the Wi-Fi Alliance doesn’t expect broad implementation until late 2019 at the earliest. In the course that WPA3 charts for Wi-Fi, though, security experts see critical, long-overdue improvements to a technology you use more than almost any other.

“If you ask virtually any security person, they’ll say don’t use Wi-Fi, or if you do, immediately throw a VPN connection on top of it,” says Bob Rudis, chief data officer at security firm Rapid7. “Now, Wi-Fi becomes something where we can say hey, if the place you’re going to uses WPA3 and your device uses WPA3, you can pretty much use Wi-Fi in that location.”

Password Protections

Start with how WPA3 will protect you at home. Specifically, it’ll mitigate the damage that might stem from your lazy passwords.

A fundamental weakness of WPA2, the current wireless security protocol that dates back to 2004, is that it lets hackers deploy a so-called offline dictionary attack to guess your password. An attacker can take as many shots as they want at guessing your credentials without being on the same network, cycling through the entire dictionary—and beyond—in relatively short order.


“Let’s say that I’m trying to communicate with somebody, and you want to be able to eavesdrop on what we’re saying. In an offline attack, you can either passively stand there and capture an exchange, or maybe interact with me once. And then you can leave, you can go somewhere else, you can spin up a bunch of cloud computing services and you can try a brute-force dictionary attack without ever interacting with me again, until you figure out my password,” says Kevin Robinson, a Wi-Fi Alliance executive.

This kind of attack does have limitations. “If you pick a password that’s 16 characters or 30 characters in length, there’s just no way, we’re just not going to crack it,” says Joshua Wright, a senior technical analyst with information security company Counter Hack. Chances are, though, you didn’t pick that kind of password. “The problem is really consumers who don’t know better, where their home password is their first initial and the name of their favorite car.”

If that sounds familiar, please change your password immediately. In the meantime, WPA3 will protect against dictionary attacks by implementing a new key exchange protocol. WPA2 used an imperfect four-way handshake between clients and access points to enable encrypted connections; it’s what was behind the notorious KRACK vulnerability that impacted basically every connected device. WPA3 will ditch that in favor of the more secure—and widely vetted—Simultaneous Authentication of Equals handshake.

There are plenty of technical differences, but the upshot for you is twofold. First, those dictionary attacks? They’re essentially done. “In this new scenario, every single time that you want to take a guess at the password, to try to get into the conversation, you have to interact with me,” says Robinson. “You get one guess each time.” Which means that even if you use your pet’s name as your Wi-Fi password, hackers will be much less likely to take the time to crack it.

The other benefit comes in the event that your password gets compromised nonetheless. With this new handshake, WPA3 supports forward secrecy, meaning that any traffic that came across your transom before an outsider gained access will remain encrypted. With WPA2, they can decrypt old traffic as well.
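
The property being described here, forward secrecy, comes from deriving each session's keys from fresh ephemeral key pairs rather than from the long-lived password alone. The sketch below illustrates that idea with a generic X25519 exchange; it is not the actual SAE handshake, just a demonstration that two sessions yield unrelated keys, so compromising the password later reveals nothing about traffic already exchanged.

```python
# Generic illustration of forward secrecy (not the actual SAE handshake):
# each session uses fresh ephemeral key pairs, so a later compromise of the
# Wi-Fi password cannot recover keys that were already used and discarded.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key() -> bytes:
    # Both sides generate throwaway key pairs for this session only.
    client, access_point = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    shared = client.exchange(access_point.public_key())
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo session").derive(shared)

# Two sessions, two unrelated keys; neither is derivable from the password.
print(session_key().hex())
print(session_key().hex())
```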

Safer Connections

When WPA2 came along in 2004, the Internet of Things had not yet become anything close to the all-consuming security horror that is its present-day hallmark. No wonder, then, that WPA2 offered no streamlined way to safely onboard these devices to an existing Wi-Fi network. And in fact, the predominant method by which that process happens today—Wi-Fi Protected Setup—has had known vulnerabilities since 2011. WPA3 provides a fix.

Wi-Fi Easy Connect, as the Wi-Fi Alliance calls it, makes it easier to get wireless devices that have no (or limited) screen or input mechanism onto your network. When enabled, you’ll simply use your smartphone to scan a QR code on your router, then scan a QR code on your printer or speaker or other IoT device, and you’re set—they’re securely connected. With the QR code method, you’re using public key-based encryption to onboard devices that currently largely lack a simple, secure method to do so.
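
As an illustration of the public-key idea behind that QR flow (and only an illustration; this is not the actual Wi-Fi Easy Connect message exchange), the sketch below has a headless device expose a public key, which a phone then uses to encrypt the network credentials so they never travel in the clear. All names and payloads are invented.

```python
# Illustrative only -- not the real Wi-Fi Easy Connect protocol. The general
# idea: the device's QR code carries a public key, and the phone encrypts the
# Wi-Fi credentials to that key before sending them over.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"onboarding demo").derive(shared)

# Device side: key pair baked in at manufacture; the public half is printed as a QR code.
device_key = X25519PrivateKey.generate()
qr_public_key = device_key.public_key()

# Phone side: ephemeral key pair, shared secret with the QR key, then encrypt the credentials.
phone_key = X25519PrivateKey.generate()
nonce = os.urandom(12)
credentials = b'{"ssid": "HomeNet", "psk": "hypothetical-passphrase"}'
blob = AESGCM(derive_key(phone_key.exchange(qr_public_key))).encrypt(nonce, credentials, None)

# Device side: the same derivation with the phone's public key recovers the credentials.
print(AESGCM(derive_key(device_key.exchange(phone_key.public_key()))).decrypt(nonce, blob, None))
```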

“Right now it’s really hard to deploy IoT things fairly securely. The reality is they have no screen, they have no display,” says Rudis. Wi-Fi Easy Connect obviates that issue. “With WPA3, it’s automatically connecting to a secure, closed network. And it’s going to have the ability to lock in those credentials so that it’s a lot easier to get a lot more IoT devices rolled out in a secure manner.”

Here again, Wi-Fi Easy Connect’s neatest trick is in its ease of use. It’s not just safe; it’s impossible to screw up.


That trend plays out also with Wi-Fi Enhanced Open, which the Wi-Fi Alliance detailed a few weeks before. You’ve probably heard that you should avoid doing any sensitive browsing or data entry on public Wi-Fi networks. That’s because with WPA2, anyone on the same public network as you can observe your activity, and target you with intrusions like man-in-the-middle attacks or traffic sniffing. On WPA3? Not so much. When you log onto a coffee shop’s WPA3 Wi-Fi with a WPA3 device, your connection will automatically be encrypted without the need for additional credentials. It does so using an established standard called Opportunistic Wireless Encryption.

“By default, WPA3 is going to be fully encrypted from the minute that you begin to do anything with regards to getting on the wireless network,” according to Rudis. “That’s fundamentally huge.”

As with the password protections, WPA3’s expanded encryption for public networks also keeps Wi-Fi users safe from a vulnerability they may not realize exists in the first place. In fact, if anything it might make Wi-Fi users feel too secure.

“The heart is in the right place, but it doesn’t stop the attack,” says Wright. “It’s a partial solution. My concern is that consumers think they have this automatic encryption mechanism because of WPA3, but it’s not guaranteed. An attacker can impersonate the access point, and then turn that feature off.”

Switching On

Even with the added technical details, talking about WPA3 still feels almost premature. While major manufacturers like Qualcomm have already committed to its implementation as early as this summer, to take full advantage of WPA3’s many upgrades, the entire ecosystem needs to embrace it.

That’ll happen in time, just as it did with WPA2. And the Wi-Fi Alliance’s Robinson says that backward interoperability with WPA2 will ensure that some added security benefits will be available as soon as the devices themselves are. “Even at the very beginning, when a user has a mix of device capabilities, if they get a network with WPA3 in it, they can immediately turn on a transitional mode. Any of their WPA3-capable devices will get the benefits of WPA3, and the legacy WPA2 devices can continue to connect,” Robinson says.

Lurking inside that assurance, though, is the reality that WPA3 will come at a literal cost. “The gotcha is that everyone’s got to buy a new everything,” says Rudis. “But at least it’s setting the framework for a much more secure setup than what we’ve got now.”

Just as importantly, that framework mostly relies on solutions that security researchers already have had a chance to poke and prod for holes. That hasn’t always been the case.

“Five years ago the Wi-Fi Alliance was creating its own protocols in secrecy, not disclosing the details, and then it turns out some of them have problems,” says Wright. “Now, they’re more adopting known and tested and vetted protocols that we have a lot more confidence in, and they’re not trying to hide the details of the system.”

Which makes sense. When you’re securing one of the most widely used technologies on Earth, you don’t want to leave anything to chance.



Cow Who Lost Her Mom Gets Adopted By Family Of Wild Deer

When a baby cow now known as Bonnie was just 4 months old, hardly ever leaving her mother’s side, her life totally changed.

The calf had been born on a farm in upstate New York. But when the owner of the farm died, the cows living there were slated to be sold. Bonnie and her mom, like so many millions of other cows, were going to end up separated.

One day last summer, as Bonnie’s herd was being loaded onto the truck, Bonnie decided to make a run for it, escaping into the forest.

Bulldog With Huge Underbite Wins ‘Ugliest Dog’ Contest

However, the bulldog, named for the actress and socialite Zsa Zsa Gabor, wasn’t always appreciated for her appearance. Zsa Zsa’s life began far from the spotlight at a Missouri puppy mill. For five years, the dog was used for breeding, until her owners decided to get rid of her in 2014.

She ended up in the care of Underdog Rescue, a rescue based in St. Louis Park, Minnesota.

“We got her spayed, got her dental [work] done and brought [her] back to health,” Shannon McKenzie, founder and director of Underdog Rescue, told The Dodo. “She is extra special because she is pretty unique.”