AR could be a privacy nightmare – or a chance to rethink computing


Augmentation doesn’t just mean adding things to the wearer’s environment. It also means letting a computer platform capture and analyze that environment, often without the consent of the people in it.

Take facial recognition, a looming crisis at the heart of AR. Smartphone apps have used facial recognition for years to tag and sort people, and one of the more intuitive applications for AR glasses is simply remembering people’s names (along with other basic information, like where you met them). It’s also a potential disaster for privacy.

In 2020, the Electronic Frontier Foundation sounded the alarm about the surveillance capabilities of AR glasses. Any company offering facial recognition through glasses, wrote Katitza Rodriguez and Kurt Opsahl, “could have a live audiovisual window in every corner of the world, with the ability to locate and identify anyone at any time.”

So far, most AR systems have avoided facial recognition. Phone-based platforms like Snapchat, as well as Facebook’s Portal, use face detection, which can pick out facial features to add special effects but doesn’t match them against a database of specific people. Google refused to approve facial recognition apps for its 2013 Glass Explorer Edition headset, though an unofficial app angered lawmakers by attempting it.
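The distinction these platforms rely on is the line between face detection and face recognition. The sketch below is illustrative placeholder code only, not any vendor’s actual SDK; the function names and the toy “embedding” are invented for this example. It shows why the two capabilities are so different in privacy terms: detection only yields the geometry needed to anchor an effect, while recognition turns a face into a signature and compares it against a stored database of identities.

```python
# Illustrative sketch only; placeholder logic, not any platform's real SDK.
import numpy as np


def detect_face(frame: np.ndarray) -> dict:
    """Face DETECTION: locate a face and its landmarks in a frame.
    That is enough to anchor a lens or filter, and says nothing about
    who the person is. (Placeholder: a real detector uses a trained
    model, not a fixed box in the middle of the image.)"""
    h, w = frame.shape[:2]
    return {
        "bounding_box": (w // 4, h // 4, w // 2, h // 2),  # x, y, width, height
        "landmarks": {"left_eye": (w // 3, h // 3), "right_eye": (2 * w // 3, h // 3)},
    }


def embed_face(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Toy 'embedding': real systems use a neural network that maps a face
    crop to a vector so that images of the same person land close together."""
    x, y, bw, bh = box
    crop = frame[y:y + bh, x:x + bw].astype(float)
    return crop.mean(axis=(0, 1)) / 255.0


def recognize_face(frame: np.ndarray, known_people: dict, threshold: float = 0.1):
    """Face RECOGNITION: compare the embedding against a database of
    identities: the step that effect-only platforms deliberately skip."""
    box = detect_face(frame)["bounding_box"]
    query = embed_face(frame, box)
    best_name, best_dist = None, float("inf")
    for name, reference in known_people.items():
        dist = float(np.linalg.norm(query - reference))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None


if __name__ == "__main__":
    frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    # Detection alone is enough to drive an AR effect...
    print(detect_face(frame)["landmarks"])
    # ...recognition only works because a database of people exists.
    database = {"Alice": np.array([0.5, 0.5, 0.5])}
    print(recognize_face(frame, database))
```

The privacy risk scales with that database: detection never needs to know who anyone is, while recognition is only useful because it does.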

But the EFF’s concerns weren’t unfounded. Meta executive Andrew Bosworth reportedly told employees the company is weighing the costs and benefits of facial recognition for its Project Aria glasses, calling it perhaps the “thorniest issue” in AR. And outside of AR, some people are pushing for a near-total ban on the technology. Researcher Luke Stark has compared facial recognition to “AI plutonium,” arguing that any potential benefits are vastly outweighed by its social harms. AR is a ready-made test bed for the widespread public use of facial recognition, and by the time the potential damage is evident, it might be too late to fix it.

It’s also unclear how AR systems will make money – and what kinds of behavior the resulting business models will encourage. Some companies in the field, like Apple, sell traditional hardware. Others, like Facebook and Snap, have built their businesses on ad-supported social media.

Facebook said it wasn’t yet considering business models for its glasses, and Snap said advertising wasn’t the only option, touting the value of experiences like augmented reality-powered retail. But even businesses with no advertising experience see its appeal. In a 2017 patent filing, Magic Leap imagined Starbucks detecting when a headset wearer was looking at a branded cup of coffee, then offering a coupon for the nearest Starbucks store.

Even basic AR apps, like mapping an apartment to place a virtual screen, could collect a huge amount of information. (How big is your living space? What books are on your shelves? How healthy are the snacks on your kitchen counter?) Without strong privacy protections, it will be incredibly tempting to use that data for advertising. And the more data companies collect and store, the more likely it is to be used for something even more invasive, like setting your insurance premiums – or to fall prey to a security breach.

Some groups are trying to get ahead of broader questions of AR policy. The nonprofit XR Safety Initiative offers policy frameworks for security and privacy in the industry, building on existing laws such as Europe’s General Data Protection Regulation. On the corporate side, Facebook Reality Labs has announced a set of responsible innovation principles designed to allay the concerns about trust, privacy, and consent that have dogged the company. It has also awarded a series of academic grants to study specific issues in AR, selecting proposals such as “Social Tensions with Always-Available AR for Accessibility” and “Anticipating Virtual Gossip: What Are (Un)intentional Dark Patterns in Ubiquitous Augmented Reality?”

Early consumer hardware efforts, however, haven’t navigated these pitfalls particularly well. Google saw its 2013 Glass Explorer Edition headset banned from some bars and restaurants because the expensive device was seen as invasive and snobbish rather than futuristic and liberating. This shouldn’t have been surprising: around the same time, researchers at the University of Washington interviewed people in coffee shops where someone was wearing a fake AR headset, and the results were roughly a 50-50 mix of indifference and negativity. (Only one person had a positive reaction.) But instead of planning around some very predictable anxieties, Google CEO Larry Page called the privacy concerns “not that big of a deal.”

