Tag Archives: privacy

Obama: If You Trust Your Congress, Trust in the NSA Data Collection


On Friday afternoon, President Obama responded for the first time to the revelations of various National Security Agency data gathering programs—from recording all call records in and outside of the United States, to the PRISM program, which reportedly taps into the data streams of some of the largest data hosting companies in the country.

Here’s the gist: Although you, the citizens, have not heard of this, we have substantial oversight on these programs involving every branch of government. Legislators have been briefed (in regards to the telephone data, he said all members knew), and “if anybody in government wanted to go further than that top-line data … they would have to go back to a federal judge,” Obama said.

Basically, if you trust the system, you should trust us.

“In the abstract you can complain about Big Brother and how this is a potential program run amok,” the president said. “But if you look at the details … I think we have struck a nice balance.”

The president also sought to reassure Americans that “no one is listening to your telephone calls,” and said that although he came into office with “a healthy skepticism about these programs,” he is now satisfied that they don’t overreach. “The modest encroachments on privacy that are involved … it was worth us doing,” he said.

Image via Win McNamee/Getty Images

This article originally published at National Journal

Read more: http://mashable.com/2013/06/07/president-obama-nsa-response/

Navigating the Legal Pitfalls of Augmented Reality

Alysa Z. Hutnik is a partner in the advertising and privacy practices at Kelley Drye & Warren, LLP. Her co-author, Matthew P. Sullivan, is an advertising and privacy associate at Kelley Drye & Warren, LLP. Read more on Kelley Drye’s advertising law blog Ad Law Access, or keep up with the group on Facebook and Twitter.

In the past year, augmented reality (AR) has moved beyond a sci-fi novelty to a credible marketing tool for brands and retailers. While part of a niche industry, AR applications are being championed by tech players like Google and Nokia, and a host of mobile app developers have launched AR apps for the growing number of smartphones and portable computing devices. Tech analyst firm Juniper Research estimates that AR apps will generate close to $300 million in global revenue next year.

The power of AR, particularly for marketers, is its ability to overlay highly relevant, timely and interactive data about specific products or services within a user’s live physical environment. For example, companies are using AR to transform home or online shopping by bringing to life static, two-dimensional images (see Ikea’s 2013 catalog and the Philips TV Buying Guide mobile app), or by leveraging geolocational data to augment users’ real-world retail experiences with instant data on pricing, reviews or special discounts (such as IBM’s personal shopping assistant).

If you’re considering whether to add an AR app to your marketing mix, be aware that traditional advertising law principles still apply, and that both federal and state regulators are keeping a watchful eye on AR’s potential impact on consumer privacy.

Traditional Advertising and Disclosure Rules Apply

A unique aspect of AR is that it allows retailers to give online or mobile shoppers a realistic, up-close, three-dimensional or enhanced view of their products prior to purchase (think virtual dressing rooms). If your AR app is used to promote or drive sales for a particular product, be sure to avoid overstating or exaggerating the features, functions or appearances of the product, or leaving out material information that could sway the consumer’s purchasing decision.

In September, the Federal Trade Commission (FTC) published a marketing guide for mobile app developers. It clarifies that long-standing truth-in-advertising standards apply in the virtual world to the same extent as in the real world.

The key takeaway: Disclosures must be clear and conspicuous. That is, you should look at your app from the perspective of the average user and ensure that disclosures are big and clear enough so that users actually notice them and understand what they say. Another rule of thumb is to keep your disclosures short and simple, and use consistent language and design features within your app. Before launching your app, carefully consider how best to make necessary disclosures visible and accessible in the AR context.

You can expect more guidance on disclosures in the near future when the FTC releases its updated Dot Com Disclosures Guide.

Take Consumer Privacy Seriously

To unlock AR’s full potential, developers are integrating the visual elements of AR with users’ personal information, including geolocational and financial data, facial recognition information and users’ social media contacts.

Given the increased scrutiny over mobile app privacy practices (see here and here), the following four recommendations should serve as the starting point for your privacy compliance analysis as you develop your AR app.

  1. Disclose your privacy practices. As with advertising disclosures, privacy-focused disclosures must be clear and conspicuous, and they must be available before users download your app. In October, as part of an ongoing effort to improve privacy protections on mobile apps, the California Attorney General notified a number of developers that their mobile apps did not comply with state privacy laws. These laws require online operators that collect personal information to post a conspicuous privacy policy that is reasonably accessible to users prior to download. The developers have 30 days to comply or risk penalties of up to $2,500 for each time the non-compliant mobile app is downloaded.

  2. Obtain user consent before collecting location data. An increasing number of AR apps tap into geolocation data to provide the user with real-time information about their surrounding physical environment. The FTC’s guidance on mobile apps cautions developers to avoid collecting sensitive consumer data, such as precise location information, without first obtaining users’ affirmative consent.

  3. Create a plan at the outset to limit potential privacy issues. Companies like Viewdle, which was recently acquired by Google, are using facial recognition technology to enhance AR features used in mobile gaming, social networking and social media. In October, the FTC issued a report on facial recognition technology that includes the following best practices when collecting users’ personal data: (1) collect only the personal data that you need, and retain it for only as long as you need it; (2) securely store the data that you retain, limit third-party access to a need-to-know basis and safely dispose of the data; and (3) tell users when their data may be used to link them to third-party information or publicly available sources.

  4. Be careful with children. AR apps can be highly persuasive marketing tools, particularly with children, who may be unable to distinguish between the real and virtual worlds. Earlier this year, an FTC report found that few mobile apps targeted to kids included information on the apps’ data collection practices. If you collect personal information from children under 13, you need to comply with the Children’s Online Privacy Protection Act (COPPA), which requires companies to obtain verifiable consent from parents before they collect personal information from their children. Under an FTC proposal now in review, “personal information” would include (1) location data emitted by a child’s mobile device; and (2) persistent identifiers such as cookies, IP addresses and any unique device identifiers, unless this data is used only to support the internal operations of the app.

Have you interacted with AR apps? Do you have concerns about the technology’s privacy and disclosure practices? Share your take in the comments below.

Image courtesy of Flickr, jason.mcdermott.

Read more: http://mashable.com/2012/11/21/augmented-reality-advertising-privacy-law/

Why Insiders, Not Hackers, Are Biggest Threat to Cybersecurity


The NSA leaks perpetrated by Edward Snowden will easily go down as one of the biggest revelations of the year, if not the decade. But the episode also raises new questions about the risk that insiders pose to government and corporate cybersecurity, in spite of the attention lavished on foreign hackers.

Snowden’s case is unique in that it uncovered a previously unknown surveillance apparatus that’s massive in size and scope. It’s not unique, however, in the way the whistleblower did his deed. Two-thirds of all reported data breaches involve internal actors wittingly or unwittingly bringing sensitive information to outsiders, according to industry analysts.

“It’s not an either-or proposition,” said Mike DuBose, a former Justice Department official who led the agency’s efforts on trade-secret theft. “But amidst all the concern and discussion over foreign hacking, what gets lost is the fact that the vast majority of serious breaches involving trade secrets or other proprietary or classified information are still being committed by insiders.”

DuBose is now the head of the Cyber Investigations unit at the risk-management firm Kroll Advisory Solutions. In February, his team authored a report warning that contractors, information-technology personnel and disgruntled employees—all descriptors that fit Snowden pretty well—pose a greater threat than hackers, “both in frequency and in damage caused.”

Not everyone agrees. Even though insiders generally play an outsized role across all reported data breaches, their role in confirmed data breaches is rather small, according to an annual study by Verizon. In 2012 specifically, internal actors accounted for 14% of confirmed data breaches. Of those, system administrators were responsible for 16%.

“Our findings consistently show,” the report read, “that external actors rule.”

However common they are, cases like Snowden’s show how devastating one insider can be. The extent of the damage depends on what’s being exfiltrated, and from where, and there aren’t many standards for calculating losses. Most companies estimate the value of their trade secrets based on how much money they sank into the research and development of that knowledge. But for the government, it’s the potential impact on security that takes precedence—and that turns the question into a matter of subjective debate.

Last month, The Washington Post reported that Chinese spies compromised the designs for some of the Pentagon’s most sensitive weapons systems, including the F-35 Joint Strike Fighter, the V-22 Osprey tiltrotor aircraft and the Navy’s new Littoral Combat Ship.

If true, the report could have major consequences for national security. But Snowden’s case is equally consequential, if for different reasons, and it bolsters DuBose’s point about the relevance of insiders. Snowden may have rightfully uncovered evidence of government overreach, but if a midlevel contractor can steal top-secret information about the NSA and give it to the public in a gesture of self-sacrifice, someone else could do the same and hand the intelligence to more nefarious actors.

Image via iStockphoto, kynny

This article originally published at National Journal

Read more: http://mashable.com/2013/06/10/insiders-hackers-cybersecurity/

Intel Fuels a Rebellion Around Your Data


The world’s largest chip maker wants to see a new kind of economy bloom around personal data.

Intel is a $53-billion-a-year company that enjoys a near monopoly on the computer chips that go into PCs. But when it comes to the data underlying big companies like Facebook and Google, it says it wants to “return power to the people.”

Intel Labs, the company’s R&D arm, is launching an initiative around what it calls the “data economy”—how consumers might capture more of the value of their personal information, like digital records of their location or work history. To make this possible, Intel is funding hackathons to urge developers to explore novel uses of personal data. It has also paid for a rebellious-sounding website called We the Data, featuring raised fists and stories comparing Facebook to Exxon Mobil.

Intel’s effort to stir a debate around “your data” is just one example of how some companies—and society more broadly—are grappling with a basic economic asymmetry of the big data age: they’ve got the data, and we don’t.

Internet firms like Google and Amazon are concentrating valuable data about consumers at an unprecedented scale as people click around the Web. But regulations and social standards haven’t kept up with the technical and economic shift, creating a widening gap between data haves and have-nots.

“As consumers, we have no right to know what companies know about us. As companies, we have few restrictions on what we can do with this data,” says Hilary Mason, chief data scientist at Bit.ly, a social-media company in New York. “Even though people derive value, and companies derive value, it’s totally chaotic who has rights to what, and it’s making people uncomfortable.”

In February, for instance, legislators in California introduced the first U.S. law to give individuals a complete view into their online personas. The “Right to Know” bill would let citizens of the state demand a detailed report showing all the information about them that companies like LinkedIn or Google had stored, and whom they had shared it with.

That bill quickly got shelved under pressure from lobbyists for technology companies, who called it “unworkable” and financially damaging to Internet firms and said lawmakers don’t understand “how the Internet works.” Some of the data covered in the bill, like a computer’s IP address, or location, is so basic to communication between machines on the Internet that companies admitted they don’t even know where it ends up.

And that’s the wider dilemma: our personal data is inextricably tied to “big data”—those far larger data sets that now power many of the online services we use. If you don’t tell a navigation app where you are, it can’t tell you where to turn, or tell others there’s traffic ahead. One doesn’t work without the other. What’s more, the economic importance of products fueled with personal data is growing rapidly.

According to the Boston Consulting Group, as methods for basing transactions on a person’s digital records have spread from banks to retailers and other sectors, the financial value that companies derived from personal data in Europe was $72 billion in 2011. The consultants concluded that “personal data has become a new form of currency.”

Yet that doesn’t mean it’s a currency easily understood or traded on by individuals. Although a few startups have attempted to help individuals monetize their personal facts, the truth is that information about people’s identity and habits has financial value mostly in the aggregate. A single user’s value to Facebook, for instance, is only about $5 a year. Mason, the Bit.ly executive, says trying to put a value on one person’s data is like calculating the value of one unmatched shoe. “And here we are talking about sets of millions or billions of shoes,” she says. “I just don’t think that data plays by the economics of any goods we are familiar with.”

Some believe the market may have already found the right economic balance. “It seems like we have a working model where companies own our data and we’re okay with that because of the free stuff, personalization, and convenience we get in return,” says Gam Dias, CEO of First Retail, an e-commerce consulting company. “There’s not a lot I’m going to do with my extra data anyway. I already know who I am and what I want.”

Intel this year judged the questions swirling around personal data important enough to launch a “Data Economy Initiative,” a multiyear study whose goal is to explore new uses of technology that might let people benefit more directly, and in new ways, from their own data, says Ken Anderson, a cultural anthropologist who is in charge of the project.

Anderson, who once helped Apple develop the sliding application bar that appears on Mac computers (after studying how people organized their desks and stacked items on shelves), says Intel believes technology based on personal data may end up in the control of individuals, in much the same way that mainframe computers gave way to PCs. “It doesn’t matter what you look at in terms of technology. Usually, there is this move toward individualization,” he says.

Intel, which has started surveying consumer opinions, has also been supporting efforts like a competition in New York last fall in which developers wrote apps for the elderly and single mothers. It’s also underwriting the National Day of Civic Hacking, an event focused on new uses of municipal data being released by city governments, such as records of health inspections.

It’s too early to say just what kinds of products might result for Intel, Anderson says. “When you talk about the data economy, it’s really something that doesn’t yet exist,” he says. “There are people who [are] trying to control a lot of your personal data. But that’s not an economy—that’s just profit for one company.”

Image via ROBYN BECK/AFP/Getty Images

This article originally published at MIT Technology Review

Read more: http://mashable.com/2013/05/20/intel-data-economy/

Man Refuses to Stop Drone-Spying on Seattle Woman


Walk onto someone’s lawn and you’re trespassing; fly over it in a helicopter and you’re in the clear — “the air is a public highway,” the Supreme Court declared in 1946. But what about the in-between space? Does the availability of unmanned aerial vehicles (aka drones, aka UAVs) throw a wrench in the old legal understandings?

Well, here’s where the rubber meets the road for this abstract line of questioning. The Capitol Hill Seattle Blog reports a complaint it received from a resident in the Miller Park neighborhood. She writes:

This afternoon, a stranger set an aerial drone into flight over my yard and beside my house near Miller Playfield. I initially mistook its noisy buzzing for a weed-whacker on this warm spring day. After several minutes, I looked out my third-story window to see a drone hovering a few feet away. My husband went to talk to the man on the sidewalk outside our home who was operating the drone with a remote control, to ask him to not fly his drone near our home. The man insisted that it is legal for him to fly an aerial drone over our yard and adjacent to our windows. He noted that the drone has a camera, which transmits images he viewed through a set of glasses. He purported to be doing “research”. We are extremely concerned, as he could very easily be a criminal who plans to break into our house or a peeping-tom.

The site adds, “The woman tells us she called police but they decided not to show up when the man left.”

But even given the Supreme Court’s finding that The Atlantic‘s Alexis Madrigal raised in October, it’s unclear whether this stranger’s drone-flight — not to mention his photography — was legal under current law.

John Villasenor, author of a recent Harvard Journal of Law and Public Policy article about the laws governing drones and privacy, explained to me over email that it’s difficult to analyze the legalities of the case without more information. What kind of drone was it? How was it flown? These questions would be instrumental to determining whether it was operated in accordance with FAA regulations.

As for the privacy concerns, one of the most important questions is what was being photographed. “If the camera on the drone was always aimed at the public street,” Villasenor writes, “then that’s very different than if it was capturing images into the home through the window.”

The First Amendment provides a right to gather information, but that right is not unbounded; it ends, Villasenor writes, “when it crosses into an invasion of privacy.” He continued, “Putting a stepladder up against someone else’s home without permission, climbing up the ladder, and then photographing into a second-floor window would be an invasion of privacy. Using a drone just outside the window to obtain those same photographs would be just as much an invasion of privacy.”

New technologies may present new ways of violating people’s privacy, but that doesn’t mean they’re legal. It will take courts years to figure out how to apply our laws to our age of drones (and years for legislators to revise them — they’re not, after all, perfect), but we’re not starting from scratch. That said, police (or other law-enforcement agents) will need to actually enforce existing laws, or they’re not all that helpful.

Editor’s Note: Man in photo is not in any way affiliated with the subject of this story. Image via ROBYN BECK/AFP/Getty Images

This article originally published at The Atlantic

Read more: http://mashable.com/2013/05/13/seattle-man-drone-spying-woman/

Mobile Payment Chips Could Let Hackers Into Your Phone


In a packed room at the Black Hat computer security conference in Las Vegas yesterday, an Android smartphone was tapped with a white plastic card, and within seconds it was running malicious code that allowed an attacker to remotely access the device.

The demonstration was given by high-profile hacker Charlie Miller, who was the first person to demonstrate a way to seize control of the iPhone, in 2007, and who has demonstrated many novel attacks on Apple devices since. He outlined a number of reasons why the contactless near-field communication, or NFC, chips appearing in smartphones will bring new security worries as well as convenient new features — a talk that was the result of nine months of research.

“There’s going to be a lot of phones coming out with this technology, and so it would be nice to know if there’s any security problems in it,” said Miller.

A smartphone with an NFC chip can be used to pay for items when tapped on a reader (see: A New Kind of Smartphone Connection). The device uses weak radio waves to communicate either with another NFC device in close range or with passive tags such as those used by some mass transit payment cards.

Google is positioning NFC as a major feature of its Android operating system, in support of its Google Wallet payments service (see: Google Wallet: Who’ll Buy In?). Several Android phones with NFC are already available; Nokia has released some models and has plans for more; and Apple is rumored to be adding NFC to future iPhones.

Miller believes the influx of NFC devices could bring problems. “NFC is cool [for hackers] because you don’t need to have the user do anything,” said Miller. In contrast, in order to compromise a computer or non-NFC phone, criminals typically have to trick users into doing something out of the ordinary, such as opening a Web page or e-mail attachment they shouldn’t.

Miller’s Android NFC hack was made possible by a feature called Android Beam, which allows phones with NFC chips to exchange photos and other data. An NFC-equipped phone can send a URL to another when the two are tapped together, and the receiving device will open the page without offering the user a chance to decline.

Miller created a passive NFC tag that mimicked a phone using Android Beam to send a Web address, and made use of a bug in Google’s browser previously discovered by researchers at security startup CrowdStrike to gain control (see: How a Web Link Can Take Control of Your Phone).
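Android Beam payloads are ordinary NDEF messages, so a malicious tag like Miller's just needs a well-formed URI record pointing at an attack page. A minimal sketch of how such a record is laid out, assuming a single short record and an abbreviated URI-prefix table (the URL is a placeholder; see the NFC Forum NDEF and URI Record Type specs for the full format):

```python
def ndef_uri_record(url: str) -> bytes:
    """Build a single short NDEF record of well-known type 'U' (URI).

    Byte layout (per the NFC Forum NDEF spec):
      header byte - MB, ME, SR flags set; TNF=0x01 (well-known) -> 0xD1
      type length (1), payload length, type ('U'),
      payload - 1-byte URI prefix code, then the rest of the URI.
    """
    # Abbreviated prefix table; the spec defines many more codes.
    # Longer prefixes are listed first so they match before shorter ones.
    prefixes = {"https://www.": 0x02, "http://www.": 0x01,
                "https://": 0x04, "http://": 0x03}
    code, rest = 0x00, url  # 0x00 = no abbreviation
    for prefix, c in prefixes.items():
        if url.startswith(prefix):
            code, rest = c, url[len(prefix):]
            break
    payload = bytes([code]) + rest.encode("utf-8")
    header = 0xD1  # MB=1, ME=1, SR=1, TNF=001 (well-known)
    return bytes([header, 0x01, len(payload), ord("U")]) + payload

record = ndef_uri_record("http://www.example.com/exploit")
print(record.hex())
```

Written to a cheap passive tag, a record like this is enough: a Beam-enabled phone that reads it opens the URL in the browser with no user confirmation, which is exactly the behavior Miller chained with the browser bug.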

NFC interactions typically require being within four centimeters of a phone, says Miller, so using such attacks against a person in the street would be difficult. “A more realistic attack is replacing an NFC reader [for accepting payments], in a cab or somewhere else, with a malicious one,” he said. Passive NFC tags are increasingly being used in posters and other marketing materials and could be used for such attacks too.

Miller also presented evidence that sending a corrupt NFC signal to a contactless phone could cause it to access and run malicious code. He probed for weaknesses in Nokia and Google’s NFC software by sending tens of thousands of slightly modified signals to see if any would cause problems. Miller said that he has found several promising bugs that could allow the execution of code to steal data or take control of a device.
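The probing Miller describes is classic mutation fuzzing: start from a valid message, randomly corrupt a few bytes, and watch the target for crashes. A minimal sketch of the mutation step (the seed record and trial count are illustrative; a real NFC fuzzing rig would also need a transmitter and crash monitoring, which this omits):

```python
import random

def mutate(message: bytes, n_flips: int = 1, rng: random.Random = None) -> bytes:
    """Return a copy of `message` with `n_flips` randomly chosen bytes
    replaced by random values -- the core step of a mutation fuzzer."""
    rng = rng or random.Random()
    out = bytearray(message)
    for _ in range(n_flips):
        i = rng.randrange(len(out))
        out[i] = rng.randrange(256)
    return bytes(out)

# Seed: a valid short NDEF URI record for "example.com" (illustrative).
seed = bytes.fromhex("d1010c55016578616d706c652e636f6d")
rng = random.Random(0)
cases = [mutate(seed, n_flips=2, rng=rng) for _ in range(10_000)]
# Each case would then be written to a tag or transmitted to the target
# device, which is monitored for hangs, crashes, or unexpected behavior.
print(len(cases))
```

Slightly malformed inputs like these exercise the parser's edge cases (bad lengths, invalid flags, truncated payloads), which is where memory-corruption bugs of the kind Miller found tend to live.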

This article originally published at MIT Technology Review

Read more: http://mashable.com/2012/07/26/mobile-payment-hackers/