Tag Archives: Apps and Software

‘Esquire’ Launches Weekly iPad Publication


A little something extra is now arriving on the iPads of Esquire subscribers every week.

The monthly men’s lifestyle magazine launched a weekly, ad-supported edition dubbed Esquire Weekly, which will be automatically delivered to tablet subscribers at no additional charge. The issue will arrive every Thursday, except the Thursdays when the monthly magazine is released. Non-subscribers can pick up a copy for $0.99 per issue.

Each installment promises to contain seven pieces of original writing spanning culture, politics, humor and food, alongside some repurposed content from Esquire.com. Most of that repurposed content will be fresh for tablet subscribers, since the overlap between tablet subscribers and Esquire.com readers is “nominal,” Joe Keohane, senior editor of Esquire Digital, told Mashable. (David Granger, editor-in-chief of Esquire, has previously said the overlap is less than 10%.)

The aim of the weekly edition, says Keohane, is to attract new subscribers and “help keep existing ones.”

The issue is surprisingly meaty and beautifully produced. The first installment contains a crowdsourced advice column, a review of Star Trek Into Darkness, instructions on grilling a steak, speculation on how the latest White House scandals will play out this summer, a first-person account by a freelance war reporter who was hit by a grenade launcher and an article on the life of Brad Pitt and Angelina Jolie. There's also a smattering of smaller news items, slideshows and the like.

Images courtesy of Esquire Weekly

Read more: http://mashable.com/2013/05/20/esquire-weekly-ipad/

Navigating the Legal Pitfalls of Augmented Reality

Alysa Z. Hutnik is a partner in the advertising and privacy practices at Kelley Drye & Warren, LLP. Her co-author, Matthew P. Sullivan, is an advertising and privacy associate at Kelley Drye & Warren, LLP. Read more on Kelley Drye’s advertising law blog Ad Law Access, or keep up with the group on Facebook and Twitter.

In the past year, augmented reality (AR) has moved beyond a sci-fi novelty to a credible marketing tool for brands and retailers. While part of a niche industry, AR applications are being championed by tech players like Google and Nokia, and a host of mobile app developers have launched AR apps for the growing number of smartphones and portable computing devices. Tech analyst firm Juniper Research estimates that AR apps will generate close to $300 million in global revenue next year.

The power of AR, particularly for marketers, is its ability to overlay highly relevant, timely and interactive data about specific products or services within a user’s live physical environment. For example, companies are using AR to transform home or online shopping by bringing static, two-dimensional images to life (see Ikea’s 2013 catalog and Philips’ TV Buying Guide mobile app), or by leveraging geolocational data to augment users’ real-world retail experiences with instant data on pricing, reviews or special discounts (such as IBM’s personal shopping assistant).

If you’re considering whether to add an AR app to your marketing mix, be aware that traditional advertising law principles still apply, and that both federal and state regulators are keeping a watchful eye on AR’s potential impact on consumer privacy.

Traditional Advertising and Disclosure Rules Apply

A unique aspect of AR is that it allows retailers to give online or mobile shoppers a realistic, up-close, three-dimensional or enhanced view of their products prior to purchase (think virtual dressing rooms). If your AR app is used to promote or drive sales for a particular product, be sure to avoid overstating or exaggerating the features, functions or appearances of the product, or leaving out material information that could sway the consumer’s purchasing decision.

In September, the Federal Trade Commission (FTC) published a marketing guide for mobile app developers. It clarifies that long-standing truth-in-advertising standards apply in the virtual world to the same extent as in the real world.

The key takeaway: Disclosures must be clear and conspicuous. That is, you should look at your app from the perspective of the average user and ensure that disclosures are big and clear enough so that users actually notice them and understand what they say. Another rule of thumb is to keep your disclosures short and simple, and use consistent language and design features within your app. Before launching your app, carefully consider how best to make necessary disclosures visible and accessible in the AR context.

You can expect more guidance on disclosures in the near future when the FTC releases its updated Dot Com Disclosures Guide.

Take Consumer Privacy Seriously

To unlock AR’s full potential, developers are integrating the visual elements of AR with users’ personal information, including geolocational and financial data, facial recognition information and users’ social media contacts.

Given the increased scrutiny over mobile app privacy practices, the following four recommendations should serve as the starting point for your privacy compliance analysis as you develop your AR app.

  1. Disclose your privacy practices. As with advertising disclosures, privacy-focused disclosures must be clear and conspicuous, and they must be available before users download your app. In October, as part of an ongoing effort to improve privacy protections on mobile apps, the California Attorney General notified a number of developers that their mobile apps did not comply with state privacy laws. These laws require online operators that collect personal information to post a conspicuous privacy policy that is reasonably accessible to users prior to download. The developers have 30 days to comply or risk penalties of up to $2,500 for each time the non-compliant mobile app is downloaded.

  2. Obtain user consent before collecting location data. An increasing number of AR apps tap into geolocation data to provide the user with real-time information about their surrounding physical environment. The FTC’s guidance on mobile apps cautions developers to avoid collecting sensitive consumer data, such as precise location information, without first obtaining users’ affirmative consent.

  3. Create a plan at the outset to limit potential privacy issues. Companies like Viewdle, which was recently acquired by Google, are using facial recognition technology to enhance AR features used in mobile gaming, social networking and social media. In October, the FTC issued a report on facial recognition technology that includes the following best practices when collecting users’ personal data: (1) collect only the personal data that you need, and retain it for only as long as you need it; (2) securely store the data that you retain, limit third-party access to a need-to-know basis and safely dispose of the data; and (3) tell users when their data may be used to link them to third-party information or publicly available sources.

  4. Be careful with children. AR apps can be highly persuasive marketing tools, particularly with children, who may be unable to distinguish between the real and virtual worlds. Earlier this year, an FTC report found that few mobile apps targeted to kids included information on the apps’ data collection practices. If you collect personal information from children under 13, you need to comply with the Children’s Online Privacy Protection Act (COPPA), which requires companies to obtain verifiable consent from parents before they collect personal information from their children. Under an FTC proposal now in review, “personal information” would include (1) location data emitted by a child’s mobile device; and (2) persistent identifiers such as cookies, IP addresses and any unique device identifiers, unless this data is used only to support the internal operations of the app.

Have you interacted with AR apps? Do you have concerns about the technology’s privacy and disclosure practices? Share your take in the comments below.

Image courtesy of Flickr, jason.mcdermott.

Read more: http://mashable.com/2012/11/21/augmented-reality-advertising-privacy-law/

Why Computer Animation Looks So Darn Real


Walt Disney once said, “Animation can explain whatever the mind of man can conceive.” For Disney, this was animation’s magic — its power to bring imagination to life.

Disney died in 1966, 11 years before computer animation’s heralded debut in Star Wars, and he likely never imagined how life-like animation would become, or how pervasively it would be used in Hollywood. As viewers, we now hardly blink when we see a fully rendered alien planet or a teddy bear working the grocery store check-out counter.

Animation has largely shed its reputation as a medium for children; it’s been used far too successfully in major films to remain confined to kids. After all, who hasn’t had the experience of going to an animated film and finding the theatre packed with adults? Who doesn’t secretly remember the moment they were a little turned on during Avatar?

Considering animation’s rapid evolution, it sometimes feels like we’re just weeks away from Drake and Chris Brown settling their beef via a battle of photorealistic holograms.

So how did we get here? How did computer animation come to look so darn real?

From the MoMA to Casper

Computer animation debuted in 1967 in Belgium, and soon after at the MoMA, with Hummingbird, a ten-minute film by Charles Csuri and James Shaffer. The film depicted a line drawing of a bird programmed with realistic movements and was shown to a high-art crowd, who probably weren’t fantasizing about the medium’s potential to create a sassy talking donkey.

In 1972, Ed Catmull, future co-founder of Pixar, created the first 3D computer-animated human hand and face, which was incorporated into the 1976 sci-fi thriller Futureworld. Computer animation didn’t capture the mainstream’s attention, though, until the classic trench run sequence in Star Wars, which used 3D wireframe graphics for the first time. It was the product of a lot of guesswork and brilliance, particularly by animator Larry Cuba. If you have 10 minutes to kill, the old-school video of Cuba explaining how they pulled it off is fascinating.

The late seventies were a time, though, when innovation didn’t happen at the breakneck pace we’re accustomed to today. The next big moment for computer animation didn’t come until 1984, when a young member of George Lucas’ Lucasfilm team, John Lasseter, spearheaded a one-minute CGI film called The Adventures of Andre and Wally B, which pioneered the use of super-curved shapes to create fluid character movement, a staple of future films by DreamWorks and Pixar, where Lasseter would serve as CCO.

1986’s Labyrinth introduced the first 3D animal — an owl in the opening sequence — and 1991’s Terminator 2: Judgment Day introduced the first realistic human movements by a CGI character, not to mention Arnold Schwarzenegger’s obsession with voter demographics.

In 1993, computer animation’s reputation soared with the release of Jurassic Park and its incredibly realistic dinosaurs. The creatures sent adolescent boys into fits of delight, even though the film only used computer-animated dinosaurs for four of the fourteen minutes they were on screen.

Then came 1995 and the release of Casper, which introduced the first CGI protagonist to interact realistically with live actors, though that interaction was predominantly Christina Ricci trying to seduce a ghost.

But Casper was just a warm-up for Toy Story.

The Toy Story and Shrek Era

Six months after Casper, the first feature-length CGI film was released: Toy Story. It was an incredible four-year undertaking by Pixar’s John Lasseter and his team; the film was 81 times longer than Lasseter’s first computer-animated film a decade before. The team faced two daunting constraints: a relatively tiny $30 million budget and a small, inexperienced crew. Of the 27 animators, half were rumored to have been borderline computer illiterate when production began.

“If we’d known how small our budget and our crew was,” remembered writer Pete Docter, “we probably would have been scared out of our gourds. But we didn’t, so it just felt like we were having a good time.”

They thrived. The animators began by creating clay or computer-drawn models of the characters; once they had the models, they coded articulation and motion controls so that the characters could do things like run, jump and laugh. This was all done with the help of Menv, a modeling environment tool Pixar had been building for nine years. Menv’s models proved incredibly complex — the protagonist, Woody, required 723 motion controls. It was a strain on man and machine alike; it took 800,000 machine hours to complete the film, and it took each animator a week to successfully sync an 8-second shot.

“There are more PhDs working on this film than any other in movie history,” Pixar co-founder Steve Jobs told Wired at the time. “And yet you don’t need to know a thing about technology to love it.”

Jobs was right. Audiences loved the film not just because of the impressive animation and three-dimensional realism, but also because of a superb script and voice work by Tom Hanks, Tim Allen and Don Rickles. It sparked computer animated films’ reputation for pairing stunning visuals with compelling stories. That reputation was key, as computer animation’s evolution hinged on the willingness of studios to invest in it.

In 1998, DreamWorks’ Antz and Pixar’s A Bug’s Life maintained computer animation’s stellar reputation, while briefly terrorizing countless entomophobic parents. The flood scene in Antz received widespread praise, particularly from those who couldn’t wait for the bugs to die.

Computer animation’s next breakthrough came in 2001 with Shrek. Shrek delved into true world building; it included 36 separate in-film locations, more than any CGI feature before it. DreamWorks also made a huge advancement by taking the facial muscle rendering software it used in Antz and applying it to the whole body of Shrek’s characters.

“If you pay attention to Shrek when he talks, you see that when he opens his jaw, he forms a double chin,” supervising animator Raman Hui explained, “because we have the fat and the muscles underneath. That kind of detail took us a long time to get right.”

Shrek brought a new age of realism. Hair, skin and clothes flowed naturally in the elements; the challenge of making Donkey’s fur flow smoothly helped animators render the realistic motion of grass, moss and beards (and other things hipsters like). Shrek grossed nearly a half billion dollars, won the first-ever Academy Award For Best Animated Feature, and established DreamWorks as an animation powerhouse, alongside Disney-Pixar.

Advancements in Photorealism and Live Action

In computer animation, there are two kinds of “realness.” First, there’s the “realness” of Shrek, where the animation is still stylized and doesn’t strive for photorealism. Then, there’s photorealistic animation, which aims to make computer animation indistinguishable from live action.

The same year Shrek came out also saw the release of Final Fantasy: The Spirits Within, the first photorealistic, computer-animated feature film. It was made using motion-capture technology, which translates recorded movements into animation.

Some 1,327 live-action scenes were filmed to make the final animated product. Though the film flopped, the photorealistic visuals were a smash success. The film’s protagonist, Aki Ross, made the cover of Maxim and was the only fictional character to make its list of “Top 100 Sexiest Women Ever.” Aki was a painstaking advancement in photorealistic animation; each of her 60,000 hairs was individually animated, and she was made up of about 400,000 polygons. Entertainment Weekly raved that “Calling this action heroine a cartoon would be like calling a Rembrandt a doodle,” while naming Aki Ross to its “It” girl list.

The advancements in photorealism and motion-capture animation kept coming. In 2002’s The Lord of the Rings: The Two Towers, Gollum was the first motion-capture character to interact directly with live-action characters. Two years later, Tom Hanks’ The Polar Express ushered motion-capture films into the mainstream.

Photorealistic animation’s quantum leap came in 2009 with Avatar, a project James Cameron had delayed nearly a decade to allow the technology to catch up to his vision. Cameron commissioned the creation of a camera that recorded facial expressions of actors for animators to use later, allowing for a perfect syncing of live action with animation. Cameron demanded perfection; he reportedly ordered that each plant on the alien planet of Pandora be individually rendered, even though each one contained roughly one million polygons. No wonder it took nearly $300 million to produce Avatar.

Cameron’s goal was to create a film where the audience couldn’t tell what was animated and what was real. He succeeded. Now, the question is, “What’s next?”

What’s Next

Most people think that the animated rendering of humans hasn’t been perfected yet; Cameron’s 10-foot blue Na’vi aliens in Avatar were seen as an easier undertaking than rendering humans. Cameron doesn’t think that was the case.

“If we had put the same energy into creating a human as we put into creating the Na’vi, it would have been 100% indistinguishable from reality,” Cameron told Entertainment Weekly. “The question is, why the hell would you do that? Why not just photograph the actor? Well, let’s say Clint Eastwood really wanted to do one last Dirty Harry movie looking the way he did in 1975. He could absolutely do it now. And that would be cool.”

Cameron has repeatedly emphasized that he doesn’t view computer animation as a threat to actors, but rather as a tool to empower and transform them.

And if that means we get to experience 1975 Clint Eastwood’s career again, well, that would just go ahead and make our day.

Read more: http://mashable.com/2012/07/09/animation-history-tech/

Government Lab Reveals Quantum Internet Operated for 2 Years


One of the dreams for security experts is the creation of a quantum Internet that allows perfectly secure communication based on the powerful laws of quantum mechanics.

The basic idea here is that the act of measuring a quantum object, such as a photon, always changes it. So any attempt to eavesdrop on a quantum message cannot fail to leave telltale signs of snooping that the receiver can detect. That allows anybody to send a “one-time pad” over a quantum network which can then be used for secure communication using conventional classical communication.

That sets things up nicely for perfectly secure messaging, known as quantum cryptography, and this is actually a fairly straightforward technique for any half-decent quantum optics lab. Indeed, a company called ID Quantique sells an off-the-shelf system that has begun to attract banks and other organisations interested in perfect security.

These systems have an important limitation, however. The current generation of quantum cryptography systems are point-to-point connections over a single length of fibre, so they can send secure messages from A to B but cannot route this information onwards to C, D, E or F.

That’s because the act of routing a message means reading the part of it that indicates where it has to be routed. And this inevitably changes it, at least with conventional routers. This makes a quantum Internet impossible with today’s technology.

Various teams are racing to develop quantum routers that will fix this problem by steering quantum messages without destroying them. We looked at one of the first last year. But the truth is that these devices are still some way from commercial reality.

Today, Richard Hughes and his team at Los Alamos National Labs in New Mexico reveal an alternative quantum Internet, which they say they’ve been running for two and a half years. Their approach is to build the quantum network around a hub-and-spoke topology: all messages get routed from any point in the network to any other via the central hub.

This is not the first time this kind of approach has been tried. The idea is that messages to the hub rely on the usual level of quantum security. However, once at the hub, they are converted to conventional classical bits and then reconverted into quantum bits to be sent on the second leg of their journey.

So as long as the hub is secure, then the network should also be secure.

The problem with this approach is scalability. As the number of links to the hub increases, it becomes increasingly difficult to handle all the possible connections that can be made between one point in the network and another.

Hughes and co say they’ve solved this with a unique approach that equips each node in the network with quantum transmitters—i.e. lasers—but not with photon detectors, which are expensive and bulky. Only the hub is capable of receiving a quantum message (although all nodes can send and receive conventional messages in the normal way).

That may sound limiting, but it still allows each node to send a one-time pad to the hub, which the node then uses to communicate securely over a classical link. The hub can then route this message to another node using another one-time pad that it has set up with this second node. So the entire network is secure, provided that the central hub is also secure.
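The relay itself involves nothing more exotic than the one-time pad: XOR the message with a shared random key of equal length, and the result is unbreakable as long as the key stays secret and is never reused. Below is a minimal Python sketch of the hub's role; the node names and the locally generated pads are illustrative assumptions (in the Los Alamos system the pads are established over the quantum links, not made up on the spot), but it makes the trust assumption plain: the hub briefly handles every message in the clear.

```python
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """One-time-pad encrypt/decrypt: XOR the message with an equal-length key."""
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

# Stand-ins for the pads each node has established with the hub. In the real
# system these keys come from quantum key distribution over the fibre links.
pads = {"alice": secrets.token_bytes(64), "bob": secrets.token_bytes(64)}

def hub_relay(sender: str, receiver: str, ciphertext: bytes) -> bytes:
    """Decrypt with the sender's pad, re-encrypt with the receiver's pad.
    The hub sees the plaintext here, which is why it must be trusted."""
    plaintext = xor_bytes(ciphertext, pads[sender])
    return xor_bytes(plaintext, pads[receiver])

message = b"meet at noon"
to_hub = xor_bytes(message, pads["alice"])        # Alice encrypts for the hub
to_bob = hub_relay("alice", "bob", to_hub)        # hub re-encrypts for Bob
assert xor_bytes(to_bob, pads["bob"]) == message  # Bob recovers the plaintext
```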

The big advantage of this system is that it makes the technology required at each node extremely simple—essentially little more than a laser. In fact, Los Alamos has already designed and built plug-and-play type modules that are about the size of a box of matches. “Our next-generation [module] will be an order of magnitude smaller in each linear dimension,” they say.

Their ultimate goal is to have one of these modules built into almost any device connected to a fibre optic network, such as set-top TV boxes, home computers and so on, to allow perfectly secure messaging.

Having run this system for more than two years, Los Alamos is now highly confident in its efficacy.

Of course, the network can never be more secure than the hub at the middle of it, and this is an important limitation of the approach. By contrast, a pure quantum Internet should allow perfectly secure communication from any point in the network to any other.

Another is that this approach will become obsolete as soon as quantum routers become commercially viable. So the question for any investors is whether they can get their money back in the time before then. The odds are that they won’t have to wait long to find out.

Image via iStockphoto, muratkoc

This article was originally published at MIT Technology Review.

Read more: http://mashable.com/2013/05/06/government-lab-quantum-internet/

Nokia Announces Windows Phone 8 Version of City Lens App


After coming out of beta last week, Nokia‘s augmented reality app, Nokia City Lens, is due for another update with plenty of new features.

The upcoming Windows Phone 8 version of the app, which will work on Nokia’s Lumia 920 and 820 smartphones, is set to debut 3D icons, as well as the option of filtering search results to only show those in your line of sight.

Some WP8-specific features will also be added to the app, including the ability to pin any category to the Start screen and to customize the menu by adding your favorite searches.

Perhaps most importantly, the app will work in both landscape and portrait modes.

Nokia has yet to comment on the exact release date of its newest version of City Lens.

Would you download the app? Tell us in the comments below.

Read more: http://mashable.com/2012/09/11/nokia-city-lens-windows-8/

Google to Launch New Devices, Android 4.2 at Oct. 29 Event


Google will unveil several new devices and a software update at its scheduled Oct. 29 press event, according to a company video leaked from an all-hands meeting.

The Next Web is reporting Google has distributed an internal video that details and confirms speculations about what might be revealed at the upcoming event.

The video reportedly discusses the launch of a 32GB version of the Nexus 7 tablet, as well as one with 3G support. It also indicates Google is working with manufacturer Samsung to release a 10-inch tablet called “Nexus 10” that will run Android 4.2 (“Key Lime Pie”), and a Nexus smartphone manufactured by LG.

Meanwhile, the new Android 4.2 mobile operating system will include a panoramic camera option and “tablet sharing” capabilities, which would allow more than one user to access the device, each with their own set of email and apps — similar to how a family or business can switch between user settings on a Windows computer.

Earlier this week, Google sent invitations to the press for an Android event to be held in New York City. Although the invitation didn’t detail what might occur, the tagline — “the playground is open” — suggests it will have to do with Google Play, the company’s newly rebranded Android Market.

The news came as Microsoft prepares for its Windows Phone 8 launch event, which will also be held on Oct. 29 — and Apple gears up to unveil its rumored 7.85-inch iPad on Tuesday, Oct. 23.

Google’s new Samsung tablet is reportedly being developed under the codename “Manta.” The device is expected to have a 2560×1600 pixel resolution at 300ppi, which is greater than the iPad’s 264ppi.

Meanwhile, the 4.7-inch Nexus smartphone manufactured by LG is said to tout a quad-core 1.5 GHz Qualcomm APQ8064 Snapdragon processor, a 1280×768 display, 2GB of RAM and 16GB storage.



Watch This Magazine Cover Transform Into an Interactive Game


The cover of ShortList, a weekly men’s magazine in the UK, took on a uniquely interactive quality this week thanks to Blippar.

Using Blippar’s augmented reality app for iPhone or Android devices, readers can scan the arcade game-style art on the cover to bring a fully playable version to life on their phones, as shown in the video above. Elsewhere in the issue, readers can use the app to pull up extra slideshows, vote in polls, take quizzes and more.

Blippar, a UK-based startup that set up its first U.S. office in Manhattan earlier this year, has been making increasingly frequent appearances in ads and even on the cover of Justin Bieber’s last big album release, Believe. Meanwhile, augmented reality and other 2D-code-activated applications are being integrated into a broader array of magazine titles, including Allure, The Atlantic, Elle and Esquire.

Read more: http://mashable.com/2012/11/09/magazine-interactive-game-cover/

These Glasses Let You Play in 3D Virtual Worlds

Despite the endless gaming and interactive potential of augmented reality, the technology has been slow to gain widespread awareness and adoption. But a new system called castAR aims to push augmented reality into the mainstream, starting with a Kickstarter campaign that launched Monday.

Founded by veteran developers and former Valve employees Jeri Ellsworth and Rick Johnson, Washington-based company Technical Illusions is offering a product that delivers both augmented-reality and virtual-reality experiences.

First introduced in May as a prototype, the castAR system is centered around a pair of glasses that house two micro-projectors, one over each lens. Each projector receives its video stream via an HDMI connection, and then beams a portion of a 3D image to a flat surface made out of retro-reflective sheeting material.

Situated between the two lenses is a small camera that scans the surface for infrared markers. This arrangement allows the castAR to accurately track your head movements in relation to the holographic representations on the surface.

The product also comes with a clip-on attachment that allows the wearer to experience private augmented reality, layering virtual objects onto the real world, or virtual reality, during which all the imagery you see is computer-generated. Also included is a device called a Magic Wand that serves as a 3D input device and joystick.

Some of the potential applications for the castAR system include board games, flight simulators and first-person shooters; but the developers believe that it could also be used for interactive presentations in business.

While many companies have promised to deliver impressive augmented-reality experiences, video of the commercial version of the castAR (above) is impressive. “It’s gonna deliver on the dream of the holodeck,” Bre Pettis, CEO of Makerbot, said in the video.

For $355, early adopters can get their hands on the entire package of components, which includes the castAR glasses, the retro-reflective surface, the Magic Wand and the AR and VR clip-on. There are also several other packages offered at lower prices for those only looking to try the basics of the system.

Launched with a goal of $400,000, the team’s Kickstarter campaign has already earned over $210,000 as of this writing. Those who order the device now can expect to get it next September, according to Technical Illusions.

Image: Technical Illusions

Read more: http://mashable.com/2013/10/14/augmented-reality-glasses/

How to Detect Apps Leaking Your Data


One reason that smartphones and smartphone apps are so useful is that they can integrate intimately with our personal lives. But that also puts our personal data at risk.

A new service called Mobilescope hopes to change that by letting a smartphone user examine all the data that apps transfer, and alerting him when sensitive information, such as his name or email address, is transferred.

“It’s a platform-agnostic interception tool that you can use on your Android, iOS, Blackberry, or Windows device,” says Ashkan Soltani, an independent privacy researcher who created Mobilescope with fellow researchers David Campbell and Aldo Cortesi.

Their first proof-of-concept won a prize for the best app created during a privacy-focused programming contest, or codeathon, organized by the Wall Street Journal in April this year; the trio has now polished it enough to open a beta trial period. Access is steadily being rolled out to the “couple of thousand” people that have already signed up, says Soltani.

Once a person has signed up for the service, Mobilescope is accessed through a website rather than as an app installed on the device. A user can visit the site to see logs of the data transferred by the apps on their device. They can also specify “canaries,” pieces of sensitive information such as a phone number, email address or name that trigger an alert if an app sends them out.
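At its simplest, a canary check is just a search of every intercepted payload for the values the user registered. The sketch below illustrates the idea in Python; the function and field names are invented for illustration, since Mobilescope's internals aren't public.

```python
# Illustrative only: Mobilescope's actual implementation is not public.
canaries = {
    "email": "jane@example.com",
    "phone": "555-867-5309",
    "name": "Jane Doe",
}

def check_payload(app_name: str, payload: str) -> list:
    """Return the labels of any canaries found in an outgoing request body."""
    hits = [label for label, value in canaries.items()
            if value.lower() in payload.lower()]
    if hits:
        print(f"ALERT: {app_name} transmitted {', '.join(hits)}")
    return hits

# Example: an app quietly uploading part of the address book
check_payload("example-app",
              '{"contacts": [{"name": "Jane Doe", "tel": "555-867-5309"}]}')
```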

Mobilescope can catch apps doing things such as copying a person’s address book to a remote server, as Path and several other mobile apps were found to do earlier this year. Soltani says the service is intended to level the playing field between mobile apps and the people that use them by arming users with more information about what those apps do.

As became clear when several popular apps were caught quietly copying contact data from users earlier this year, neither Apple’s nor Google’s mobile operating systems currently offer people much insight into or control of what apps are sharing.


“Our focus is making really simple the process of interception,” says Soltani. “If you’re not an advanced user, you can still get at this data using Mobilescope.”

When a person signs up for Mobilescope, a configuration file is sent to his device. Once installed, this file causes all future Internet traffic to be routed through a Mobilescope server so that it can analyze the data that comes and goes to the device and its apps.

That arrangement is possible thanks to the way that smartphones are designed to be compatible with VPNs, or virtual private networks — encrypted communications that some businesses use to keep corporate data private. That design doesn’t add much delay to a person’s connection, says Soltani, in part because users are connected with a server as geographically close to them as possible.
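The proxy pattern itself is well established. As a rough illustration of the kind of interception involved (not Mobilescope's code), a short addon for the open-source mitmproxy tool is enough to log where each request from a proxied device goes and how much data it carries:

```python
# Sketch of a traffic-logging addon for the open-source mitmproxy tool,
# shown only to illustrate the proxy-interception pattern Mobilescope relies on.
# Run with:  mitmproxy -s log_traffic.py   (the filename is arbitrary)
from mitmproxy import http

class LogTraffic:
    def request(self, flow: http.HTTPFlow) -> None:
        # Called for every HTTP(S) request routed through the proxy.
        body = flow.request.get_text(strict=False) or ""
        print(f"{flow.request.method} {flow.request.pretty_host} "
              f"({len(body)} chars of body)")

addons = [LogTraffic()]
```

For HTTPS traffic to be readable this way, the device must also trust a certificate presented by the proxy, which is the certificate interception the next paragraph describes.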

Mobilescope can even examine data that is sent over the most common types of secure connection used by apps, similar to those used by banking websites, by intercepting the certificates involved. The service cannot decrypt other data, but Soltani says that few apps bother to use encryption. Data collected by Mobilescope is discarded after each session of use, and is only ever stored on a person’s own device.

Soltani says he doesn’t imagine Mobilescope will have the mass appeal of something like Angry Birds, but he hopes it will encourage journalists, activists, and ordinary smartphone owners to look into what apps do, and will help put more pressure on app developers to respect privacy.

“Added transparency for everyone — app developers, users, regulators — will help the whole mobile ecosystem.”

An earlier version of Mobilescope gave users the power to send fake data to certain apps, for example sending a spoof location. “We had to pull that out because the ecosystem is not ready for it,” says Soltani, who says this broke some apps, sometimes in ways that could harm other users. A separate project does make that tactic available to Android users willing to use a modified version of their operating system.


In April, Xuxian Jiang, an associate professor at North Carolina State University, published a study showing that the ad systems included in many Android apps endanger users’ privacy. Around half of these systems monitor a user’s GPS location, and some also collect call logs and other sensitive data.

Jiang, who has uncovered other security and privacy flaws with mobile apps, said Mobilescope will be an “interesting” new tool for keeping tabs on apps. However, he adds that it can’t be guaranteed to catch everything, and says mobile privacy can only be improved with greater transparency from developers, improved privacy statements, and action from the creators of mobile operating systems.

“[We] need mechanisms for users to actually control apps’ access to various personal information,” he says.

Justin Brookman, who directs consumer privacy activity at the Center for Democracy and Technology, says this will require changes to the law, which currently simply encourages companies to write very broad privacy policies to avoid the penalties for writing false ones.

“Detailed disclosures are actually deterred by the law,” he says. The CDT is attempting to get legislation introduced that instead requires companies to explicitly tell consumers what’s happening to their data, and to provide them with more control over it.

This article was originally published at MIT Technology Review.

Read more: http://mashable.com/2012/08/10/detect-apps-leaking-data/