Tag Archives: Entertainment

Oculus Rift Unveils New Virtual Reality Headset for Devs to Play With

The second generation Oculus Rift will be available for game developers this summer.

Game developers interested in creating games in virtual reality will get an upgraded set of tools from the Oculus Rift team this summer, the company announced Wednesday morning.

The second-generation Oculus Rift development kit is available for preorder starting Wednesday for developers. The virtual reality headset, which began as a Kickstarter campaign in 2012, now has 50,000 units in the hands of developers interested in creating games for it.

Oculus VR Vice President of Product Nate Mitchell said the new kit doesn’t resemble anything consumers will eventually see, but it is much further along the path to the company’s vision for virtual reality than the previous Oculus Rift model. A consumer version is still not under discussion, he added.

“We’ve learned a lot of lessons from our original vision,” Mitchell said.

The new Oculus Rift headset solves many users’ latency issues, eliminating the motion blur that was easy to spot if you moved your head too quickly. It features a brighter, higher-resolution OLED screen with 960 x 1080 pixels per eye, up from 640 x 800 per eye on the current kit.

A straight-on view of the updated Oculus Rift.

The new headset also boasts improved positional tracking, part of the Crystal Cove prototype the company showed off during CES 2014. Mitchell said the new features will let developers bring more complex elements into the games they produce for virtual reality, including text and UI layouts, both of which were previously very difficult to add.

The new headset will cost $350 for developers and will ship sometime in July of this year.

Virtual reality may be the belle of the ball at the Game Developers Conference this week. Sony also used the conference to announce its own virtual reality headset for the PlayStation 4, currently called Project Morpheus. Sony remained mum on setting a date for its headset to reach consumers.

Read more: http://mashable.com/2014/03/19/oculus-rift-second-generation/

Why Computer Animation Looks So Darn Real

Walt Disney once said, “Animation can explain whatever the mind of man can conceive.” For Disney, this was animation’s magic — its power to bring imagination to life.

Disney died in 1966, 11 years before computer animation’s heralded debut in Star Wars, and he likely never imagined how life-like animation would become, or how pervasively it would be used in Hollywood. As viewers, we now hardly blink when we see a fully rendered alien planet or a teddy bear working the grocery store check-out counter.

Animation has largely shed its reputation as a medium for children; it’s been used far too successfully in major films to remain confined to kids. After all, who hasn’t had the experience of going to an animated film and finding the theatre packed with adults? Who doesn’t secretly remember the moment they were a little turned on during Avatar?

Considering animation’s rapid evolution, it sometimes feels like we’re just weeks away from Drake and Chris Brown settling their beef via a battle of photorealistic holograms.

So how did we get here? How did computer animation come to look so darn real?

From the MoMA to Casper

Computer animation debuted in 1967 in Belgium, and soon after at the MoMA, with Hummingbird, a ten-minute film by Charles Csuri and James Shaffer. The film depicted a line drawing of a bird programmed with realistic movements and was shown to a high-art crowd, who probably weren’t fantasizing about the medium’s potential to create a sassy talking donkey.

In 1972, Ed Catmull, future co-founder of Pixar, created the first 3D computer-animated human hand and face, which were incorporated into the 1976 sci-fi thriller Futureworld. Computer animation didn’t capture the mainstream’s attention, though, until the classic trench run sequence in Star Wars, which used 3D wireframe graphics for the first time. It was the product of a lot of guesswork and brilliance, particularly by animator Larry Cuba. If you have 10 minutes to kill, Cuba’s old-school video explaining how they pulled it off is fascinating.

The late seventies were a time, though, when innovation didn’t happen at the breakneck pace we’re accustomed to today. The next big moment for computer animation didn’t come until 1984, when a young member of George Lucas’s Lucasfilm team, John Lasseter, spearheaded a one-minute CGI film called The Adventures of André and Wally B., which pioneered the use of super-curved shapes to create fluid character movement, a staple of future films by DreamWorks and Pixar, where Lasseter would serve as chief creative officer.

1986’s Labyrinth introduced the first 3D CGI animal — an owl in the opening sequence — and 1991’s Terminator 2: Judgment Day introduced the first realistic human movements by a CGI character, not to mention Arnold Schwarzenegger’s obsession with voter demographics.

In 1993, computer animation’s reputation soared with the release of Jurassic Park and its incredibly realistic dinosaurs. The creatures sent adolescent boys into fits of delight, even though the film used computer-animated dinosaurs for only four of the fourteen minutes they were on screen.

Then came 1995 and the release of Casper, which introduced the first CGI protagonist to interact realistically with live actors, though that interaction was predominantly Christina Ricci trying to seduce a ghost.

But Casper was just a warm-up for Toy Story.

The Toy Story and Shrek Era

Six months after Casper, the first feature-length CGI film was released: Toy Story. It was an incredible four-year undertaking by Pixar’s John Lasseter and his team; the film was 81 times longer than Lasseter’s first computer-animated film a decade before. They faced two potentially fatal challenges: a relatively tiny $30 million budget, and a small, inexperienced team. Of the 27 animators, half were rumored to have been borderline computer-illiterate when production began.

“If we’d known how small our budget and our crew was,” remembered writer Pete Docter, “we probably would have been scared out of our gourds. But we didn’t, so it just felt like we were having a good time.”

They thrived. The animators began by creating clay or computer-drawn models of the characters; once they had the models, they coded articulation and motion controls so that the characters could do things like run, jump and laugh. This was all done with the help of Menv, a modeling environment tool Pixar had been building for nine years. Menv’s models proved incredibly complex — the protagonist, Woody, required 723 motion controls. It was a strain on man and machine alike; it took 800,000 machine hours to complete the film, and it took each animator a week to successfully sync an 8-second shot.
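
Menv was Pixar’s own proprietary tool and its internals were never published, but the basic idea of a “motion control” is easy to picture: a rig exposes a set of named, animatable values, and a shot is a set of keyframes on those values. Here is a minimal, hypothetical Python sketch of that idea; every name and number below is invented for illustration, not taken from Menv.

```python
# Hypothetical sketch of the "motion control" idea: a rig exposes named,
# animatable scalar controls, and a shot keyframes them over time.
# Menv itself was far more sophisticated; nothing here reflects its real API.
from bisect import bisect_right

class Rig:
    def __init__(self, name, controls):
        self.name = name
        self.controls = set(controls)  # e.g. {"jaw_open", "right_arm_swing"}

def value_at(keys, frame):
    """Linearly interpolate a control value from sorted (frame, value) keys."""
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    i = bisect_right([f for f, _ in keys], frame)
    (f0, v0), (f1, v1) = keys[i - 1], keys[i]
    return v0 + (frame - f0) / (f1 - f0) * (v1 - v0)

def pose(rig, keyframes, frame):
    """Evaluate every keyframed control on the rig at a given frame."""
    return {c: value_at(k, frame) for c, k in keyframes.items() if c in rig.controls}

# Woody's real rig had 723 controls; three invented ones stand in for them here.
woody = Rig("Woody", ["jaw_open", "left_brow_raise", "right_arm_swing"])
wave = {"right_arm_swing": [(0, 0.0), (12, 1.0), (24, 0.0)]}  # one-second wave at 24 fps
print(pose(woody, wave, 6))  # {'right_arm_swing': 0.5}
```

Multiply that bookkeeping by 723 controls and the roughly 116,000 frames of an 81-minute film, and the 800,000 machine hours start to make sense.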

“There are more PhDs working on this film than any other in movie history,” Pixar co-founder Steve Jobs told Wired at the time. “And yet you don’t need to know a thing about technology to love it.”

Jobs was right. Audiences loved the film not just because of the impressive animation and three-dimensional realism, but also because of a superb script and voice work by Tom Hanks, Tim Allen and Don Rickles. It sparked computer animated films’ reputation for pairing stunning visuals with compelling stories. That reputation was key, as computer animation’s evolution hinged on the willingness of studios to invest in it.

In 1998, DreamWorks’ Antz and Pixar’s A Bug’s Life maintained computer animation’s stellar reputation, while briefly terrorizing countless entomophobic parents. The flood scene in Antz received widespread praise, particularly from those who couldn’t wait for the bugs to die.

Computer animation’s next breakthrough came in 2001 with Shrek. Shrek delved into true world building; it included 36 separate in-film locations, more than any CGI feature before it. DreamWorks also made a huge advancement by taking the facial muscle rendering software it used in Antz and applying it to the whole body of Shrek’s characters.

“If you pay attention to Shrek when he talks, you see that when he opens his jaw, he forms a double chin,” supervising animator Raman Hui explained, “because we have the fat and the muscles underneath. That kind of detail took us a long time to get right.”

Shrek brought a new age of realism. Hair, skin and clothes flowed naturally in the elements; the challenge of making Donkey’s fur flow smoothly helped animators render the realistic motion of grass, moss and beards (and other things hipsters like). Shrek grossed nearly a half billion dollars, won the first-ever Academy Award for Best Animated Feature, and established DreamWorks as an animation powerhouse alongside Disney-Pixar.

Advancements in Photorealism and Live Action

In computer animation, there are two kinds of “realness.” First, there’s the “realness” of Shrek, where the animation is still stylized and doesn’t strive for photorealism. Then, there’s photorealistic animation, which aims to make computer animation indistinguishable from live action.

In the same year Shrek was released, we also saw the release of Final Fantasy: The Spirits Within, the first photorealistic, computer-animated feature film. It was filmed using motion-capture technology, which translates recorded movements into animation.

A total of 1,327 live-action scenes were filmed to make the final animated product. Though the film flopped, the photorealistic visuals were a smash success. The film’s protagonist, Aki Ross, made the cover of Maxim and was the only fictional character to make its list of the “Top 100 Sexiest Women Ever.” Aki was a painstaking advancement in photorealistic animation; each of her 60,000 hairs was individually animated, and she was made up of about 400,000 polygons. Entertainment Weekly raved that “calling this action heroine a cartoon would be like calling a Rembrandt a doodle,” while naming Aki Ross to its “It” girl list.

The advancements in photorealism and motion-capture animation kept coming. In 2002’s The Lord of the Rings: The Two Towers, Gollum became the first motion-capture character to interact directly with live-action characters. Two years later, Tom Hanks’s The Polar Express ushered motion-capture films into the mainstream.

Photorealistic animation’s quantum leap came in 2009 with Avatar, a project James Cameron had delayed nearly a decade to allow the technology to catch up to his vision. Cameron commissioned the creation of a camera that recorded facial expressions of actors for animators to use later, allowing for a perfect syncing of live action with animation. Cameron demanded perfection; he reportedly ordered that each plant on the alien planet of Pandora be individually rendered, even though each one contained roughly one million polygons. No wonder it took nearly $300 million to produce Avatar.

Cameron’s goal was to create a film where the audience couldn’t tell what was animated and what was real. He succeeded. Now, the question is, “What’s next?”

What’s Next

Most people think that the animated rendering of humans hasn’t been perfected yet; Cameron’s 10-foot, blue, animated Na’vi aliens in Avatar were seen as an easier venture than rendering humans, but Cameron doesn’t think that was the case.

“If we had put the same energy into creating a human as we put into creating the Na’vi, it would have been 100% indistinguishable from reality,” Cameron told Entertainment Weekly. “The question is, why the hell would you do that? Why not just photograph the actor? Well, let’s say Clint Eastwood really wanted to do one last Dirty Harry movie looking the way he did in 1975. He could absolutely do it now. And that would be cool.”

Cameron has repeatedly emphasized that he doesn’t view computer animation as a threat to actors, but rather as a tool to empower and transform them.

And if that means we get to experience 1975 Clint Eastwood’s career again, well, that would just go ahead and make our day.

Read more: http://mashable.com/2012/07/09/animation-history-tech/

Oculus VR Focuses on Games, Not Hardware, at E3

The headset displays a fully immersive 3D experience that makes you feel like you are actually in the game.
Image: Mashable, Christina Ascani

The creators of the Oculus Rift virtual reality headset are spending E3 showing off games instead of new hardware, as the company looks at producing and publishing games for its platform.

The company showed off new titles from third-party developers that it was planning to publish, meaning it would provide financial and marketing support to games built for the Oculus Rift. It’s part of the company’s continued push to ensure quality games are available for the Oculus platform, even though the headset has no commercial release date yet.

The company most recently brought on Jason Rubin, co-founder of studio Naughty Dog, to handle its expansion into first-party content.

Three of the games being shown at E3 — all by third-party development teams — represented very different virtual reality experiences. One was Super Hot, a game currently on Kickstarter that experiments with the perception of time. When the player moves, time moves regularly; when they stand still, time slows to a crawl, allowing them to dodge bullets.
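
That rule is simple enough to sketch in a few lines. Below is a minimal, hypothetical Python version of a Super Hot-style update loop, assuming world time is scaled by the player’s current speed; the constants and names are invented, not taken from the actual game.

```python
from dataclasses import dataclass

CRAWL_SCALE = 0.05  # world speed while the player stands still (invented value)
FULL_SPEED = 5.0    # player speed at which world time runs normally (invented value)

def time_scale(player_speed: float) -> float:
    """Map the player's current speed to a world-time multiplier in [CRAWL_SCALE, 1]."""
    return max(CRAWL_SCALE, min(1.0, player_speed / FULL_SPEED))

@dataclass
class Bullet:
    x: float
    vx: float  # units per second of *world* time

def step(bullets, player_speed, real_dt):
    """Advance the world by real_dt seconds of wall-clock time."""
    dt = real_dt * time_scale(player_speed)
    for b in bullets:
        b.x += b.vx * dt

bullets = [Bullet(x=10.0, vx=-40.0)]
step(bullets, player_speed=0.0, real_dt=1.0)  # standing still: the bullet creeps 2 units
print(bullets[0].x)  # 8.0
step(bullets, player_speed=5.0, real_dt=1.0)  # sprinting: the bullet covers 40 units
print(bullets[0].x)  # -32.0
```

Everything in the world (enemies, bullets, physics) advances by the scaled clock, which is what lets a motionless player watch bullets inch past and step out of their way.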

A more whimsical virtual reality game was Lucky’s Tale, a third-person platformer in which players hover over the shoulder of a plucky fox. While virtual reality experiences usually offer a first-person camera view, Lucky’s Tale’s camera gave players a unique over-the-shoulder perspective.

On a darker note, Sega’s upcoming horror game Alien: Isolation announced virtual reality support at E3, and a playable demo was also on display. Players had to avoid being attacked by an alien on a derelict spaceship, armed only with a tracker that showed them the monster’s presence.

While Oculus VR may be ramping up publishing without a firm release date, Vice President of Product Nate Mitchell said the company wanted to minimize risk to developers by not promising a release date it couldn’t totally commit to; it won’t announce one until it is absolutely sure the time is right.

Mitchell did say that Facebook’s acquisition of the company in late March was instrumental in bringing on many hires, like Rubin, who could focus solely on what first-party development and publishing would mean inside of Oculus.

BONUS: This Oculus Rift Game Will Scare the Crap Out of You

Read more: http://mashable.com/2014/06/11/oculus-games-e3/

Indie Game ‘Contrast’ Turns the Shadows Into Your Playground

A child’s imagination may be more powerful than any game engine. Compulsion Games hopes to harness that inventiveness and mystery in its upcoming game Contrast.

Contrast puts you in the shoes of Dawn, the imaginary friend of young Didi. Dawn helps Didi cope with her hectic home life — both her parents are performers in a dark and twisted vaudevillian world in the 1920s.

Dawn, luckily, isn’t bound by the laws of physics, and has the ability to turn into a shadow to solve puzzles and explore the world. The game revolves around illuminating areas, and then traversing or manipulating the shadows that are created.
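
Compulsion hasn’t published how its engine handles this, but the core geometric trick of a shadow platformer is easy to sketch: cast rays from a light source past the ends of a blocking edge onto a wall, and treat the resulting shadow segment as solid ground. Here is a minimal 2D illustration in Python, with all names and numbers invented.

```python
# Toy 2D model: a point light, a blocking edge (two endpoints), and a vertical
# wall at x == wall_x. The blocker's shadow on the wall becomes a platform.

def project_onto_wall(light, point, wall_x):
    """Intersect the ray from `light` through `point` with the wall plane x == wall_x."""
    lx, ly = light
    px, py = point
    if px == lx:
        raise ValueError("point is directly above/below the light; ray never reaches the wall")
    t = (wall_x - lx) / (px - lx)
    if t < 1.0:
        raise ValueError("wall is not beyond the blocker as seen from the light")
    return (wall_x, ly + t * (py - ly))

def shadow_platform(light, edge, wall_x):
    """Shadow of a blocking edge on the wall, returned as a walkable segment."""
    return (project_onto_wall(light, edge[0], wall_x),
            project_onto_wall(light, edge[1], wall_x))

# A lamp at the origin, a carousel arm from (2, 1) to (2, 2), a wall at x = 6:
platform = shadow_platform(light=(0.0, 0.0), edge=((2.0, 1.0), (2.0, 2.0)), wall_x=6.0)
print(platform)  # ((6.0, 3.0), (6.0, 6.0))
```

Move the light and the same blocker casts a longer, shorter or differently angled shadow, which is exactly the kind of manipulation the game’s puzzles revolve around.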

Contrast is a puzzle-focused platformer, meaning the gameplay doesn’t revolve around combat. The conflict is generated by Didi’s family: her mother is a burlesque performer who works long hours, and her father is a struggling circus organizer. Compulsion’s PR and community manager Sam Abbott explains that the family influences Dawn’s appearance; she’s a caricature of a burlesque dancer.

The story was inspired by Guillermo del Toro’s dark Pan’s Labyrinth, Abbott said, a film in which imaginary beasts and fairies help a lonely young girl. Since Contrast is told through Dawn’s perspective and not Didi’s, the world revolves around Didi, and Dawn cannot see other characters except as silhouettes.

In Contrast‘s PAX demo, Dawn needs to escort Didi to the club where her mother is performing. She must manipulate shadows made by a carousel to climb to new heights, then later turn on spotlights to illuminate Didi’s mother in the theater.

Abbott said Contrast‘s puzzle-based gameplay focuses on teaching players the inner workings of the light and shadow worlds without holding their hands. Each puzzle builds on elements learned in the previous one.

Contrast will be out for the PlayStation 3, PlayStation 4, Xbox 360 and PC via Steam in late 2013.

Images: Compulsion Games

BONUS: See PAX Prime in Photos

Do Yourself a Favor: Ditch the March Madness Bracket

Skipping your March Madness bracket this year could make you as happy as these Michigan fans were at last year’s Final Four.
Image: Charlie Neibergall/Associated Press

Every spring, the same scourge rises anew, threatening to dull March Madness‘s glorious shine. We speak not of incompetent refs, nor of inexplicable seedings. We’re not even talking about the fact that the opening orgy of games conflicts with what is commonly referred to as “job,” or “work.”

No, we gather here today to stand up to a threat that’s much more ominous — and widespread: the March Madness bracket.

You know the drill, the one repeated every March: Study the seedings; break down the teams and possible opponents; massage the crystal ball; then predict everything that’s going to happen over the next three weeks. That’s all fine and dandy — after all, nothing gets you primed for March Madness like poring over those match-ups.

But then the actual action starts and everything goes to hell. Game-winning shots and poignant moments — the things that make college hoops so addictive in the first place — cease to matter. Actual basketball and the inspirational achievements that go with it become of secondary importance to messy scribblings on a printed-out bracket (or, more likely, semi-thoughtless clicks in a digital version).

Of course, this is a familiar predicament for NFL fans of a certain mindset. I can’t count the number of Sundays I’ve wanted to grab someone — often a dear friend I’d never typically seek conflict with — by the collar and scream: “I don’t care that you ‘own’ the Packers defense! That catch by Calvin Johnson was delivered from the heavens, dammit! Your eyes are open but you sure don’t see, man!”

Fantasy football and March Madness brackets dominated the sports landscape long before Instagram was even a twinkle in Kevin Systrom’s eye. But they’re born of a problem that has become increasingly relevant in the social media age: The thing itself is no longer the Thing; our reaction to the thing has become the Thing.

Just like the mind-blowing nature hike can only be as good as the filter we choose to share it with, the incredible March Madness buzzer-beater we know is coming can only be as important as its impact on our bracket. And that’s messed up.

Let’s face it: You’re not going to kick that Instagram habit anytime soon (and we won’t even mention Twitter). But this month, we’re faced with an opportunity to seize back our collective reality. We have an opportunity to, as the ever-sage Garth Algar once told Wayne, live in the now.

Skipping your March Madness bracket altogether is a big ask, I know. But if quitting cold turkey isn’t an option, please consider at least overall disengagement from your bracket picks. In fact, here’s an idea: Make your picks, put them away in a real or metaphorical drawer, then don’t look again until the actual tournament ends.

You’ll love your newfound lucidity. The people around you will love your Zen-like, total enjoyment of the game in front of you. March Madness will become that much madder, in the best way possible. The pull of the herd is tough to resist, but I have full confidence you can fight that sort of groupthink.

Now if you’ll excuse me, I’ve got a bracket to fill out. I see the path to higher basketball consciousness, but for now, I’m still stuck dreaming.

Let’s Talk About ‘Edge of Tomorrow’: It’s Great, But We Got Gripes

Tom Cruise in a scene (and another scene, and another) from "Edge of Tomorrow."
Image: Warner Bros.

Every week, Mashable presents “Let’s Talk About…”, a Monday-morning look back at the biggest WTF moment from the weekend’s most talked-about new movie. If you haven’t seen the film, be warned: This doesn’t just contain spoilers — it’s ALL spoilers.


This week: Let’s talk about Edge of Tomorrow.

Tom Cruise may have gotten the box office tar knocked out of him by a sickly teen girl, but Edge of Tomorrow is still a rock-solid success for the international A-lister.

More importantly, Doug Liman’s crafty and clever film (based on a novel by Hiroshi Sakurazaka and adapted by Christopher McQuarrie, Jez and John-Henry Butterworth) is a terrific bit of blockbuster summer cinema. The action is terrific, the performances have zip and the idea — even if it really is nothing but Groundhog Day Space War — feels fresh.

Edge of Tomorrow is such a delight that we almost feel bad mentioning the aspects we can’t wrap our heads around. And yet, that’s what we’re going to do. Keep in mind, this is the type of movie that could have had dozens and dozens of infuriating story holes and inconsistencies. The fact that we only have five things to gripe about is, unquestionably, a major win for #TeamTomorrow.

1. This guy look familiar?

In a surgical strike of opening exposition, we learn all about the attack of the Mimics and humanity’s fight for survival. In addition to quick glimpses of known individuals (was that President Hillary Clinton?), we meet three important personalities: General Brigham (Brendan Gleeson), the leader of coalition forces; Rita Vrataski (Emily Blunt), the “Angel of Verdun” and/or “Full Metal Bitch”; and Major William Cage (Cruise), the spin doctor tasked with disseminating the coalition’s talking points.

I’ll allow that Bill Paxton’s Sergeant Farrell, warned of Cage’s “desertion” and tricks, would stay stone-faced, but wouldn’t someone at the base say, “Hey, that’s the dude from Meet the Press?”

2. Where the non-white women at?

Emily Blunt is as badass as it comes in Edge of Tomorrow. She uses a helicopter blade as a sword, for heaven’s sake. She’s gorgeous, with shades of Catherine Zeta-Jones in Entrapment in that “worming up from the floor” move we see repeated 600 times, and she’s also quick-witted. We see the gears turning as she and Cruise try to use the movie’s time loops to defeat the bad guys.

Elsewhere in the camp is Nance (Charlotte Riley), a crazy-eyed warrior on Sergeant Farrell’s team. (Think of Amy Winehouse in a mech suit.) Clearly, this is a modern-day army in which women fight alongside men. But weirdly, these are the only two women we see. Like, anywhere. I think maybe a secretary brings Gleeson a folder. It’s odd that a movie that does such a good job of incorporating women into major roles would have such a jarring lack of representation elsewhere.

3. Killing me bluntly

There aren’t multiple timelines in Edge of Tomorrow; there are restarted days: loops where one reality winks out of existence and starts over depending on whether or not Tom Cruise survives.

Emily Blunt’s character knows this because it used to happen to her. (The explanation: if an “alpha,” a rare member of the alien breed, dies, the “omega,” the Queen Bee, will restart the day. If, as the “alpha” dies, it gets its blood on you, you inherit this rebooting ability, too. You lose this ability if you get a blood transfusion; lucky for the Mimics, I suppose.)

Anyway, to Cruise’s character, dying is a hassle (and likely a little painful), but not a big deal. He will always wake up again with a drill sergeant’s boot in his back. (Whether or not the Cruise experiencing these loops needs to sleep or eat or even age isn’t really addressed.)

Here’s the thing, though. It is curious to think that Blunt, the only other person who can really understand what is happening, has no qualms about killing Cruise. Not because she should be worried about a court martial, but because she knows that in doing so she is, effectively, erasing her own life.

When Cruise dies, her experience ceases to exist. She’s basically killing herself, because from her point of view, reality will just vanish. A version of herself will live again in the next cycle, and she and Cruise will be able to steer that destiny — but the one right here, right now, will blink out as soon as she pulls that trigger.

She’s a badass, but she’s also a survivor. It’s interesting that she’s so willing to start over each time.

4. Get to the choppah!

At the farmhouse, Cruise tells Blunt that no matter how many times they try, she ends up dead when they fire up the helicopter. Eventually, he decides to go it alone to the dam in Germany, without even approaching her.

I suppose the idea is that near-infinite trial-and-error eventually gets him past that hurdle. Doesn’t quite make sense. If he’s able to do it alone, then why wasn’t he able to do it with Blunt by his side? In fact, wouldn’t it be easier with two fighting bodies?

A possible answer, and one that I like, is that Cruise simply couldn’t take the emotional impact of watching this person he now cares for die over and over again. I’m willing to take that, and I’m also willing to accept that with all his training, Cruise was able to get off the beach by himself — even though at first, he needed Blunt as an extra set of eyes and ears. But the movie does sort of blaze past this point with some quick cutting. My guess is that Liman and company were hoping no one would notice.

5. And, in the end, the love you take is equal to the love you make

Edge of Tomorrow is so great that I can hardly get annoyed by the ending. But I don’t think there’s anything in the text that explains what happens other than “Hey, that’s how it works.” Same as Groundhog Day.

Killing an “Alpha” means a restart on the day, and if, by some glitch, you get the blood on yourself, you can do it, too. Okay, sounds good.

Killing an “Omega” means, um, what exactly? That things work out nicely and the good guys win, but with enough vagueness that a sequel could be ready to go.

Maybe this is the perfect ending.

Jordan Hoffman

Jordan Hoffman is a writer and critic in New York City whose work appears in the New York Daily News, VanityFair.com, ScreenCrush and Times of Israel. Follow him on Twitter at @JHoffman.

Read more: http://mashable.com/2014/06/09/edge-of-tomorrow-movie-review/

Internet TV Isn’t Ready to Replace Cable Yet

Roku founder Anthony Wood runs a startup that, along with companies like Apple and Microsoft, sells hardware that’s bringing web video to home television screens. It’s no wonder his nine-year-old daughter prefers to watch her favorite Disney shows on Netflix at her whim, rather than surf Disney’s own 24-hour cable channel.

This is one example of how traditional TV service providers are losing their hold on America’s eyeballs. Internet-connected TVs are becoming the norm on store shelves, and today represent 12 percent of those in people’s homes, according to a recent survey by NPD Group. These TVs, and devices like Roku’s, make it easier for viewers to cut the cord on their expensive cable bills, and instead simply watch content provided by companies including Netflix, Hulu, Apple, Amazon and Google on their big home screens.

Yet Wood hasn’t canceled his family’s TV service, and neither have the majority of his customers. In fact, several factors may make “cord-cutting” slower than anticipated.

Wood cited statistics at the Next TV Summit, held recently in San Francisco, showing that about 35 percent of Roku’s three-million-plus set-top box owners, who have access to 600 free and paid content apps, wind up either ending or reducing their pay TV packages. But 10 percent were never cable or satellite subscribers in the first place. And there are still more than 100 million cable and satellite subscribers in the U.S.

As the Roku figures suggest, cord-cutting is happening, so far, on a relatively small scale. For example, Nielsen reported that the number of households that have only broadband Internet and free broadcast channels increased by 631,000 in 2011. Meanwhile, 1.5 million homes ended TV service from cable, satellite, or telecommunications providers that same year.

In other words, the massive wave of migration is not materializing as fast as many Internet companies might hope, or as fast as cable companies and networks may fear. “So far, it doesn’t seem like it’s the tipping point,” says Fox Networks distribution president Michael Hopkins.

One major reason is that most Internet platforms don’t yet provide crucial live content — such as news and sports — or the original programming that draws viewers in (rather than reruns or held-back content).

That is starting to change, though, and it is likely that the cord-cutting trend will continue to gradually pick up steam. Netflix is now developing its own exclusive and original content. Meanwhile, Microsoft recently paid PBS to shoot 50 percent more hours of Sesame Street content, which it then developed into an early example of “interactive” TV for Xbox Kinect.

And Google-owned YouTube, which has sports, news, and entertainment divisions just like a network broadcaster, not long ago invested $100 million to seed the creation of high-quality content intended for cable-like channels. So far it is pleased with the results — 20 channels averaging more than one million monthly views, and 25 with some 100,000 “subscribers,” according to Alex Carloss, YouTube’s head of entertainment.

The cable and satellite TV incumbents are, unsurprisingly, determined to retain their subscribers, and at the Next TV conference, some speakers predicted the incumbents will feel pressure to give customers more freedom to pick and choose which channels they want to access instead of paying for a large bundle.

In 2009, for example, Time Warner and Comcast launched “TV Everywhere,” an authentication technology that allows them to make their shows easily available on any kind of screen to paying subscribers. Broadcast TV, too, is aiming to offer access to live content on other devices.

Wood believes it won’t be long before an “incumbent” launches Internet-based versions of its cable packages. “A lot of this is about getting access to the content,” says Wood. “That’s a business that requires complicated negotiations, requires a lot of money, and I think, a lot of experience.” HBO recently launched a channel for the Web, HBO Go, but customers must already subscribe to HBO through a cable provider.

As more people stream TV content on their home screens, infrastructure limitations could become a factor. Will Law, Akamai Technologies’ principal architect in its media division, says if there were a sudden spike in TV streaming far above today’s levels, “there would be massive congestion collapse.”

Image courtesy of Flickr, joannapoe

This article originally published at MIT Technology Review.

Read more: http://mashable.com/2012/10/01/internet-tv-replace-cable/

Zynga in Slumps-Ville, But Social Games Are Still Hot

Social games have been synonymous with Zynga, the company that made FarmVille a sometimes maddening fixture on Facebook walls around the world. So given Zynga’s ongoing decline—the company’s stock has plummeted 75% since its December IPO, employees are fleeing, and its latest game titles are floundering—it’s easy to think social games are a quickly fading fad.

But plenty of investors and game developers are still writing checks and apps, betting that the average person isn’t bored with these kinds of games. The market for social and casual games is indeed still growing, but like many web applications, these games are moving to mobile devices—a transition that has brought new challenges.

“I don’t think they [Zynga] are a guide to the future,” says Andrew Marsh, a San Francisco-based game developer whose “indie” studio Fifth Column Games recently partnered with the rapidly expanding Japanese mobile-gaming company Gree.

While the investment bank Digi-Capital recently called the Zynga IPO the peak for “Social Games 1.0” investments, it also said investors are pouring money into games for smartphones and tablets. More than half of all game playing is on such devices, NPD Group, another research firm, recently estimated.

Yet major challenges remain for companies hoping to make their games more than a fly-by-night hit on mobile devices. One longtime game developer, Tadhg Kelly, calls it “platform amnesia” — people get excited about the same old games presented on top of new technologies, but that excitement wears out quickly.

And mobile-game companies have struggled to gain and keep users, especially ones willing to spend money. Zynga experienced this problem firsthand when it acquired OMGpop, the company behind the hit game Draw Something, earlier this year for more than $180 million, only to see the game quickly fade in popularity.

Gree, founded in 2004, is angling to succeed by creating a platform for smartphone games and courting developers like Marsh to create the next big hits. Where Zynga has struggled because it has largely depended on being seen in Facebook news feeds, Gree has created a specialized Facebook-like social network for gaming on mobile devices. Like DeNA, another Japanese company, it has found most of its current users in Asia, but it is seeking worldwide dominance with a recent spending spree and string of acquisitions.

Zynga, too, is trying to create an independent network. If it succeeds, this could help solve one problem that plagues mobile-game developers: how to get people to discover games among millions of apps in Google’s and Apple’s stores.

On Facebook, people naturally return day after day, but users aren’t as strongly inclined to open an app, says venture capitalist Charles Hudson, a partner with SoftTech VC who previously founded a company acquired by Zynga. As it is, since social elements are a less natural feature on smartphones, mobile-game companies are being forced to pour money into acquiring new users through advertising and cross-promotion. “Most mobile games are still discovered through the app store, and that’s not sustainable,” Marsh says.

Social games themselves may also need to become more creative as they migrate to mobile devices. To keep people playing, games can’t afford to get boring, says Kelly, who calls Draw Something a novelty that had no depth. Console game companies, such as Sony and Electronic Arts, and companies like Kixeye, which target more serious gamers, are starting to experiment with more casual social games for mobile devices.

Innovation could occur as developers take advantage of features particular to phones. Already, some are using feedback from touch interfaces and adapting to shorter gaming sessions—a couple of minutes while waiting for the train to come, for example.

Graphics on mobile devices are also nearing the quality of those on game consoles, especially in the case of tablets, and developers are starting to incorporate them into even simple social games. One example is NaturalMotion’s CSR Racing, a popular 3D drag-racing game that Apple CEO Tim Cook showed off at his company’s developer conference this year.

And while no company yet has found a lasting hit game that uses phone technologies such as front-facing cameras, motion sensors or GPS location, Hudson says that could be coming as more startups experiment.

This article originally published at MIT Technology Review.

Read more: http://mashable.com/2012/10/18/zynga-social-games-growth/

10 Weird iPhone Games You’ve Got to Try

12 Memorable Mashable Videos of 2012

We’ve had a lot of interesting moments at Mashable in the past year — and luckily, we also happened to be filming many of them. In 2012, we followed an Internet celebrity singing with a turkey leg, talked to folks who lined up for the iPhone 5 for a week, and tracked down a filmmaker who made documentaries during the Arab Spring. We also peered into Martha Stewart’s bag and survived a takeover attempt from Conan O’Brien.

1. Conan O’Brien Buys Mashable, Ousts Pete Cashmore as CEO

In an April Fools’ Day prank, TeamCoco and Mashable got together to create a zany story line in which Conan buys the site and ousts founder and CEO Pete Cashmore. The Conan O’Brien takeover launched at midnight on April 1 in a 24-hour story arc that shows O’Brien’s attempts to make paper-based tweeting happen.

2. Mashable Goes ‘Behind the Launch’ With New Documentary Series

We follow the journey of Vungle, an in-app advertising startup, as they court investors, hire new employees, and finally launch their company. Catch up on all fourteen episodes here.

3. How to Learn a Language on YouTube

Reporter Neha Prakash embarks on an attempt to learn Russian from YouTube tutorials, which culminates in a trip to a Russian restaurant as she tries to order from the fluent waitress. Her favorite word? Спасибо, which means “thank you.”

4. What’s in Martha Stewart’s Tech Bag?

America’s pioneering DIY expert has three iPads. What else does she carry in her seemingly endless bag of gadgets?

5. Watch Calexico Play Two Songs Live At Mashable

Joey Burns and John Convertino of Calexico fill Mashable HQ with the distinctive sound of the American southwest. They talk about their new album, Algiers, being a musician in the digital era, and sticking to an analog recording experience.

6. What Does Siri Know About the iPhone 5?

Tech editor Pete Pachal grills Siri for inside information about the phone, only to get coy replies.

7. Who Lined Up For the iPhone 5?

Meet the colorful characters who camped out at the Apple flagship store for a week.

8. iPhone 5 Line: Unnecessary But Fun

We do an early morning stakeout at the Apple Store right before it opens its doors.

9. This Is What Happens When Comic Con Attendees Try to Impersonate Bane

At the New York Comic Con, we asked everyone from Wolverine to Batman himself to demonstrate their finest Bane super-villain vocal talents, with mixed results.

10. This Man Fights in and Films the Last Gasps of the Arab Spring

Activist and filmmaker Matthew VanDyke spent years traversing North Africa and the Middle East by motorcycle, befriending Libyans and other travelers along the way. We talk to him about his work in this two-part series.

11. Nicole Westbrook Talks Turkey With New Yorkers

Watch 12-year-old YouTube viral video star Nicole Westbrook as she interviews people on the street with a turkey leg. You can’t make this stuff up.

12. Internet Feline Luminary Talks Big Data and Facebook Stock

Watercooler Editor Annie Colbert picks the brain of one of the Internet’s foremost felines, Grumpy Cat.

BONUS: Mashable Isn’t the Only 7-Year-Old Digital Expert

When Mashable turned seven years old, it got us thinking: How does a 7-year-old media company compare with actual 7-year-olds? We talked to kids from 92nd Street Y’s Camp Yomi to discuss Facebook’s valuation, LinkedIn tips and the fate of the BlackBerry.

Read more: http://mashable.com/2012/12/06/12-memorable-videos/