Why Computer Animation Looks So Darn Real

Walt Disney once said, “Animation can explain whatever the mind of man can conceive.” For Disney, this was animation’s magic — its power to bring imagination to life.

Disney died in 1966, 11 years before computer animation’s heralded debut in Star Wars, and he likely never imagined how life-like animation would become, or how pervasively it would be used in Hollywood. As viewers, we now hardly blink when we see a fully rendered alien planet or a teddy bear working the grocery store check-out counter.

Animation has largely shed its reputation as a medium for children; it’s been used far too successfully in major films to remain confined to kids. After all, who hasn’t had the experience of going to an animated film and finding the theatre packed with adults? Who doesn’t secretly remember the moment they were a little turned on during Avatar?

Considering animation’s rapid evolution, it sometimes feels like we’re just weeks away from Drake and Chris Brown settling their beef via a battle of photorealistic holograms.

So how did we get here? How did computer animation come to look so darn real?

From the MoMA to Casper

Computer animation debuted in 1967 in Belgium, and soon after at the MoMA, with Hummingbird, a ten-minute film by Charles Csuri and James Shaffer. The film depicted a line drawing of a bird programmed with realistic movements and was shown to a high-art crowd, who probably weren’t fantasizing about the medium’s potential to create a sassy talking donkey.

In 1972, Ed Catmull, future co-founder of Pixar, created the first 3D computer-animated human hand and face, which was incorporated into the 1976 sci-fi thriller Futureworld. Computer animation didn’t capture the mainstream’s attention, though, until the classic trench run sequence in Star Wars, which used 3D wireframe graphics for the first time. It was the product of a lot of guesswork and brilliance, particularly by animator Larry Cuba. If you have 10 minutes to kill, this old-school video of Cuba explaining how they pulled it off is fascinating:

The late seventies were a time, though, when innovation didn’t happen at the breakneck pace we’re accustomed to today. The next big moment for computer animation didn’t come until 1984, when a young member of George Lucas’ Lucasfilm team, John Lasseter, spearheaded a one-minute CGI film called The Adventures of André and Wally B., which pioneered the use of super-curved shapes to create fluid character movement, a staple of future films by DreamWorks and Pixar, where Lasseter would serve as CCO.

1986’s Labyrinth introduced the first 3D animal (an owl in the opening sequence), and 1991’s Terminator 2: Judgment Day introduced the first realistic human movements by a CGI character, not to mention Arnold Schwarzenegger’s obsession with voter demographics.

In 1993, computer animation’s reputation soared with the release of Jurassic Park and its incredibly realistic dinosaurs. The creatures sent adolescent boys into fits of delight, even though the film only used computer animated dinosaurs for four of the fourteen minutes they were on screen.

Then came 1995 and the release of Casper, which introduced the first CGI protagonist to interact realistically with live actors, though that interaction was predominantly Christina Ricci trying to seduce a ghost.

But Casper was just a warm-up for Toy Story.

The Toy Story and Shrek Era

Six months after Casper, the first feature-length CGI film was released: Toy Story. It was an incredible four-year undertaking by Pixar’s John Lasseter and his team; the film was 81 times longer than Lasseter’s first computer animated film a decade before. They faced two daunting challenges: a relatively tiny $30 million budget and a small, inexperienced team. Of the 27 animators, half were rumored to have been borderline computer illiterate when production began.

“If we’d known how small our budget and our crew was,” remembered writer Peter Docter, “we probably would have been scared out of our gourds. But we didn’t, so it just felt like we were having a good time.”

They thrived. The animators began by creating clay or computer-drawn models of the characters; once they had the models, they coded articulation and motion controls so that the characters could do things like run, jump and laugh. This was all done with the help of Menv, a modeling environment tool Pixar had been building for nine years. Menv’s models proved incredibly complex — the protagonist, Woody, required 723 motion controls. It was a strain on man and machine alike; it took 800,000 machine hours to complete the film, and it took each animator a week to successfully sync an 8-second shot.
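
To get a feel for what a “motion control” is, here is a minimal, purely illustrative sketch; the control names, ranges and API are invented for the example and are not Pixar’s actual Menv interface. The idea: a rig is a set of named articulation variables clamped to legal ranges, and an in-between frame is just an interpolation of keyframe poses.

```python
def lerp(a, b, t):
    """Linearly interpolate between two control values."""
    return a + (b - a) * t

class Rig:
    """A character as a bag of named, range-limited motion controls."""
    def __init__(self, controls):
        self.controls = controls  # {name: (min_value, max_value)}

    def clamp(self, name, value):
        lo, hi = self.controls[name]
        return max(lo, min(hi, value))

    def blend(self, pose_a, pose_b, t):
        """Compute the in-between of two keyframe poses at t in [0, 1]."""
        return {name: self.clamp(name, lerp(pose_a[name], pose_b[name], t))
                for name in self.controls}

# A toy “Woody” with three of his 723 controls.
woody = Rig({"jaw_open": (0.0, 1.0),
             "brow_raise": (-1.0, 1.0),
             "arm_swing": (-90.0, 90.0)})
rest  = {"jaw_open": 0.0, "brow_raise": 0.0, "arm_swing": 0.0}
shout = {"jaw_open": 1.0, "brow_raise": 0.8, "arm_swing": 45.0}
print(woody.blend(rest, shout, 0.5))  # halfway into the shout
```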

“There are more PhDs working on this film than any other in movie history,” Pixar co-founder Steve Jobs told Wired at the time. “And yet you don’t need to know a thing about technology to love it.”

Jobs was right. Audiences loved the film not just because of the impressive animation and three-dimensional realism, but also because of a superb script and voice work by Tom Hanks, Tim Allen and Don Rickles. It sparked computer-animated films’ reputation for pairing stunning visuals with compelling stories. That reputation was key, as computer animation’s evolution hinged on the willingness of studios to invest in it.

In 1998, DreamWorks’ Antz and Pixar’s A Bug’s Life maintained computer animation’s stellar reputation, while briefly terrorizing countless entomophobic parents. The flood scene in Antz received widespread praise, particularly from those who couldn’t wait for the bugs to die.

Computer animation’s next breakthrough came in 2001 with Shrek. Shrek delved into true world building; it included 36 separate in-film locations, more than any CGI feature before it. DreamWorks also made a huge advancement by taking the facial muscle rendering software it used in Antz and applying it to the whole body of Shrek’s characters.

“If you pay attention to Shrek when he talks, you see that when he opens his jaw, he forms a double chin,” supervising animator Raman Hui explained, “because we have the fat and the muscles underneath. That kind of detail took us a long time to get right.”

Shrek brought a new age of realism. Hair, skin and clothes flowed naturally in the elements; the challenge of making Donkey’s fur flow smoothly helped animators render the realistic motion of grass, moss and beards (and other things hipsters like). Shrek grossed nearly a half billion dollars, won the first-ever Academy Award for Best Animated Feature, and established DreamWorks as an animation powerhouse alongside Disney-Pixar.

Advancements in Photorealism and Live Action

In computer animation, there are two kinds of “realness.” First, there’s the “realness” of Shrek, where the animation is still stylized and doesn’t strive for photorealism. Then, there’s photorealistic animation, which aims to make computer animation indistinguishable from live action.

The same year Shrek was released also saw the release of Final Fantasy: The Spirits Within, the first photorealistic, computer-animated feature film. It was filmed using motion-capture technology, which translates recorded movements into animation.

1,327 live action scenes were filmed to make the final animated product. Though the film flopped, the photorealistic visuals were a smash success. The film’s protagonist, Aki Ross, made the cover of Maxim and was the only fictional character to make its list of “Top 100 Sexiest Women Ever.” Aki was a painstaking advancement in photorealistic animation; each of her 60,000 hairs was individually animated, and she was made up of about 400,000 polygons. Entertainment Weekly raved that, “Calling this action heroine a cartoon would be like calling a Rembrandt a doodle,” while naming Aki Ross to its “It” girl list.

The advancements in photorealism and motion-capture animation kept coming. In 2002’s The Lord of the Rings: The Two Towers, Gollum was the first motion-capture character to interact directly with live-action characters. Two years later, Tom Hanks’ The Polar Express ushered motion-capture films into the mainstream.

Photorealistic animation’s quantum leap came in 2009 with Avatar, a project James Cameron had delayed nearly a decade to allow the technology to catch up to his vision. Cameron commissioned the creation of a camera that recorded facial expressions of actors for animators to use later, allowing for a perfect syncing of live action with animation. Cameron demanded perfection; he reportedly ordered that each plant on the alien planet of Pandora be individually rendered, even though each one contained roughly one million polygons. No wonder it took nearly $300 million to produce Avatar.

Cameron’s goal was to create a film where the audience couldn’t tell what was animated and what was real. He succeeded. Now, the question is, “What’s next?”

What’s Next

Most people think that the animated rendering of humans hasn’t been perfected yet; Cameron’s 10-foot blue animated Na’vi aliens in Avatar were seen as an easier venture than rendering humans. But Cameron doesn’t think that was the case.

“If we had put the same energy into creating a human as we put into creating the Na’vi, it would have been 100% indistinguishable from reality,” Cameron told Entertainment Weekly. “The question is, why the hell would you do that? Why not just photograph the actor? Well, let’s say Clint Eastwood really wanted to do one last Dirty Harry movie looking the way he did in 1975. He could absolutely do it now. And that would be cool.”

Cameron has repeatedly emphasized that he doesn’t view computer animation as a threat to actors, but rather as a tool to empower and transform them.

And if that means we get to experience 1975 Clint Eastwood’s career again, well, that would just go ahead and make our day.

Read more: http://mashable.com/2012/07/09/animation-history-tech/

Screens: Coming to a City Block Near You

Many of us spend several hours of the day face-down in our laptops. We navigate our cities and communities from the control panels of our smartphones. And at the end of the day, we cozy up with our flat screens or e-readers.

Although some people fight mankind’s preoccupation with and dependency on screen technology, it’s safe to say the jig is up. We’re hooked.

And today’s major cities have begun not only to accept our gadget obsession, but to encourage it.

It doesn’t matter where you travel these days. Where there’s electricity, there will be screens — waiting, encouraging and urging your interaction. Head out on the highway (so to speak) and you’ll encounter digital billboards alternating advertisements in time with the flow of traffic. Take a brave trip to New York City’s Times Square, where you can interact with 40-foot-tall augmented reality LED displays. Hop in a TV-outfitted taxi and head out shopping, where store clerks await with mobile credit card readers attached to their iPads.

In fact, digital marketing strategies have proved so successful that cities are integrating like-minded technology into their very infrastructures, whether through information services, artistic programs or transportation improvements.

No matter how long you’ve lived in a community, it’s next to impossible to memorize every bus route, subway stop and train schedule. And let’s not even get started on traffic detours.

Companies like Urbanscale aim to seamlessly integrate city services and information into interactive displays throughout cities. In partnership with Nordkapp, Urbanscale developed the concept for Urbanflow touchscreen stations, which look like giant smartphones and beckon city dwellers and tourists with targeted city maps. But they’re far from limited to walking directions: the stations share hyperlocal services and ambient data, such as traffic density and air quality reports. Local experts can even contribute their own input and knowledge of the surrounding area, making for a rich digital stockpile of up-to-date information.

While solutions like Urbanflow provide information for a wide range of location-specific issues, many cities have opted for a more targeted approach: improvements in transportation.

Developed by the MIT SENSEable City Lab, EyeStop represents the cutting edge in “smart urban furniture.” The concept looks like a futuristic bus stop, complete with efficient, easy-to-read e-ink message boards, weather alerts and even email access. Powered by sunlight, the unit’s environmental sensors would also detect air pollutants and weather changes. Plus, the EyeStop glows at different intensities as nearby buses approach.

Prudence Robinson, partner strategist and research fellow at SENSEable City Lab, explains why the team chose certain design features for the EyeStop. “Parametric design has been foreseen so that every shelter perfectly fits its site,” she says, “maximizing sunlight exposure for photovoltaic cells and providing adequate shading to the users.”

While some cities are implementing completely new and innovative systems, others are looking to upgrade to intuitive tablet technology already ubiquitous in everyday life. New York City launched a pilot program to replace 250 nearly obsolete pay phones with tablet screens that provide information on local attractions, city maps, public transit updates and even Wi-Fi.

And mobile credit card payment service Square proposed that the New York City Taxi & Limousine Commission embed iPads into 30 of its cabs. Not surprisingly, the tablets would also be equipped with Square technology, which would enable passengers to pay with credit card, sign the screen with their fingers and even email the receipt to themselves.

But screen technology doesn’t always have to be strictly utilitarian. A huge priority for many cities is public art that combines aesthetics and usefulness.

Take MIT’s Light Bridge Project, composed of Panasonic Electric Works’ NaPiOn infrared motion/proximity sensors. The sensors activate colorful LED lights that interact with pedestrian movement. Depending on the type and amount of traffic, the lights alternate between different programs of patterns and colors, using proximity sensors, cameras, buttons, microphones and mobile phones. The project’s aim is to marry traditional lighting concepts with reactive urban screen solutions.
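
As a rough illustration of the kind of logic such an installation runs (the thresholds and program names below are invented for the example, not the Light Bridge team’s), mapping sensor activity to a lighting program can be as simple as:

```python
def choose_program(active_sensors, total_sensors):
    """Pick a lighting program from the fraction of sensors seeing motion."""
    density = active_sensors / total_sensors
    if density == 0:
        return "idle_shimmer"       # nobody around: slow ambient pattern
    elif density < 0.3:
        return "follow_pedestrian"  # a few walkers: lights trail each person
    else:
        return "crowd_wave"         # heavy traffic: one synchronized wave

for active in (0, 3, 12):
    print(active, "->", choose_program(active, 20))
```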

Increasingly, artists are also finding inspiration in digital media. At this year’s Philadelphia International Flower Show, creative media agency Klip Collective partnered with GMR Design to design and build an ethereal “wave wall,” essentially a giant sloping dome of screens. On it, they projected a “calming display of undulating projection waves of sea creatures and flower blossoms.” Nearby, a Hawaiian temple featured multidimensional video-mapped animations that taught curious visitors about Pele, Hawaii’s fire goddess.

But what happens when an artist requires a colossal canvas? (No, we’re not talking murals.) Increasingly, multimedia artists are turning to available city infrastructure to project their visions. And they’re not thinking small, that’s for sure.

In New York City, artists are using video mapping technology to project multidimensional scenes and characters onto skyscrapers, often using nothing more than a laptop, a portable generator and a projector. When passers-by spot giant dancing monkeys on the side of a wall, they instantly react — and sometimes interact, mimicking the animated movements. Furthermore, the contours and crannies of a building are far from a hindrance — they actually contribute to the 3D effect, as if an image were leaping off the “screen.”

Video mapping and projection technology are mobilizing large groups of people to get to know their surroundings. Some installations even encourage spectators to interact with these large-scale screens as if they were games. People in Lyon, France, celebrate the Festival of Lights with an installation called “The Urban Flipper,” a type of digital graffiti which, when projected on the side of a theater, creates a giant, interactive game of pinball.

In the Netherlands, 3D mapping company NuFormer debuted what it calls “mocapping,” a combination of 3D video mapping projection and live motion-capture technology. It projects animated light onto a rectangular building, effectively transforming the structure into a futuristic, spaceship-like scene. What’s more, the character in the scene asks questions of and responds to spectators, so each performance is different.

Some video mapping art even seeks to change the perception of architecture itself, as if a building were made of hundreds of moving television screens. The following video shows how design collective URBANSCREEN created optical illusions in the “sails” of the Sydney Opera House. Motion graphics are projected onto the white surfaces, which dimple with movement like actual sails. It was the Vivid Sydney festival’s most public event, inspiring attendees and visitors citywide.

Whether to inspire or educate, cities around the world are implementing smart screens for tech-eager residents. Hopefully, they’ll encourage us to take a breather from our self-isolating smartphones and tablets for a moment to interact with the communities and residents around us.

Have you encountered public screens, whether introduced by city governments or artists? How is your city welcoming the latest in responsive screen technology?

Read more: http://mashable.com/2012/09/12/urban-screens/

Google to Launch New Devices, Android 4.2 at Oct. 29 Event

Google will unveil several new devices and a software update at its scheduled Oct. 29 press event, according to a company video leaked from an all-hands meeting.

The Next Web is reporting Google has distributed an internal video that details and confirms speculations about what might be revealed at the upcoming event.

The video reportedly discusses the launch of a 32GB version of the Nexus 7 tablet, as well as one with 3G support. It also indicates Google is working with manufacturer Samsung to release a 10-inch tablet called “Nexus 10” that will run Android 4.2 (“Key Lime Pie”), and a Nexus smartphone manufactured by LG.

Meanwhile, the new Android 4.2 mobile operating system will include a panoramic camera option and “tablet sharing” capabilities, which would allow more than one user to access the device with their own set of email and apps, similar to how a family or business can switch between user settings on a Windows computer.

Earlier this week, Google sent invitations to the press for an Android event to be held in New York City. Although the invitation didn’t detail what might occur, the tagline — “the playground is open” — suggests it will have to do with Google Play, the company’s newly rebranded Android Market.

The news came as Microsoft prepares for its Windows Phone 8 launch event, which will also be held on Oct. 29 — and Apple gears up to unveil its rumored 7.85-inch iPad on Tuesday, Oct. 23.

Google’s new Samsung tablet is reportedly being developed under the codename “Manta.” The device is expected to have a 2560×1600 pixel resolution at 300ppi, which is greater than the iPad’s 264ppi.
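
Those pixel-density figures fall straight out of resolution and screen size. A quick sketch of the standard calculation, assuming the Nexus 10’s reported 10-inch diagonal and the third-generation iPad’s 2048×1536 panel at 9.7 inches:

```python
from math import hypot

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count divided by diagonal inches."""
    return hypot(width_px, height_px) / diagonal_inches

print(round(ppi(2560, 1600, 10.0)))  # "Manta" at 10 inches: ~302 ppi
print(round(ppi(2048, 1536, 9.7)))   # third-generation iPad: ~264 ppi
```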

Meanwhile, the 4.7-inch Nexus smartphone manufactured by LG is said to tout a quad-core 1.5 GHz Qualcomm APQ8064 Snapdragon processor, a 1280×768 display, 2GB of RAM and 16GB storage.

These Glasses Let You Play in 3D Virtual Worlds

Despite the endless gaming and interactive potential of augmented reality, the technology has been moving slowly in terms of widespread awareness and adoption. But a new system called castAR aims to push augmented reality into the mainstream, starting with a Kickstarter campaign that launched Monday.

Founded by veteran developers and former Valve employees Jeri Ellsworth and Rick Johnson, Washington-based company Technical Illusions is offering a product that delivers both augmented-reality and virtual-reality experiences.

First introduced in May as a prototype, the castAR system is centered on a pair of glasses that house two micro-projectors, one over each lens. Each projector receives its video stream via an HDMI connection, then beams a portion of a 3D image onto a flat surface made of retro-reflective sheeting material.

Situated between the two lenses is a small camera that scans the surface for infrared markers. This setup allows the castAR to accurately track your head movements in relation to the holographic representations on the surface.
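
Mathematically, that tracking step is a classic perspective-n-point (PnP) problem: given the known physical layout of the markers on the surface and where they appear in the camera image, solve for the camera’s (and thus the head’s) pose. A hedged sketch using OpenCV, with a made-up marker layout, detections and camera intrinsics; this is not Technical Illusions’ code:

```python
import numpy as np
import cv2

# Marker positions on the surface, in millimeters (illustrative layout).
surface_markers = np.array([[0, 0, 0], [400, 0, 0],
                            [400, 300, 0], [0, 300, 0]], dtype=np.float32)

# Where the camera saw those markers, in pixels (made-up detections).
image_points = np.array([[210, 310], [460, 325],
                         [455, 495], [205, 480]], dtype=np.float32)

# Assumed pinhole intrinsics for the tracking camera.
camera_matrix = np.array([[700.0, 0.0, 320.0],
                          [0.0, 700.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(4)  # pretend the lens is distortion-free

ok, rvec, tvec = cv2.solvePnP(surface_markers, image_points,
                              camera_matrix, dist_coeffs)
print("camera pose relative to the surface (mm):", tvec.ravel())
```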

The product also comes with a clip-on attachment that allows the wearer to experience private augmented reality, layering virtual objects onto the real world, or virtual reality, during which all the imagery you see is computer-generated. Also included is a device called a Magic Wand that serves as a 3D input device and joystick.

Some of the potential applications for the castAR system include board games, flight simulators and first-person shooters; but the developers believe that it could also be used for interactive presentations in business.

While many companies have promised to deliver impressive augmented-reality experiences, video of the commercial version of the castAR (above) is impressive. “It’s gonna deliver on the dream of the holodeck,” Bre Pettis, CEO of MakerBot, said in the video.

For $355, early adopters can get their hands on the entire package of components, which includes the castAR glasses, the retro-reflective surface, the Magic Wand and the AR and VR clip-on. There are also several other packages offered at lower prices for those only looking to try the basics of the system.

Launched with a goal of $400,000, the team’s Kickstarter campaign has already earned over $210,000 as of this writing. Those who order the device now can expect to get it next September, according to Technical Illusions.

Image: Technical Illusions

Read more: http://mashable.com/2013/10/14/augmented-reality-glasses/

Want to Run Code on the ISS? There’s a Competition For That

Any high school-aged coders with a love for space and NASA out there? Read on.

Zero Robotics, a robotics programming competition set up through MIT, is entering its fourth year — and there’s still a day left to register.

Here’s how it works: Students sign up in teams for free on the website. Over the course of the semester, they compete head-to-head against other teams in writing programs for situational, scenario-based challenges. Gradually, the challenges get more difficult. Then, after several phases, finalists are selected to run their code on the International Space Station (ISS), an event broadcast live by an astronaut on board.
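
The actual challenges are written against the competition’s own API for the SPHERES satellites, which we won’t reproduce here. Purely as an illustration, though, the flavor of the control logic teams write looks something like this sketch (all names and numbers invented): a proportional controller that nudges a satellite toward a target point, one tick at a time.

```python
def step_toward(position, target, gain=0.3):
    """One control tick: command a velocity proportional to the error."""
    return [gain * (t - p) for p, t in zip(position, target)]

pos, target = [0.0, 0.0, 0.0], [1.0, 0.5, -0.2]
for tick in range(20):
    vel = step_toward(pos, target)
    pos = [p + v for p, v in zip(pos, vel)]  # pretend a 1-second timestep
print([round(p, 3) for p in pos])  # essentially at the target after 20 ticks
```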

Since 2009, the competition has allowed participants to compete in a series of coding challenges through an online platform.

“There’s a whole ranking system that tells them how well they’re doing as they’re going through it,” said Jake Katz, co-founder of the competition and research assistant in the Space Systems Laboratory at MIT. “And throughout the course of the season, the game gets slightly more complex. They start out in two dimensions and then they will soon, around Oct. 5, be going into 3-D competition — then we add some additional challenges towards the end.”

The original kickoff for this year’s competition was on Sept. 8. But, Katz said, there’s still a day left to register.

“There have been people participating so far, and are already off and running with it, but it’s still possible to join in and make a submission for the first phase,” he said. “We have 75 teams so far, and that’s just from the U.S.”

There are an additional 43 teams from 19 other countries, he said.

The competition is sponsored by NASA, DARPA, TopCoder, Aurora Flight Sciences, CASIS and MIT. TopCoder, a programming company, designed the platform the games are played on.

“In 2009, when we started, we had just two teams competing against each other,” Katz said. “Just two years later, we had about 100 teams from all over sign up.”

Check out the promotional video below:

What kind of code would you write to run on board the ISS? Let us know in the comments.

Read more: http://mashable.com/2012/09/26/zero-robotics-mit/

New Tool for Hurricane Trackers: Drones

Federal hurricane trackers this year will be experimenting with powerful new tools: unmanned boats and aircraft, including a massive drone better known for spying on battlefields than monitoring nature’s violence.

Researchers at NASA and the National Oceanic and Atmospheric Administration are hoping a pair of military-surplus Global Hawk spy drones can provide new insight into the storms that routinely ravage the Atlantic and Gulf coasts.

The aircraft, also known as unmanned aerial vehicles, or UAVs, won’t be keeping an eye on Hurricane Isaac, which barreled down on the Gulf Coast on Tuesday. That storm is being monitored by more traditional means, including manned “Hurricane Hunter” aircraft, but officials expect to have the drones up and running in time for the height of hurricane season.

On Friday, the first of two Global Hawk aircraft is scheduled to arrive at NASA’s Wallops Flight Facility in Virginia, with a shakedown flight scheduled to happen as soon as Monday. The second aircraft is expected to arrive in coming weeks with officials hoping for a first flight in mid-September.

The aircraft, built by Northrop Grumman, were among the first batch to be tested by the military. When the military sought upgraded aircraft, the drones ended up in the hands of NASA researchers, who, along with their counterparts at NOAA, have now fitted them with specialized sensors for monitoring storms.

Weather researchers have used or experimented with various unmanned vehicles for years (not to mention the original unmanned vehicles: weather satellites). But officials are now taking the technology to new levels.

NASA’s Global Hawks, for example, were first used for a limited number of experimental flights in 2010, but technical issues have kept them from gathering hurricane data until now, said Scott Braun, a NASA investigator who helps oversee the Global Hawk program.

The three-year program is just starting, and for now NASA’s plan is focused on basic research, rather than real-time forecasting. Still, with a 116-foot wingspan and an ability to stay in the air for nearly 30 hours, the Global Hawk promises to be extremely useful for observing hurricanes.

But don’t look for drones to replace the famous manned “Hurricane Hunter” aircraft that fly directly into the middle of hurricanes anytime soon.

Researchers have small UAVs that can survive the forces inside a hurricane, but they are too small to carry a wide range of sensors, Braun said. Larger aircraft like the Global Hawk, meanwhile, can’t handle such extreme weather. While manned flights into hurricanes can seem dangerous, only four such aircraft have been lost since 1943, the last one in 1974.

“We are still a long ways away from replacing manned flights,” he said. Instead, the UAVs will supplement manned flights by flying at altitudes of up to 60,000 feet, thousands of feet above the thrashing winds and rain. One aircraft is designed to gather data about the environment around a storm, while the other UAV will study the storm itself.

It’s not the first time NASA has turned to spy aircraft for weather research. Since the 1970s, the space agency has used a version of the military’s U-2 aircraft to conduct a range of observations on everything from wildfires to migratory birds, as well as hurricanes. (During the 1960s, NASA unsuccessfully tried to help cover up Francis Gary Powers’s failed U-2 spy mission in the Soviet Union by claiming he got lost while conducting weather research.)

Like the military, NASA and NOAA are now looking to unmanned vehicles to either replace or bolster more traditional vehicles.

While Global Hawks may soon be a regular fixture above hurricanes, NOAA is experimenting with small, unmanned watercraft to penetrate storms at sea level.

The Wave Glider is a solar-powered floating platform that can take measurements from both the air and sea. Wave Gliders have been used for a range of weather and climate research, but now NOAA is experimenting with placing the craft in the path of oncoming hurricanes.

Unlike other craft, in theory, the Wave Glider can stay out indefinitely thanks to its solar panels, said NOAA’s Alan Leonardi. “The idea is to position a string of these in the path of a hurricane and gather data in a way we haven’t been able to before,” he said.

Also in the expanding NOAA arsenal of unmanned research vehicles is EMILY, a 65-inch watertight unmanned surface vehicle outfitted with a range of sensors and a high-definition camera. This year, scientists hope to remotely guide the craft into the center of hurricanes to gather data in some of the most dangerous areas of the storms.

“With unmanned craft, we’re not risking anybody’s life,” said Justyna Nicinska, program manager for NOAA’s EMILY, which was originally developed to help lifeguards with tricky sea rescues. Like other officials, Nicinska expects EMILY to complement, rather than replace, current systems. “EMILY will really fill gaps in our observation,” she said.

Image of Hurricane Isaac courtesy of NASA.

This article originally published at National Journal.

Read more: http://mashable.com/2012/08/28/hurricane-trackers-drones/

Tiny Satellite Will Grow Mold in Orbit

University students in Japan are building a slime mold-housing micro-satellite that will orbit the Earth and send back photos of the microorganisms’ growth. The small satellite will transmit the pictures to Earth using amateur radio.

The Microbial Observation Satellite, TeikyoSat-3, is a small-satellite project of the Space System Society at Teikyo University’s Utsunomiya campus.

TeikyoSat-3 weighs 44 pounds (20 kilograms) and is designed to study the impact of space radiation and the microgravity environment on a mold called Dictyostelium discoideum. This species of soil-living amoeba belongs to the phylum Mycetozoa and is often given the less-than-highbrow biological label of “slime mold.”

TeikyoSat-3 is slated for launch on Japan’s H-IIA booster in Japanese Fiscal Year 2013, and will ride along with the Global Precipitation Measurement (GPM) main satellite, officials from the Japan Aerospace Exploration Agency (JAXA) public affairs department told SPACE.com.

JAXA and NASA collaborated on the development of the GPM spacecraft as part of an international network of satellites that provide next-generation global observations of rain and snow.

Amateur Satellites and Biology

TeikyoSat-3 is one of several small satellites set to piggyback on a launch scheduled for January 2014, said Hirotoshi Kubota, a professor in the faculty of science and engineering at Teikyo University. “This satellite is now in the process of testing of [the] engineering model,” he told SPACE.com via email.

The TeikyoSat-3 group proposal stated, “Our micro satellite, TeikyoSat-3, takes a picture of the growth process of the slime mold, Dictyostelium discoideum, in space, and then downlinks the pictures to the ground station. We’ll release the pictures on our website to the public and radio amateurs. We expect the public and radio amateurs to promote their interest of the amateur satellites and biology.”

A ground station at the Teikyo University Utsunomiya campus will keep in contact with TeikyoSat-3. The plan is to actively make details about the tiny satellite available to the public in order to enable radio amateurs to receive images of slime mold directly from the spacecraft.

In building TeikyoSat-3, the university students are plotting out a low-cost “pharmacological mission,” one that makes use of microscope and miniature-camera technology. The students will also have to control the temperature on board the satellite to ensure an environment within which the slime mold can live.

Life of its Own

The value of studying microbial creatures in space has taken on a life of its own over the years.

During its 15 years of space travel, which ended when it deorbited in March 2001, Russia’s Mir space station was found to house colonies of organisms. They were found alive and well — growing on rubber gaskets around windows, space suit hardware and cable insulation and tubing.

Officials from NASA’s Human Research Program plan to gather and analyze biological samples to better investigate the International Space Station’s “microbiome” — the ever-changing microbial environment that can be found on the Earth-orbiting facility and its crew members.

Carrying out this work within the hectic environment of space is expected to give researchers data about whether alterations in the crew’s microbiome are harmful to human health.

Bio-Burden

China isn’t exempt from the bio-burden of protecting human space travelers, either.

Researchers have eyed the “Heavenly Palace” that is China’s Tiangong-1 space module as a microbial haven, too.

Despite an air purifier that cleans the module’s air and the astronauts’ practice of wiping away dust with wet tissues before leaving, there could be unknown risks, said Wang Xiang, chief commander of the space lab system. Microbes can pose a hazard to astronaut health, he told China Daily.

Wang said that mold was not only found on surfaces aboard Russia’s Mir Space Station; it has also been seen on the International Space Station. He spotlighted one “moldie oldie” report stating that fungus grew in cosmonauts’ ears during a mission on the former Soviet Union’s Salyut space station.

Mold also presents a threat to space module components, Wang said. “It is a subject we will keep studying until China builds its own space station,” he said.

Image: Freie Universität Berlin

This article originally published at Space.com.

Read more: http://mashable.com/2013/07/19/space-mold/

Intel’s Tiny Wi-Fi Chip Could Have a Big Impact

This month, Intel unveiled a Wi-Fi radio almost completely made of the same sort of transistors that go into one of its microprocessors.

At the Intel Developer Forum in San Francisco, Yorgis Palaskas, research lead in radio integration at Intel, and the company’s chief technology officer, Justin Rattner, also showed off a system-on-a-chip that sported this digital Wi-Fi radio nestled up next to a couple of its Atom processors for mobile devices.

The announcements make it clear that Intel believes Wi-Fi radios—traditionally relatively large devices that operate mostly outside the chip—will be integrated into the chips in coming years. This could mean three things: more electronic devices will be able to network wirelessly; these devices could be more energy-efficient; and ultimately, multiple digital radios could be combined on a single chip, something that could make gadgets, including mobile phones, cheaper.

“We are now looking at moving a lot of the parts on the periphery, like Wi-Fi, into the chip itself,” says Jan Rabaey, professor of electrical engineering and computer science at the University of California, Berkeley. “If wireless can move into digital and miniaturize at the same pace as digital, that’s a good thing.”

All radios, technically called transceivers, are made of a number of components. A transceiver is composed of a receiver that brings in a signal from the outside and a transmitter that sends out a signal to the world. Both receiver and transmitter contain components such as a baseband, which dictates the frequency the radio operates on; filters and mixers to fine-tune the signal; and amplifiers to make small signals larger.

Engineers have, for years, been slowly digitizing these components, so there are fewer analog components, which don’t operate well when miniaturized. Basebands, for instance, have long been digital.

There have already been demonstrations of almost completely digital Bluetooth radios. And Intel itself has digitized important radio components for 3G operation. But radios like Wi-Fi, which operate across a wide range of frequencies, have been harder to convert from analog to digital.
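
To see why a radio stage can become digital logic at all, consider downconversion, classically the job of an analog mixer: once the signal is sampled, mixing and filtering are just arithmetic on numbers. A toy sketch, with frequencies scaled far below Wi-Fi’s actual 2.4 GHz so the example stays small:

```python
import numpy as np

fs = 100_000.0                       # sample rate (Hz)
f_carrier, f_data = 20_000.0, 500.0  # toy carrier and message frequencies
t = np.arange(0, 0.01, 1 / fs)

# A message riding on a carrier, as the antenna might deliver it.
rf = np.cos(2 * np.pi * f_data * t) * np.cos(2 * np.pi * f_carrier * t)

# Digital mixer: multiply by a local oscillator at the carrier frequency...
mixed = rf * 2 * np.cos(2 * np.pi * f_carrier * t)

# ...then a crude moving-average low-pass filter removes the 2x-carrier term.
kernel = np.ones(25) / 25
baseband = np.convolve(mixed, kernel, mode="same")

# Away from the array edges, the output tracks the original 500 Hz message.
err = np.abs(baseband - np.cos(2 * np.pi * f_data * t))[30:-30]
print(err.max() < 0.1)  # True
```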

While there have been no other public announcements from other companies about digital Wi-Fi radios, it’s likely ARM and Qualcomm are also tackling the challenge, says Rabaey. “You can bet those guys are doing digital structures as well,” he says. “It’s a whole industry trend.”

By making radios using the same process used to make microprocessors, Intel is streamlining manufacturing and making it easier and cheaper to add a Wi-Fi radio to any chip.

“Being able to add this functionality digitally means you can add a radio to pretty much anything you want to,” says Peter Cooney, an analyst at ABI Research. This could allow anything with a chip to communicate, from SD cards and dishwashers to television sets and the family car.

And as chips shrink, Wi-Fi radios will experience the same benefit of miniaturized processors, including a reduction in power consumption (see “A New and Improved Moore’s Law“).

Intel’s Palaskas explains that a digital Wi-Fi radio that takes up 1.2 square millimeters of chip space will draw 50 milliwatts of power. The same radio design compressed into an area of 0.3 square millimeters (manufactured with so-called 32-nanometer processes) will sip only 21 milliwatts. This is comparable to the best radios made mostly out of analog components, says Palaskas.

But battery life for gadgets themselves is a tricky thing to predict, says Rabaey, and the energy efficiency gained from shrinking transistors might not translate directly to fewer charges for your phone. Much depends on standards that dictate the design of radios. For instance, radios that constantly send signals when they’re not being used directly will drain a battery, no matter how many digital components they contain.

Perhaps the most compelling application of the digital Wi-Fi radio, though, is that it points to a future where more radios can be programmed with software, changing their functionality on the fly. A simple software upgrade to a device with a digital radio could potentially improve its performance. “Digital is fundamentally more programmable than analog,” says Palaskas.

Rabaey suggests that in the future, multiple digital radios could be combined into one, which could reduce the cost of making cell phones. Instead of separate components for 3G, 4G, Wi-Fi, Bluetooth, and other radios, a single chip could contain all of them. The device would flip between radios via software. “Truly programmable radio could be five or 10 years away,” says Rabaey. “But everyone sees the economic value in it.”

Image courtesy of YouTube

This article originally published at MIT Technology Review.

Read more: http://mashable.com/2012/09/21/intel-wi-fi-chip/

Mashable Weekend Recap: 65 Stories You Might Have Missed

The weekend started off with a bang, thanks to the dazzling opening ceremony of the 2012 Olympic Games. That was spectacular enough to get everyone super-ready for the athletic competition involving our entire planet.

There were plenty of stories about the Olympics, and at the same time, your intrepid Mashable team discovered so much more — happenings in the digital world, tech innovations that felt like they were from a future world, and GIFs, comics and weekend fun that seemed to be from another world entirely.

Best of all, we’ve gathered all those stories here for you, in one big easy-to-peruse package. So take a look at the latest Weekend Recap, where you can catch up with the entire weekend of delightful news and views, right here:

Editor’s Picks

James Bond and the Queen Parachute Into the Olympics [VIDEO]

Please, NBC and IOC, Learn How to Share the Olympics

13 Surprising OS X Mountain Lion Facts [SUNDAY COMICS]

Top 10 Twitter Pics of the Week

Mountain Lion Vs. Windows 8: Which One Is Better?

Best Pics Yet: This Could Be the Real iPhone 5

How to Watch the 2012 Summer Olympics Online

Spoilers: Angry Olympics Fans Tweet Their Protests, NBC Responds

Top 10 Tech This Week [PICS]

News & Opinion

Marissa Mayer Brings Free Food to Yahoo, Eyes Acquisitions [REPORT]

MTV’s ‘Teen Wolf’ Facebook Game Is Feast for Fans in First 5 Weeks

Where to Get Back-to-School Deals on Tablets, Computers

How Dictation Tools Can Help Speed Up Your Workflow [INFOGRAPHIC]

Russian Cargo Spacecraft Docks With Space Station on 2nd Try

Olympic Check-Ins: Hot Foursquare Deals and Badges for London 2012

Record-Setting Electric Plane Flight Almost Didn’t Make It [VIDEO]

Mysterious Billionaire Commissions World’s Largest Yacht [VIDEO]

Twitter Jokester’s ‘Bomb Threat’ Charges Dropped [VIDEO]

Olympic Popularity: Starcount Reveals Which Olympic Athletes Are Trending

Amazon Sales Tax — What it Means for You

Down to the Millisecond: All About Olympics Timing

Trioh! The Flashlight You Can See When The Power Goes Out

On Reddit, Rapists Say They’re Sorry

Latest Apple Ads Take a Turn for the Worse

Why the London 2012 Olympics Is the First Real-Time Games

The 9 Most Important Tablet Mysteries of 2012

Device Turns Eye Movement Into Handwriting

Apple Considered Investing in Twitter [REPORT]

Hidden Genius Project Provides Tech Mentorship for Young Black Men

What Higher Education Will Look Like in 2020 [STUDY]

Why Do We Keep Going Back to Mars?

This Is What the Olympians From 100 Years Ago Looked Like

Shedding Light on Mitt Romney’s Unexplained Twitter Surge

New Leaked Pics May Hint at iPhone 5 Design

Chick-fil-A PR Chief Dies as Company Battles Controversy

Hacking the Olympics Opening Ceremony

Romney Advisor Tweets ‘Follow Friday’ List of Potential VPs

Facebook’s Not the Only One Struggling With Mobile Advertising

Weekend Leisure

This Cute, Cubed Bamboo Speaker Packs Crazy Sound [VIDEO]

9 Nifty Laptop Feet to Keep Your PC Running Cool

Kickstarter Project Is a ‘Smartwatch’ for Your Smartphone

‘Fund Me Maybe’ Is Tech World’s Parody of ‘Call Me Maybe’ [VIDEO]

10 Stylish Onesies for Baby Geeks

12 Pictures of Animals Being Forced to Marry

It’s Official: This Is the Cutest Picture on the Internet

Twitter Doghouse Lets You Temporarily Dump Annoying Tweeps

Top 10 GIFs of the Week

Boys Will Be Boys In This ‘Girls’ Parody [VIDEO]

10 Brits Snubbed from the Olympic Opening Ceremony

You Have Upset The Tetris God [VIDEO]

Sneak Peek: Justin Bieber Teases ‘As Long As You Love Me’ Video

If ‘A Space Odyssey’ Were Remade as a Hollywood Blockbuster

Forget Traditional Tours; Vayable Helps You Discover New Ways to Travel

Listen to Talk Radio on Your iPhone? You’re Probably a Liberal

You’ll Grin and Bear it With This Wild Live Video Stream

Mr. Bean Gets Carried Away During Olympics Appearance

Get a Bird’s-Eye View of 25 Olympic Stadiums

Top 6 Comments on Mashable This Week

Helpful Resources

Everything You Need to Know About Foursquare’s New Merchant Tools

How to Structure Your Daily Job Search to Help Land Your Next Job

50 Digital Media Resources You May Have Missed

6 Key Software Updates You Should Be Doing

The Beginner’s Guide to Socialcam

4 Reasons Why Recruiters Should Stop Accepting Traditional Resumes

The Anatomy of a Killer Content Marketing Strategy

Read more: http://mashable.com/2012/07/30/weekend-recap-64/