All posts by Daniel

Astronauts Capture Dragon Filled With Brand New Science

The SpaceX Dragon resupply ship approaches the International Space Station over the Atlantic Ocean.

As the International Space Station was traveling more than 262 miles over the south Pacific Ocean, Expedition 61 Commander Luca Parmitano of ESA (European Space Agency) grappled Dragon at 5:05 a.m. EST using the space station's robotic arm Canadarm2, with NASA astronaut Andrew Morgan acting as a backup.

Ground controllers will now send commands to begin the robotic installation of the spacecraft on the bottom of the station's Harmony module. NASA Television coverage of the installation is scheduled to begin at 7:30 a.m. Coverage may be adjusted as needed. Watch online at

Here’s some of the research arriving at station:

A Better Picture of Earth’s Surface
The Hyperspectral Imager Suite (HISUI) is a next-generation, hyperspectral Earth imaging system. Every material on Earth’s surface – rocks, soil, vegetation, snow/ice and human-made objects – has a unique reflectance spectrum. HISUI provides space-based observations for tasks such as resource exploration and applications in agriculture, forestry and other environmental areas.
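The identification principle described above (matching a measured reflectance spectrum against a library of known materials) can be sketched as a simple nearest-neighbor search. The spectra below are made-up four-band values for illustration only; a real hyperspectral instrument such as HISUI records hundreds of contiguous wavelength bands:

```python
import math

# Hypothetical four-band reflectance spectra; real hyperspectral sensors
# record hundreds of contiguous wavelength bands per pixel.
LIBRARY = {
    "vegetation": [0.05, 0.08, 0.45, 0.50],
    "snow":       [0.90, 0.88, 0.80, 0.70],
    "soil":       [0.15, 0.20, 0.25, 0.30],
}

def classify(spectrum):
    """Return the library material whose spectrum is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(LIBRARY, key=lambda name: dist(LIBRARY[name], spectrum))

print(classify([0.06, 0.09, 0.40, 0.48]))  # closest to the vegetation spectrum
```

Real systems refine this idea with spectral-angle metrics and atmospheric correction, but the core step is the same library comparison.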

Malting Barley in Microgravity
Malting ABI Voyager Barley Seeds in Microgravity tests an automated malting procedure and compares malt produced in space and on the ground for genetic and structural changes. Understanding how barley responds to microgravity could identify ways to adapt it for nutritional use on long-duration spaceflights.

Spread of Fire
The Confined Combustion investigation examines the behavior of flames as they spread in differently shaped confined spaces in microgravity. Studying flames in microgravity gives researchers a better look at the underlying physics and basic principles of combustion by removing gravity from the equation.

Keep up to date with the latest news from the crew living in space by following @space_station and @ISS_Research on Twitter, and the ISS Facebook and ISS Instagram accounts.

Dragon Attached to Station for Month-Long Stay

Dec. 8, 2019: International Space Station Configuration. Four spaceships are parked at the space station, including the SpaceX Dragon space freighter, the Northrop Grumman Cygnus resupply ship, and Russia's Soyuz MS-13 and MS-15 crew ships.

Three days after its launch from Florida, the SpaceX Dragon cargo spacecraft was installed on the Earth-facing side of the International Space Station’s Harmony module at 7:47 a.m. EST.

The 19th contracted commercial resupply mission from SpaceX delivers more than 5,700 pounds of research, crew supplies and hardware to the orbiting laboratory.

Here’s some of the science arriving at station:

Keeping Bones and Muscles Strong
Rodent Research-19 (RR-19) investigates myostatin (MSTN) and activin, molecular signaling pathways that influence muscle degradation, as possible targets for preventing muscle and bone loss during spaceflight and enhancing recovery following return to Earth. This study also could support the development of therapies for a wide range of conditions that cause muscle and bone loss on Earth.

Checking for Leaks
NASA is launching Robotic Tool Stowage (RiTS), a docking station that allows Robotic External Leak Locator (RELL) units to be stored on the outside of the space station, making it quicker and simpler to deploy the instruments. The leak locator is a robotic, remote-controlled tool that helps mission operators detect the location of an external leak and rapidly confirm a successful repair. These capabilities can be applied to any place that humans live in space, including NASA's lunar Gateway and eventually habitats on the Moon, Mars, and beyond.

After Dragon spends approximately one month attached to the space station, the spacecraft will return to Earth with cargo and research.

Next up, the station crew will prepare for the arrival of a second resupply spacecraft early Monday morning. The Russian Progress 74, which launched Friday at 4:34 a.m., is expected to dock to the Pirs compartment on the station's Russian segment at 5:38 a.m. Monday, Dec. 9. NASA TV and the agency's website will provide live coverage of the Progress rendezvous and docking beginning at 4:45 a.m.

Keep up to date with the latest news from the crew living in space by following @space_station and @ISS_Research on Twitter, and the ISS Facebook and ISS Instagram accounts.

Someone Built A Fully-Functional 5 Foot Gameboy Color In WebVR

Return to a simpler time in gaming, no download required. 

It feels like only yesterday I was sitting at the foot of my childhood bed, my head buried deep in the screen of a yellow Gameboy Color as I attempted my 4th playthrough of the original Donkey Kong Country. Originally released in October of 1998, the pocket-sized device now stands as one of Nintendo’s most successful platforms, housing a vast selection of incredible titles that include Super Mario DX, Pokémon Yellow, and Worms: Director’s Cut, just to name a few.

Image Credit: _talkol_1

Thanks to the work of one independent developer, you can now enjoy your favorite 8-bit gaming adventures via a fully-functional 5 foot Gameboy Color brought to life in VR. Earlier this week, Reddit user _talkol_1 posted a link to a custom-built Gameboy emulator available in 6DOF VR. Built specifically for WebVR, the app can be accessed directly through a compatible VR browser, such as Firefox Reality or the Oculus Browser, without the need for any downloads.

In order to use the emulator, you'll need to provide the ROM file (a computer file containing the data of a specific game cartridge) for whichever game you'd like to play. Once you have your file on hand, simply open the app in your VR browser and follow the on-screen instructions. If you don't have any Gameboy Color ROMs, there's also a trial version of the experience that offers access to a handful of brief demos for The Legend of Zelda: Link's Awakening DX, Spider-Man, Mario Golf, and Dragon Ball Z: Legendary Super Warriors.

Image Credit: _talkol_1

Is the experience practical? No. Staring up at the screen at a 70-degree angle while awkwardly slapping a series of massive buttons is far from a comfortable way of enjoying the Gameboy Color catalog. Is it necessary? No, probably not. However, it's still an excellent example of WebVR's ability to offer engaging 6DOF immersive experiences straight from a headset's web browser.

With the release of other WebVR-based projects like VR Flappy Bird and Moon Rider, it’s clear that VR technology has the potential to breathe new life into browser-based gaming.

Feature Image Credit: Reddit u/_talkol_1

The post Someone Built A Fully-Functional 5 Foot Gameboy Color In WebVR appeared first on VRScout.

Life After Death In The Digital Dimension

– Facing our own mortality, or the death of loved ones, is never easy.

– With our lives becoming increasingly digital, there are new aspects, such as what happens to our online identities after we die, that we need to navigate.

– While there are practical steps an individual can take, online companies will need to provide sensitive options to deal with customer loss.

A couple of months ago I sat down at the kitchen table with my dad, and, with a camera propped on books and a jerry-rigged microphone, filmed him speaking about his life. It took a few hours over a couple of days, though I frequently had to stop the camera to edit out his annoyance at being told to stop moving out of focus. At the end, with a pile of raw files safely in the cloud, I felt a sense of comfort. As time marches on, and my parents age, I’d know I had a piece of my dad that I can revisit.

People deal with death in different ways, and increasingly, those ways are digital. But dying in the digital era comes with a new set of ethical and practical questions that people, and businesses, must reconcile. When emotions are heightened and errors can have real ramifications on people’s wellbeing, that requires nuance and empathy. But is the digital world equipped to meet these challenges?


The practicality of dying

It’s estimated that 1.7 million Facebook users died last year.1 We are seeing online social companies beginning to address this 21st century reality. Facebook now allows people to nominate a legacy contact who has the authority to either close or memorialise their account when they die. Gmail allows an Inactive Account Manager to be granted access in the event your digital data stops pulsing after a certain amount of time. LinkedIn has a policy in the works that will allow an account to be memorialised, and it has done so at the request of its customers.2

It’s rare though that loved ones can access all a deceased person’s data. Leslie Berlin, a historian at Stanford, found this out the hard way when, after her mother died, she could not get into her iPhone.3 She had the password — or she thought she did — but it didn’t work. If she tried too many times her mother’s last thoughts, emails and photos would be automatically erased. Even more upsetting, to access the many websites her mother used, such as her bank, she needed to access two-factor authentication via her mother’s phone.

While she eventually gained some access to her mother’s accounts, Berlin wondered about the public and private selves we live online. Should our private digital moments belong to the platforms we frequent after we die, or to those we would appoint as digital executors?

It brings up a practical problem too: after we die, who should have access to our digital selves? And do they need permission? Just as reading through old letters and documents might reveal things we wished to keep private, should family members still be able to do so digitally? After all, for some, sorting through a loved one's photos, letters and belongings brings great comfort.

The law, understandably, is not yet clear in these cases.4 There are things that can be done on an individual level, such as ensuring select people have your passwords (or password manager access) and know your wishes for your data.5 Already, people are enshrining digital death instructions and inventories of online accounts into their last will and testaments, but this is far from ironclad and potentially messy legally. In the end, it should not fall solely on backdoors created by individuals, but on the places our digital lives ‘live’ also: the businesses we interact with.


Life after digital death

It gets more complicated when we realise that in some ways, digital technology has changed the nature of dying altogether. In an interview with MIT Technology Review, researcher Hossein Rahnama speaks of Augmented Eternity, an app which will turn a person's digital footprint into an interactive avatar.6 He is working with a CEO who wants to be made into a 'virtual consultant', available to future employees seeking his advice on business decisions. And while Rahnama admits most people won't have enough of a digital footprint amassed today to build fully working, contextual AI avatars with current technology, it won't be long before the constraints are overcome and a realistic version of a person can be rendered digitally immortal.

Indeed, a burgeoning industry in preserving one’s image is also gaining steam in Hollywood.7 With the right amount of money (that is, a few million), technology can now alter the likeness of an actor to look decades younger or, after their untimely demise, recreate them altogether in film. The possibility of not only a life, but also career after death is now a reality for those who can afford it. But even for the rest of us, eternal ‘life’ is not that far-fetched.

Writer James Vlahos details his own forays into memorialising his terminally ill father.8 What started as a project to record and transcribe a life story into a book took a more technological turn when he got the idea to use PullString, a conversational AI app. Vlahos created the Dadbot, an AI chatbot of his father capable of interacting via text. In the aftermath of his father's death, Vlahos is not sure how he feels about the Dadbot. He knows it isn't his father — but he also knows that his father took comfort in knowing he wouldn't be forgotten, and that his grandkids would remember him.9


The ethics of an afterlife

But do these versions of real people cheapen our relationships and memories? While AI programs will get better at context, semantics and emotional cues, recreating a human digitally will always require editing and reimagining. Vlahos wanted his Dadbot not only to say things his dad said, but do it in his personality too. What about representing the things his father didn’t say? That’s a much taller order.

Eugenia Kuyda, an AI entrepreneur, was faced with exactly such a scenario when her friend died. Kuyda decided to use his text messages and an artificial neural network to create a chatbot as a 'digital monument.'10 Kuyda recognises that her friend's messages to her alone would lead only to a partial recreation of his greater self. And when only a part of a person lives on, the line between comforting and jarring — as his friends and family felt — can be tricky.

In both Vlahos and Kuyda’s cases, the subjects of these newfound digital afterlives either agreed, or would (it is believed) have been happy to be memorialised in such a way. But what if they didn’t? Questions such as who owns our data, our personalities and who can make money off them have not yet been adequately answered. If someone does not want to be digitally memorialised, should their wishes be respected? Or are the feelings, closure and connection to the deceased of those left behind more important? Should we let our online selves die a digital death? Or aspire for more?


The end point

It seems unfair that dealing with someone’s physical passing also means negotiating their digital death. There are no easy answers. Should we leave our digital selves ‘alive’ forever? Should we close the accounts of loved ones — effectively erasing them from existence — or leave them up as memorials, places for condolences to be left and people to reminisce? What should and shouldn’t family and loved ones be able to do when it comes to data after death? Is it ethical to ‘recreate’ people without their permission, or against the wishes of other friends or family?

Even as individuals deal with such issues, businesses too must address them. As the policies germinating within the major social networks indicate, the death of a user is not a simple matter of removing an account, or even leaving it dormant. Digital lives interact with the emotions of physical ones, and for those left behind, there is no one answer on what is or isn’t acceptable online, just as there isn’t offline.

This means for businesses involved with people, thought must go into how products, platforms and services will interact with users — or the loved ones of users — after the customer passes on, or a product is turned off. Only by doing so with empathy and sensitivity will we find the humanity within the 1s and 0s.


Students Learn About Medical Use of Virtual and Augmented Reality

Students at St. Margaret’s Episcopal School learned about the medical uses of virtual and augmented reality when Dr. Robert Louis made a recent appearance during a psychology class at the school.

Louis, the Empower360 endowed chair in skull base and minimally invasive neurosurgery, also is the program advisor for the Skull Base and Pituitary Tumor Program at the Pickup Family Neurosciences Institute at Hoag Hospital.

Louis cited addiction, mental illness and surgical planning as areas in which virtual and augmented reality can be used and applied.

“It can be used to change a channel in your brain from an unhealthy perception or thought pattern to a healthier one,” Louis said.

Virtual reality can mimic environments for patients undergoing exposure therapy to deal with phobias or other matters in a safe way—for instance, mimicking tall heights for patients with a phobia of heights.

Virtual reality has also been used to decrease physically abusive behavior in men, said Louis, who cited a study from Spain. “They did a virtual switch. They took a video of them being abusive, then reversed the situation and in VR put the men in the situation of being the woman and had to physically deal with being abused and not being able to get out of the situation,” Louis said. “Something like 60 or 70 percent of those men became reformed and no longer abusive as a result of that.”




Students Learn About Medical Use of Virtual and Augmented Reality (Video)

NASA TV Broadcasts Dragon’s Arrival at Station on Sunday

The SpaceX Dragon cargo craft is pictured on May 18, 2014, attached to the Canadarm2 robotic arm.

SpaceX Dragon is on track to arrive at the International Space Station tomorrow morning, Dec. 8, with an expected capture of the cargo spacecraft around 5:30 a.m. EST. NASA Television coverage will begin at 4 a.m. Watch live at

Expedition 61 Commander Luca Parmitano of ESA (European Space Agency) will grapple Dragon with NASA astronaut Andrew Morgan acting as a backup. NASA’s Jessica Meir will assist the duo by monitoring telemetry during Dragon’s approach. Coverage of robotic installation to the Earth-facing port of the Harmony module will begin at 7:30 a.m.

Dragon lifted off on Thursday, Dec. 5, atop a SpaceX Falcon 9 rocket from Space Launch Complex 40 at Cape Canaveral Air Force Station in Florida. The cargo spacecraft carries more than 5,700 pounds of research, equipment, cargo and supplies that will support dozens of investigations aboard the orbiting laboratory. Dragon will join three other spacecraft currently at the space station.

Keep up to date with the latest news from the crew living in space by following @space_station and @ISS_Research on Twitter, and the ISS Facebook and ISS Instagram accounts.

NYT Uses AR To Compare The World’s Most Polluted Air With Your City’s

NYT app users can view the air pollution levels from their city’s worst day.

It's a sad fact that our Earth is currently suffering from rampant pollution. It's in our waters, our land, and in the air. Pollution is one of the world's leading risk factors for death, with air pollution alone being responsible for 9% of deaths globally; that's 5 million people.

In an effort to help readers more easily visualize the immense effects of air pollution, a recent NYT article entitled "See How the World's Most Polluted Air Compares With Your City's" uses an accompanying AR mobile experience to help you visualize the nearly invisible pollution floating throughout our air.

Image Credit: New York Times

To compare your city's air pollution with that of the most polluted areas of the world, open the story in the NYT app and look for the AR activation button below the second paragraph. Your phone will automatically find your location and show you the micro-sized pollution particles that were floating around on your city's worst day. The measurement is the concentration of an air pollutant (e.g. ozone) in micrograms (one-millionth of a gram) per cubic meter of air, written µg/m³. The lower that number, the better your air quality.

For example, Albany, NY is the closest city to me, with particulate concentrations reaching 33 µg/m³, a number considered "moderate." Compare that with the air in the Bay Area last year, when California was covered with a blanket of smoke from a large fire: particulate pollution there reached 200 µg/m³, which is considered "very unhealthy." You can also explore the particulate concentrations of cities such as Chicago, Shenzhen, Rio de Janeiro, and many others.

Image Credit: NYT New Delhi

Of course, nothing compares to the air quality crisis in northern India, where particulate levels in New Delhi have reached over 900 µg/m³. To give you an idea of what that means, the E.P.A.'s "hazardous" category tops out its scale at 500 µg/m³, putting New Delhi into "extreme" territory.
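The concentration figures quoted in the piece (33 µg/m³ as "moderate," 200 as "very unhealthy," the EPA scale topping out at 500) can be turned into a small lookup. The cut-offs below are simplified, illustrative thresholds, not the official EPA AQI breakpoints:

```python
# Simplified air-quality bands loosely based on the concentrations quoted
# in the article. Illustrative cut-offs only, not official AQI math.
BANDS = [
    (12,  "good"),
    (55,  "moderate"),
    (150, "unhealthy"),
    (250, "very unhealthy"),
    (500, "hazardous"),
]

def describe(ug_per_m3):
    """Map a particulate concentration (µg/m³) to a rough quality label."""
    for upper, label in BANDS:
        if ug_per_m3 <= upper:
            return label
    return "beyond index"  # e.g. New Delhi's 900+ µg/m³

print(describe(33), describe(200), describe(900))
# moderate very unhealthy beyond index
```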

Thanks to the NYT’s AR experience, we can actually see what those microscopic pollution particles look like.

Graham Roberts was the creative lead for the NYT's pollution AR experience before leaving the publication to join Google as Digital Design Lead at the company's Brand Studio, an innovation lab that looks at how the company can create meaningful experiences connecting Google products with customers.

Right after leaving the NYT, Roberts tweeted, “This was the last AR project I worked on at the Times, based on a belief that geolocated experiences that create a data layer over our physical environment has great potential. I think the team did a stellar job bringing it to life.”

Image Credit: New York Times

Many people think that our planet is in no immediate danger; that no matter how much we pollute it, nature will come along and fix things. After all, if we can't see it, it doesn't even exist, right? By making the pollution particles of our environments visible to the human eye using AR, the NYT app may help push individuals to take air pollution – or any pollution for that matter – more seriously.

This isn't the first time the NYT has used AR to help tell a story. The paper used an AR experience to help readers visualize the openings rescuers had to swim through during the Thailand cave rescue, and even used VR to tell the stories of people displaced from their homes.

The NYT app is available for both iOS and Android devices.

The post NYT Uses AR To Compare The World’s Most Polluted Air With Your City’s appeared first on VRScout.

The big problem with virtual reality? It’s almost as boring as real life

No one needs a virtual Toyota. We need to give users good reasons to leave their reality behind and immerse themselves in a new one.

Just a few years ago, virtual reality was being showered with very real money. The industry raised an estimated $900 million in venture capital in 2016, but by 2018 that figure had plummeted to $280 million. Oculus—the Facebook-owned company behind one of the most popular VR headsets on the market—planned to deliver 1 billion headsets to consumers, but as of last year had sold barely 300,000.

Investments in VR entertainment venues all over the world, VR cinematic experiences, and specialized VR studios such as Google Spotlight and CCP Games have either significantly downsized, closed down, or morphed into new ventures. What is happening?

Recent articles in Fortune and The Verge have voiced disdain for VR technology. Common complaints include expensive, clunky, or uncomfortable hardware and unimaginative or repetitive content. Skeptics have compared VR experiences to the 3D television fad of the early 2010s. As a VR researcher and developer, I understand the skepticism. Yet I believe in this technology, and I know there are “killer apps” and solutions waiting to be discovered.


Last week, Western Sydney University hosted a global symposium on VR software and technology, at which academics and industry partners from around the world discussed possible ways forward for VR and augmented reality. Among the speakers were Aleissia Laidacker, director of developer experience at Magic Leap; University of South Australia computing professor Mark Billinghurst; and Tomasz Bednarz, director of the Expanded Perception and Interaction Centre at UNSW Sydney (the University of New South Wales).

One problem discussed at the symposium is the fact that VR experiences often cause health-related issues including headaches, eye strain, dizziness, and nausea. Developers can partially deal with these issues at the hardware level by delivering balanced experiences with high refresh and frame rates.
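The "high refresh and frame rates" requirement is easy to quantify: the renderer must finish each frame inside the interval between display refreshes, so at 90 Hz it has roughly 11 ms. A quick sketch of that budget arithmetic:

```python
# Frame-time budget: to avoid dropped frames (a common discomfort trigger),
# each frame must be rendered within one refresh interval.
def frame_budget_ms(refresh_hz):
    """Milliseconds available to render a single frame at a given refresh rate."""
    return 1000.0 / refresh_hz

for hz in (72, 90, 120):  # refresh rates typical of consumer headsets
    print(f"{hz} Hz -> {frame_budget_ms(hz):.1f} ms per frame")
```

Missing that budget even occasionally forces the display to repeat or warp frames, which is exactly the kind of visual inconsistency linked to nausea.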

But many developers are ignoring usability guidelines in the pursuit of exciting content. Gaming industry guidelines issued by Epic, Oculus, Marvel, and Intel recommend that games completely avoid any use of induced motion, acceleration, or “fake motion,” which are often the main cause of discomfort and motion sickness.

Yet the vast majority of available VR experiences feature some kind of induced motion, either in the form of animation or by basing the experience on user movement and exploration of the virtual environment.

I have met many first-time VR users who generally enjoyed the experience but also reported “feeling wrong”—similar to enjoying the clarity of sound in noise-canceling headphones but also having a “strange sensation” in their ears.


Queasiness is not the only turnoff. Another problem is that despite the near-limitless potential of VR, many current offerings are sorely lacking in imagination.

The prevailing trend is to create VR versions of existing content such as games, videos, or advertisements, in the hope of delivering extra impact. This does not work, in much the same way that a radio play would make terrible television.

A famous cautionary tale comes from Second Life, the virtual world launched in 2003, which failed spectacularly to live up to its billing. Real-world businesses such as Toyota and BMW opened branches in Second Life, allowing users to test-drive badly programmed virtual versions of their cars. They lasted mere months.

Why would we prefer a humdrum virtual experience to a real one? No one needs a virtual Toyota. We need to give users good reasons to leave their reality behind and immerse themselves in a new one.

There have been some notable successes. Beat Saber, made by Czech indie developers, is one of the few games that have explored the true potential of VR—and is the only VR game to have grossed more than US$20 million.

The VR Vaccine Project helps to take the sting out of childhood needles, by combining a real-world vaccination with a superhero story in the virtual world, in which the child is presented with a magical shield at the crucial moment.

I really hope VR is on its way to becoming more mainstream, more exciting, and less underwhelming. But we scientists can only present new technological solutions, to help make VR a more comfortable and enjoyable experience. Ultimately it is down to VR developers to learn from existing success stories and start delivering those “killer apps.” The possibilities are limited only by imagination.



Photo: MBI/iStock

The power of VR and immersive learning: effective and fascinating

A few years ago I was introduced to Virtual Reality and 360° video. When I saw the VR glasses, it seemed like a lot of hassle to put that bulky device on your head and control it, but that turned out to be easy. More importantly, I was fascinated by the content I was shown! Within no time I felt present in the film and part of the situation in the scene.

As a director of corporate films and television programs, I am used to shaping a story within the familiar 16:9 frame. Within this frame I can determine what a viewer sees and how: do I opt for a total shot, a close-up, or perhaps both? In the edit I then decide when to show that particular close-up. The frame and the editing, combined with a good script of course, are the most important tools for telling a film story. But what if the boundaries of the frame are missing entirely and editing is kept to a minimum?
Angry customer…

During one of my first VR experiences, I ended up in a British Telecom store. There a dissatisfied customer approached me with a complaint about a product, and I was asked what my reaction would be. I am not, and never have been, a store employee in daily life, so perhaps not illogically, I chose the wrong answer from the options offered to me. As a result, the customer was not happy, to say the least, and reacted irritably. At me personally! At least, that is how it felt. What now? The angry customer continued to look at me questioningly. I avoided her gaze by looking around, and as a bonus got a meaningful look from another customer who was curious to see how I would solve the situation. Fortunately, I was able to recover with the next, better-chosen answer.

As mentioned, I was immediately fascinated, because I felt the effect. You cannot just look away from the screen or take a sip of coffee; you are completely focused. And your brain switches quickly: almost immediately you feel physically present in the virtual environment you are placed in, and you respond naturally to what is happening there.

In VR, or rather in 360° video, it is not the frame or the montage that is the tool; the power of this new medium lies in the fact that you completely immerse someone in a situation that feels very realistic. It is interesting to see how you can use this specific power. Where can VR actually add value? Where does it go beyond the wow effect most people experience when they first put on such glasses? The technology itself is nice to get acquainted with, but personally I do not much care about it; very simply, I just want it to work. Admittedly, I am not really a gadget man. I am particularly curious about the actual impact on the viewer and how you can use it to bring about a change in that viewer. The question, by the way, is whether 'viewer' is even the right word, since you are not simply looking at something but experiencing it (it would be going too far to say that VR creates experience experts, but still…).

Training in VR

If there is one area in which VR is developing at lightning speed, it is training. In the US in particular, developments are moving fast. The most appealing use case at the moment is that of the Californian company STRIVR. Its customer Walmart, with 1.5 million employees the largest employer in the US, has distributed no fewer than 17,000 Oculus Go headsets across its 4,700 locations throughout the country and now offers all its employees a total of more than 50 training modules. These include shop-floor training in customer service and safety, onboarding programs, but also leadership programs and simulations of bad-news conversations (see also: How VR is Transforming the Way We Train Associates). At another major STRIVR customer, Verizon, the entire store staff underwent armed-robbery training last year. In this training, people are placed in a number of realistic, high-impact robbery scenarios and experience what it is like to face verbal violence and have a potentially deadly weapon aimed at them. Having practiced this several times in such an immersive way, participants appear able to respond to robbery situations with less stress and to follow the correct procedures at these dangerous moments.

Developments in the Netherlands are also moving fast. The Delft-based company WARP has developed an extensive interactive VR tool in which customers can develop their own training scenarios and post content. Customers such as BT, CSU and KLM make use of it.

VR assessment

MTVR produced the first fully-fledged, validated VR assessment in the Netherlands, for and with eelloo / VanderMaesen|Koch / SlimAssessments, in which you, as a participant in a team meeting, are placed in a realistic work situation. There you experience the interaction of a group of people with different roles, interests and characters. You observe the group process by actively looking around, and you assess the behavior by answering questions. These answers are scored and processed into a report with standardized results. VR assessment addresses individual behavior and group processes and can be used as a stand-alone instrument or as part of a larger assessment program.


Situations that are difficult to imitate, or unsafe, can be practiced safely using VR. Consider, for example, employees of security services placed in simulations that escalate into violence, fire and the like, or store personnel confronted with aggressive customers or threatened with a weapon. VR also offers another kind of safety, namely a degree of social safety: people can train individually, without having to do so in a role play under the eyes of colleagues. The focus is directly on the content of the training, and one is not distracted or negatively influenced by feelings of embarrassment.


In VR, people can be placed in a realistic work situation, even in circumstances that are logistically difficult to simulate or, as mentioned, dangerous. These circumstances only have to be set up or filmed once and can then be practiced without limit and without risk. Consider training people within large construction companies: during training sessions they can visit various large construction sites anywhere in the world, where they can, for example, look for safety-procedure errors within the interactive VR experience.

In pre- or onboarding programs, new employees become acquainted with colleagues and can look around the company while going through induction or compliance-related procedures. Employees can get to know colleagues at other locations, wherever they are in the world, look around, and encounter cultural differences. Companies can reduce their ecological footprint because it becomes less necessary to put people on a plane. Themes such as inclusiveness, integrity and sexual conduct can be brought to people's attention in a very direct way in VR.

Fully immersed

Immersive learning means training without distraction and with 100% focus, with the result that the material offered is remembered up to 40% better. You make choices in realistic situations and, with the help of interactive scenarios, directly experience the effect of those choices. You learn by doing!

Scalable and cost efficient

VR makes it possible to train anywhere, whenever it suits you. Headsets at the work location can be used independently during quieter moments of the working day. VR training courses can be repeated as often as needed, and difficult circumstances only have to be simulated once. Situations that cannot be practiced in real life due to safety risks or financial constraints become widely accessible. That applies to VR training in general: because of the cost efficiency, many more employees can be trained, from trainees to management.

Companies spend a lot of money on testing, onboarding and training their employees, certainly where staff turnover is high. The biggest gain from deploying VR is time. People in VR need less time to learn the same material, and staff do not necessarily have to travel to an external training facility. This saves considerably on lost working hours and on travel costs. Moreover, VR training can be done flexibly, at quiet times of the day. Trainers need to be hired less often, and they no longer have to travel across the country. Role-play actors are hired only once, which makes it more feasible to hire multiple actors for simulations in a group context.

Data insights

VR offers endless possibilities for collecting valuable information: who has completed the training, how long did it take, and what are the results? How do women perform compared to men, or MT members compared to other staff? Where do we see people improving? Eye tracking is also possible, measuring what a participant is looking at and at what moment. Based on the data, follow-up discussions can be held about choices made or other findings, which gives the training more value for both employee and company.

What answers came back to feedback questions? What did people think of the training?

Don’t treat VR as an experiment

For the time being, VR is often used only as an experiment. An organization can show the outside world, or internally its employees and management, that it is actively working on innovation. The emphasis is then on innovation rather than on the added value of deploying VR, and organizations do not really invest in the medium. That’s a shame, because to use VR effectively you have to take it seriously and, above all, commit to it sustainably (as in the aforementioned Walmart example).

Go for quality

A VR experience stands or falls on quality, starting with the content. If you place people in a highly realistic environment, what happens there must also look lifelike and credible. VR content made by inexperienced directors with amateur-level actors results in secondhand embarrassment for the user (after all, you feel part of the situation …), rather than compelling immersion in the scene. VR is also not compelling if the audio is poorly recorded and sounds hollow and distant rather than realistic. It likewise doesn’t help if a film set is poorly lit, so that you cannot properly perceive facial expressions because people have a window in the background or are positioned in shadow. In short: good actors, good direction, good lighting and good audio are preconditions for credible, high-end content that can achieve its goals.

To guarantee quality, it is also necessary to look carefully at how VR is implemented in an organization. In larger organizations in particular, it is advisable to involve potential stakeholders in the process from the outset. Think of departments such as HR, IT, Legal, Marketing & Communication and Facility Services.

VR is not the one and only solution

Of course, VR is not always the best option for every form of training, onboarding or assessment, but in many cases it will at least be a very valuable addition to existing programs and procedures, both in terms of content and in terms of cost savings. In many cases, immersive learning will indeed be a better and more effective method than traditional one-on-one training or e-learning courses. So it’s worth looking at where time and money are being wasted within the organization. Which training courses would you like to repeat or practice more often, but cannot in practice? And which training courses would you like many more people in your company to take?

10 benefits at a glance

  • Many more employees can follow the same training, so it is no longer reserved for managers or a limited selection of employees. By offering modules, the content can be differentiated per trainee.
  • Repeatable: the possibility to go through a training several times and the chance to improve yourself.
  • No distraction, so 100% focus.
  • Actors only have to be hired once, making it more feasible to simulate situations in a group context with multiple actors.
  • Choices, answers and other interactions are measured and can be supplied in the form of reports, showing who has gone through a training course and when, and, if it has been done several times, whether the results have improved.
  • Not looking at a screen, but actively undergoing, experiencing and doing.
  • Simulations that are difficult to stage in real-life training because of location (a construction site, for example), the number of actors required, or travel time can be set up once and experienced repeatedly.
  • Safe, both practically and socially.
  • People remember up to 40% better in less time.
  • Time savings: the training itself takes less time, and employees do not necessarily have to go to an external location. They can train when it suits them, which means fewer lost working hours.



Report: Magic Leap One Sales Significantly Lower Than Target, True Successor Still ‘Years Away’

Magic Leap One sold just 6,000 units in the first six months, and technology issues mean a true successor product is still “years away”, according to a report from The Information.

ML1 is a true augmented reality headset — perhaps the first broadly available product of its kind — aimed toward consumers rather than just enterprise. Content for the headset includes avatar chat, a web browser you can place anywhere in your room, a Wayfair app for seeing how furniture will look in your home, various entertainment experiences, and two games from Insomniac Games. The company even recently announced a Spotify app for the system.

The system began shipping in August 2018, priced at an eye-watering $2,295. It comes with a 6DoF controller, but also supports hand tracking.

High Expectations, Low Results

Magic Leap Founder and CEO Rony Abovitz told investors and employees he expected the headset to sell “at least” one million units in the first year, according to the report.

Abovitz is known for grandiose language when describing “spatial computing” — and in our first interview with him last year, he suggested the company could one day go public. Executives reportedly eventually convinced Abovitz to “settle” for a target of 100,000 units in the first year.

However, according to the report, the headset actually sold just 6,000 units in the first six months — and Magic Leap has not publicly refuted the report. When the product launched, screens at the company’s headquarters showed the expected sales, not the actual live sales figures.

The company’s headset inventory was reportedly so large that it started giving employees free headsets.

For comparison, Microsoft’s HoloLens AR headset reportedly sold around 50,000 units after two years. But Microsoft seems to have more realistic expectations about the current state of AR, and has been targeting enterprise rather than the consumer market.

In 2018, Magic Leap was apparently losing tens of millions of dollars per month. Later that year, the company lost a lucrative US Army contract bid to Microsoft. This year, it laid off “dozens” of employees and slowed hiring.

But Why?

So why did Magic Leap sell just a small fraction of its 100,000-unit target?

The $2,300 price is a significant barrier to consumer interest. In the VR world, the standalone Oculus Quest from Facebook is $400 and representatives claim they are making them as fast as they can sell them.

The difficult-to-manufacture custom displays required for AR likely contribute to Magic Leap’s high cost, but we believe the company also made some decisions that kept the price higher. It used the most powerful, and most expensive, mobile chip it could find: the NVIDIA Tegra X2. Oculus Quest, for comparison, uses a more affordable chip from Qualcomm to deliver a compelling mobile VR experience.

The 6DoF controller bundled with Magic Leap One also uses electromagnetic tracking for positioning. Electromagnetic tracking has the advantage of not being subject to occlusion, but it is significantly more expensive than the LED-based tracking solutions we’ve seen from Facebook and Microsoft.

But even if Magic Leap One had been priced more competitively, the relatively narrow field of view and relative lack of content may still have limited its appeal. The ultimate promise of AR glasses is an all-day outdoor wearable that can create arbitrary screens, provide on-foot navigation, and translate other languages. Magic Leap simply doesn’t provide that yet.

Magic Leap 2: Years Away

The report also suggests the company is currently in the prototype stage for a successor to the Magic Leap One, codenamed ML2.

It is said to have a wider field of view, greater “depth perception” (likely more focal planes or a varifocal system), and higher quality graphics. It’s also reportedly smaller and lighter, and will come in multiple colors.

ML2 may also incorporate a cellular 5G connection. This would seem to indicate that it can be used outside — the current device is only recommended for indoor use. 

The report claims that “a person involved with the project” told employees that the device is still years away due to “fundamental technology constraints”. This could put the company in direct competition with Apple and Facebook, which are each reportedly planning to launch their own AR glasses in a few years.

All that said, the company is apparently close to a minor refresh of the Magic Leap One which could provide some improvements.

The post Report: Magic Leap One Sales Significantly Lower Than Target, True Successor Still ‘Years Away’ appeared first on UploadVR.