At 5:28 a.m. EDT, Expedition 59 Flight Engineer Anne McClain of NASA used the International Space Station’s robotic Canadarm2 to grapple the Northrop Grumman Cygnus spacecraft as David Saint-Jacques of the Canadian Space Agency monitored Cygnus systems during its approach. Next, ground controllers will command the station’s arm to rotate and install Cygnus, dubbed the S.S. Roger Chaffee, on the bottom of the station’s Unity module.
The station was flying over northeast France at an altitude of 254 miles when Cygnus was captured.
NASA Television coverage of installation will begin at 7 a.m., and installation of the Cygnus spacecraft to the space station is expected to be completed later this morning. Cygnus will remain at the orbiting laboratory for a three-month stay.
GYMNASIA is a dark world of theater, VR, animation, and puppets.
The 2019 Tribeca Film Festival is just around the corner and with it comes Tribeca Immersive Virtual Arcade, an entire exhibit dedicated to highlighting creators who explore how storytelling and technology can come together to deliver wondrous VR and AR experiences.
One such experience is GYMNASIA, a groundbreaking cinematic VR experience developed as part of a collaboration between the EMMY® Award-winning immersive entertainment creators, Felix & Paul Studios, the National Film Board of Canada, and Oscar®-nominated Clyde Henry Productions.
GYMNASIA can best be described as an unsettling and wonderfully-bizarre dream-like experience that blends the worlds of theater, life-size puppets, animation, and cinematic VR into uncharted territory, stretching the art of immersive entertainment further than ever before.
The experience begins the moment you enter the installation space. The room carries an eerie droning sound of children humming in a gymnasium that disrupts your perception of the room’s size – it somehow hacks your brain into thinking the space is larger than it actually is.
The walls explode with life through animated projections of puppet children while a real, full-scale puppet teacher places musical sheets on an overhead projector. Scientists in lab coats then approach and escort you to one of those child-sized chairs found in every grade school cafeteria.
The room itself is an immersive theater experience that will muddle your own perception of reality.
The virtual part of this theater/VR mashup experience begins the moment you put on your VR headset. An out of tune piano sets the tone as you find yourself transported into the stillness of an abandoned school and enter a place where visions of a lost childhood await you. The immersive experience recalls the sights and sounds of a child’s world through echoes of games, school lessons, and choir recitals.
Altogether, GYMNASIA is a six-minute experience that brilliantly fuses 3D 360-degree video, stop motion, miniatures, and computer-generated graphics (CGI), and is the first stop motion VR experience to induce the elusive anxiety that occurs when the lines between “real” and “unreal” are blurred beyond belief.
This full-blown immersive exhibit was produced and installed by Phi Centre, a Montreal-based multidisciplinary arts and culture organization, and is supported by the Adam Mickiewicz Institute as part of the Niepodległa programme run by the Ministry of Culture and National Heritage of the Republic of Poland.
“Partnering to create new and innovative experiences is part of NFB’s DNA. Working with Clyde Henry and Felix & Paul Studios on this technical feat and genre pushing experience has been hard work but also a pure joy,” said Dana Dansereau, Producer for the NFB’s Digital Studio in an official press release. “Stop motion animation in VR is in its early days and it took the extreme talents of these three groups to pull off such a successful project. We are proud and excited to launch GYMNASIA at Tribeca Immersive 2019.”
GYMNASIA was produced by Stéphane Rituit (Felix & Paul Studios) and Dana Dansereau (NFB) and directed by the award-winning duo Chris Lavis and Maciek Szczerbowski (Clyde Henry Productions). Montreal-based musician Patrick Watson composed music for the experience with immersive sound design and capture provided by Headspace Studio.
Paul Raphaël, co-founder and creative director of Felix & Paul Studios, explained his excitement about GYMNASIA, saying, “It was both fascinating and rewarding to see how much the medium has evolved in the short five years since our studio produced that inaugural VR piece.”
The 2019 Tribeca Film Festival runs from April 24 through May 4 in New York City. Tickets to GYMNASIA, playing in the Virtual Arcade, are on sale now through this link.
GYMNASIA will also be available for download on the Oculus Store beginning April 26.
Programs that focus on content creation and extended classroom accessibility will help K–12 teachers get the most out of their AR and VR investments.
In K–12, educators have found ways to use augmented and virtual reality to enhance and support deeper learning in the classroom. However, evaluating the best immersive technology resources requires an understanding of current technology limitations and offerings.
The future looks promising as educational technology companies rapidly build new immersive tools for the classroom.
Conversely, some educators may find that the massive influx of resources makes the selection process confusing. For those teachers, understanding the beneficial characteristics can be a good place to start.
Mixed Reality Classroom Tools Should Capture the ‘Wow’
Teachers who incorporate mixed reality into the classroom will often hear students ask, “How did it do that?” or “Can it do this?” Capturing that “wow factor” is probably one of the most common reasons educators include AR and VR in their lessons.
A physical interactive AR tool — like the MERGE Cube — is a good choice for grabbing your students’ attention. The cube comes to life when using specific MERGE Cube apps, which can be downloaded and used on students’ tablets or phones. Through their devices, students can transform the cube into the Earth, to study weather patterns; the human body, to explore anatomy; a customizable aquarium, to study marine life; and even the solar system.
Ensure K–12 Immersive Lessons Are Device-Agnostic
A common problem with new immersive technology is the platform’s limitations. Most classrooms do not have many high-end AR or VR devices, if any.
Considering the typical resources a classroom has, investing in applications that work across common classroom devices is critical. Cross-platform tools will help educators and students effectively implement immersive lesson plans.
When demonstrating VR, I always showcase Nearpod as a simple tool for any educator to include in their instruction.
Many virtual reality lessons include premade, 360-degree images for the students to experience, but Nearpod allows educators to customize their lessons with an enormous library of images that can take their students anywhere imaginable.
After teachers design experiences to fit the curriculum, students can participate as a group using their Chromebooks, instead of having to wait one at a time to use the classroom VR headset.
Similarly, there is nothing more frustrating than spending a lot of time creating an exciting, immersive experience and not being able to share it with anyone.
In the Waypoint EDU app, students can explore custom augmented reality scavenger hunts. After teachers create their hunt, they can easily share it by selecting AirDrop to send to nearby devices, or share it by email or text message. This extends immersive content beyond one headset, allowing all students to participate.
I have seen engagement skyrocket when students are given an opportunity to explore AR applications like 3DBear, where they can be the masters of their own digital universe.
At one school, students were placed in small groups and given a device with 3DBear to create content. They enthusiastically placed dinosaurs, robots, animals and other 3D objects around the library. A favorite feature among students was the app’s ability to capture their creations in screen shots and videos.
Provide An Opportunity to Develop Students’ Creativity
The best immersive technology tools should be limited only by students’ imagination. When using augmented and virtual reality, students should be able to decide what is possible.
Immersive technology is advancing, moving away from independent work toward collaborative exercises.
For example, when connected to the same network, students can create experiences in the same virtual space. Among the benefits of group work in augmented and virtual reality: practicing communication, problem solving and building better products.
One example, the Moatboat app, allows students to build spaces together using a content library, directing character animations with typed text or voice commands.
All of these features can serve as a guide to understanding what immersive capabilities are out there, but the most important part of the selection process is defining goals and expectations for how mixed reality integration will benefit classroom instruction.
The interactive exhibition “Leonardo da Vinci: Artist – Inventor – Genius” arrived in Budapest on the 500th anniversary of his death.
Visitors can find reproductions of Leonardo da Vinci’s paintings as well as interactive mock-ups of nearly 60 of Leonardo’s technical inventions, turizmus.com reported about the Leonardo da Vinci: Artist – Inventor – Genius exhibition, which runs in Budapest until the beginning of September.
Leonardo (1452 – 1519) was deeply engaged not only with art and music, but also with sciences such as physics, anatomy and architecture, and he also designed military weapons.
At the exhibition, dozens of wooden inventions can be found. These mock-ups were put together by a group of scientists from Italy based on Leonardo’s original drawings, using materials that were available in da Vinci’s era. Visitors can touch most of them and try them out with computer simulations.
We can also ‘step into’ some paintings with VR glasses, where we receive video and audio guidance. These virtual reality experiences show us which of our everyday objects are based on da Vinci’s inventions from hundreds of years ago.
The exhibition has so far been a great success in 23 countries. Hungarian organizers are expecting hundreds of thousands of visitors in six months.
Educators have long been some of the most intrepid users of AR technology, seeking out innovative ways to engage young people through the technologies and devices they most enjoy using. Kim Maslin is no exception – with an impressive track record of integrating digital technology into her lesson plans and book releases, helping young people engage with a wide array of important topics.
Previously, we’ve spoken to her about her Zappar-powered book ‘The Tweeting Galah’, exploring how AR is opening up a world of new possibilities for publishers. This time, we chatted about her amazing follow-up, ‘The Surfing Penguin’, tackling the exceptionally relevant issue of cyber safety and the challenge of keeping young people safe in our hyper-connected digital world.
With pioneering research such as ‘The Layered Report’ showcasing the positive impact AR has on both visual attention and heightened memory retention, Kim’s work is a fantastic example of utilizing AR to both enhance the learning experience and open the doors to important conversations with young people and their guardians through fun interactive experiences.
Using AR to transform learning
James Wright: Hi Kim, great to speak to you again! Could you start us off with an overview of ‘The Surfing Penguin’ and your background as an educator?
Kim Maslin: So ‘The Surfing Penguin’ is a selection of four short stories, each featuring an Australian animal experiencing life as a primary school aged young person. It follows their experiences as they learn how to navigate the online world in a safe and healthy way.
Each story focuses on a key issue of online safety, such as how to deal with inappropriate content when it inevitably crops up, cyberbullying, trolling in online gaming, and the risks of sharing personal information online, such as your location. So it complements the series of issues discussed in ‘The Tweeting Galah’.
I trained as a high school teacher and initially worked in high schools teaching technology and media. But I’ve also worked in primary schools with digital technologies and worked on workshops with adults too. All in all, I’ve taught 5-year-olds to 85-year-olds!
“I feel like the age group I’m targeting, particularly those in the seven to eleven-year-old group – they’re just so saturated with games and other forms of interactive media. So if you can find a way to bring that into the classroom in a positive way – into learning, into reading – it just really enhances their engagement and from that, the learning outcomes too.”
– Kim Maslin, Digital Technologies Educator
JW: How did you get started working AR into your lesson plans and books?
KM: So, my latest book, ‘The Surfing Penguin’, is the second one I’ve produced that incorporates AR. The AR from my previous project, ‘The Tweeting Galah’ was really well received by parents and students, so it was a no-brainer to keep including it. It’s illustrated by John Field and it’s a really cool way to bring his depictions of the characters to life.
I feel like the age group I’m targeting, particularly those in the seven to eleven-year-old group – they’re just so saturated with games and other forms of interactive media. So if you can find a way to bring that into the classroom in a positive way – into learning, into reading – it just really enhances their engagement and from that, the learning outcomes too.
JW: Is that related to you utilizing smartphone devices and leaning into the platforms young people enjoy using, rather than pushing them away?
KM: That’s right – it’s about finding ways for young people to use their devices in constructive, meaningful ways. I’m passionate about educating young people about how to do that.
JW: So what motivated you to use AR to deal with online safety?
KM: I was working in a school at the time and doing my own cyber safety workshops for parents. I felt at the time that parents were still feeling very overwhelmed and a lot of the content was really focused on children aged 13 and up – there was very little content on this topic for the kind of seven to eleven age range.
So I really wanted to find a more innovative and fun way to educate these groups at the same time. I felt an AR-enabled book was a good medium: linking the topic to an activity young people and their parents were hopefully already engaging with at home or at school would be a new way of approaching the issue.
Using AR to illuminate print
JW: We’ve spoken before about your previous book – ‘The Tweeting Galah’ – how has the AR you’ve incorporated this time built upon that experience?
KM: It’s definitely evolved. With ‘The Tweeting Galah’, I made all the experiences myself and a shout-out to you guys – the ZapWorks Designer interface was super easy to use! For this latest book, I really wanted to evolve the AR element – to make it more sophisticated and playful. I was fortunate enough to be able to hire an AR developer to work on this project with me. I had ideas about how I wanted the AR to be used and was able to make those a reality.
JW: How did you end up building a relationship with an AR developer?
KM: I actually reached out to Caspar (Zappar CEO) for advice! He let me know that Marc Najera, a former AR designer at Zappar, had recently moved to Australia, so we linked up, shared ideas and he put together the experiences for me using ZapWorks Studio.
JW: Awesome! So what do you think it is about the inclusion of AR that makes this work so engaging for young people?
KM: I think there are a few things, really. For one, it’s an opportunity (or an excuse!) for young people to utilize the devices, like iPads, they tend to most enjoy using and are often desperate to be using in class as it is.
I think there’s still a very novel aspect to the technology itself, too. So even though the technology has been around for a little while now and experiences are evolving, it doesn’t change the fact that actually seeing print come to life through AR is still incredibly exciting and engaging.
Particularly in education here in Australia, we are seeing increasing use of digital technologies in the classroom, but AR is still something many young people have never got to experience in their own lives – so it’s novel and inspiring for them.
But the really key thing for me is that AR experiences take a young person from passively listening to actively engaging with that story. That is something that they really, really enjoy.
One of my favorite examples from the book is when the reader explores the penguin’s story and is then invited to scan the Zapcode and ‘become’ the penguin. So with face tracking, they can actually transform into the character they’ve been experiencing, which really brings them closer to the story and helps them engage more effectively with the important safety messages that the character represents.
How AR is helping to teach cyber safety
JW: Is letting young people interact via a smart device intrinsic to the lessons you’re trying to teach in terms of online safety?
KM: I think it’s such a great way for kids to use the technology they love in a safe and constructive way within a classroom setting.
It links back to key themes tackled in The Surfing Penguin, but it also opens the door for further interactions at home with families. For example, being able to experience this via a smart device is an opportunity for parents to see how technology can be used in a cool way by young people that isn’t a violent game or something inappropriate. I feel that’s really important and that my book is quite unique in being able to do that. Compared to other books dealing with cyber safety topics, my AR-powered book actively demonstrates young people safely using the very technology that the book is talking about.
Something that Marc implemented with ZapWorks Studio that’s really emblematic of this is age-gating the social share function when readers are invited to take a selfie using the book’s face filters. You can save a photo to your phone but attempting to share will ask for the user’s age – making both young people and their carers aware of the permissions and restrictions associated with sharing content online. So I guess that’s a way of my book walking the walk, not just talking the talk!
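The save-versus-share rule Kim describes is simple enough to sketch. The snippet below is a purely hypothetical illustration of that age-gating logic in Python (the real experience was built in ZapWorks Studio; the function name, return values, and the threshold of 13 are all assumptions, not details of Marc’s implementation):

```python
# Hypothetical sketch of the age-gated share flow described above:
# saving a selfie locally is always allowed, but sharing prompts for age.

MIN_SHARE_AGE = 13  # assumption: a typical social-platform minimum age

def handle_selfie(action, user_age=None):
    """Return what the experience should do for a 'save' or 'share' request."""
    if action == "save":
        return "saved-to-device"           # no gate on saving locally
    if action == "share":
        if user_age is None:
            return "prompt-for-age"        # ask before any share proceeds
        return "share-allowed" if user_age >= MIN_SHARE_AGE else "share-blocked"
    raise ValueError(f"unknown action: {action}")

print(handle_selfie("save"))               # → saved-to-device
print(handle_selfie("share"))              # → prompt-for-age
print(handle_selfie("share", user_age=9))  # → share-blocked
```

The point of the design is visible even in a sketch this small: the friction is placed on sharing, not on creating, which mirrors the book’s message about thinking before posting.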
“ …the really key thing for me is that AR experiences take a young person from passively listening to actively engaging with that story. That is something that they really, really enjoy.”
– Kim Maslin, Digital Technologies Educator
JW: So is educating parents and families a core part of the book’s message? Using AR to showcase constructive uses of smart devices with sensible restrictions, rather than just taking devices out of the hands of young people?
KM: Definitely – I think a lot of parents can understandably feel overwhelmed by the technology that young people have access to, so it can be hard for them to draw up sensible guidelines around their children using smart devices in the way you would in other circumstances.
For example, parents aren’t likely to let their children ride their bicycle down the middle of a street at 2am, yet many do not remove a smart device from a child’s room at night, where they could be interacting with strangers at any hour. The risks can be just as real, but they’re harder to process when you’re not informed about the technology involved. So there can be some disconnect there in terms of understanding how exactly young people can use their smart devices.
I think it’s really important for parents and family members to be involved with young people in this learning process because a lot of online safety issues can be overcome, or at least mitigated, by empowering parents to make informed decisions while their child is at primary school age. Establishing a degree of control over their smart device and setting clear rules about its use is key, so using AR directly connects to that.
What I really hope is that this book is an approachable way for parents to learn safety rules themselves and that way, also embrace positive uses of devices.
JW: So it’s a collaborative learning curve with parents learning alongside their children?
KM: It is and that’s what initially drove me to start working on this series of AR-enabled books. So even though the books have largely been taken up by teachers in the classroom, they do feature reflection questions at the end to guide discussions. So wherever people are engaging with the book, there’s always a space to get everyone thinking and talking about the whole online landscape.
Using ZapWorks Studio to create engaging experiences
JW: So you had some assistance with creating this project in ZapWorks Studio – how have you found creating with ZapWorks previously?
KM: My first experience with ZapWorks was also my first ever interaction with creating AR, and I found it really positive! I started out using ZapWorks Designer and it was super easy to use, particularly as it followed a lot of the conventions I was familiar with from other creative tools, such as Photoshop or Canva.
With my latest book, Marc used ZapWorks Studio to implement a lot more complex and exciting AR experiences, such as mini-games, online name generators and of course, leveraging the face tracking technology for the face filters. So it was a bit more sophisticated this time around, moving on from the videos and slideshows you’d have seen in The Tweeting Galah.
JW: How did that creative relationship with Marc work – how was it communicating your ideas to an AR designer?
KM: We worked predominantly over email – I showed him the stories and explained what I wanted to achieve and he just made it happen with ZapWorks Studio! I wanted to be really creative with it and make something young people would have fun interacting with and Marc really got that.
“…they aren’t just writing an answer to ‘how does the character feel?’, they’re experiencing it and actioning that through AR. That builds empathy and a deeper connection with the characters.”
– Kim Maslin, Digital Technologies Educator
JW: So are you planning to build on that further with ZapWorks Studio?
KM: Because of the success of the penguin face filter we implemented in the book, we’ve actually been using ZapWorks Studio for spin-off projects too. We’ve gone and created face filters for all of the animals in the books, so we’re going to implement that into interactive trading cards, inspired by Pokémon and football sticker collecting. So that’s going off to the printers soon and I’m really excited about that.
Both of my previous books are available on my website, but recently they’ve been made available on a lot of popular worldwide online marketplaces, so we can reach a market beyond Australia. With the success it’s seen in primary schools in particular, I’m keen to push this further as a learning tool and linking it to my free online lesson resources for teachers, to keep building up the community around the books. So I’d be looking to include these AR-enabled trading cards with the teaching packs so they can be used as classroom rewards, or for early finishers – experiences to enjoy and have a bit of fun with.
But I’d also really like to see these being leveraged as a learning opportunity. For example, if young people have experienced a story with a character having to cope with cyberbullying, being able to take a selfie as that character and making an expression just like the character would have felt, is a really engaging way for young people to take that message on. So they aren’t just writing an answer to ‘how does the character feel?’, they’re experiencing it and actioning that through AR. That builds empathy and a deeper connection with the characters.
JW: And finally, where can you find out more and grab a copy of the book?
As the challenges facing young people evolve in our digital world, so too must the way we reach out and engage with them. AR is a fantastic way to do that – illuminating print at home or in the classroom, leveraging the smart devices young people are comfortable interacting with for a positive purpose. Getting started with AR creation is really easy with ZapWorks – whether it’s with our intuitive drag n’ drop Designer toolkit or our powerful, feature-rich Studio product – no code required and with a 30-day free trial for you to see for yourself. Our huge range of handy documentation is on-hand to get you started, including detailed walkthroughs and in-depth video guides. There’s also our friendly ZapWorks community over on our Forum – a great place to get feedback, be inspired by our creators or reach out to our expert support staff.
Video: The ‘Layered Report’, produced in collaboration by Zappar, Neuro-Insight and Mindshare UK, showcases research that demonstrates AR’s power to aid memory retention and heighten visual attention – key facets of learning.
A futuristic image shows an interventional radiologist doing a procedure in a model patient’s blood vessels with the VR rendering of the vessels they are seeing in their headset
– Interventional radiologists use tiny catheter tubes to operate inside blood vessels
– They are guided by X-ray imaging displayed on giant screens in special operating suites
– Doctors and patients are exposed to radiation and have to wear heavy vests
– But new VR technology developed by University of Washington researchers lets doctors see inside the veins in 3D and in real time, immersing them in the vasculature
– If approved, it would allow them to ditch the X-rays, equipment and lead vests, take the tech on the road, and cut costs, researchers say
New virtual reality technology lets doctors see the insides of patients’ veins in 3D while they operate.
Interventional radiologists use imaging to navigate through tiny blood vessels and precisely treat everything from clots to strokes to cancer.
But the delicate, non-invasive procedures require them to use X-ray imaging to guide their tools, exposing patients and themselves to radiation.
A new catheter developed at the University of Washington, decked out with electromagnetic sensors, feeds real-time imaging from inside blood vessels to a virtual reality headset, making the procedures safer and more precise than ever before.
Virtual reality has generated a fair amount of excitement in the medical world, largely because it gives medical students, or surgeons about to undertake complex procedures, an opportunity to practice on something more responsive than a cadaver and less high-stakes than a living person.
For radiologists, the trendy tech has a more direct application.
Interventional radiologists use a thin flexible tube to guide tools through blood vessels and fix cardiovascular problems.
Their pathway through the complex network of veins and arteries is plotted in 2-D images, typically via X-ray imaging displayed on giant screens in the special operating room.
To make this possible, the X-ray machine has to remain on and over the patient’s body for the duration of the procedure.
X-rays emit relatively low doses of radiation, but extended and repeated exposures can raise cancer risks.
And radiologists are in the rooms designed for these procedures, called angiography suites, for hours a day.
They have to wear lead vests to protect their organs from radiation exposure, but that makes the hours doctors spend on their feet for the procedures even more grueling.
Dr Wayne Monsky, a University of Washington interventional radiologist, knows exactly how difficult that is.
‘It’s an occupational hazard we all know of to be standing there wearing lead all day long,’ he says.
Dr Monsky developed severe cervical spinal stenosis from doing so, and may eventually have to have surgery to relieve his back pain.
But he may not always have to wear a lead vest to work, thanks to his own team’s invention.
Electromagnetic sensors attached to catheters capture the size and shape of the blood vessels as the tool travels through them.
This data is captured by VR software that renders it as a 3D image in the radiologist’s VR goggles.
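As a rough illustration of that sensor-to-render pipeline, the sketch below down-samples a stream of hypothetical catheter-tip positions into a centerline that a renderer could extrude into a 3D tube. It is purely illustrative: the sample format, the 1 mm spacing, and the jitter threshold are assumptions, not details of the UW team’s actual software.

```python
# Illustrative sketch (hypothetical, not the UW team's implementation):
# turn a stream of electromagnetic sensor samples -- tip position plus an
# estimated vessel radius -- into a sparse centerline for a VR tube renderer.

from dataclasses import dataclass
from math import dist

@dataclass
class SensorSample:
    x: float        # catheter tip position (mm)
    y: float
    z: float
    radius: float   # estimated vessel radius (mm)

def build_centerline(samples, min_step=0.5):
    """Down-sample noisy tip positions into a vessel centerline.

    A new point is kept only once the catheter has moved at least
    `min_step` mm, so sensor jitter does not clutter the rendered tube.
    """
    centerline = []
    for s in samples:
        p = (s.x, s.y, s.z, s.radius)
        if not centerline or dist(p[:3], centerline[-1][:3]) >= min_step:
            centerline.append(p)
    return centerline

# Simulated stream: the catheter advances 1 mm per sample along the x axis.
stream = [SensorSample(float(i), 0.0, 0.0, 2.5) for i in range(50)]
line = build_centerline(stream, min_step=2.5)
print(len(line))  # → 17 points retained from 50 raw samples
```

The renderer would then sweep a circle of each point’s radius along this polyline to produce the vessel walls the radiologist sees all around them in the headset.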
‘Imagine being in Fantastic Voyage’ – the 1960s sci-fi movie about a team of scientists who are shrunk – ‘and how they traveled in the blood vessels of the body, and it was all around them – it’s like that,’ says Dr Monsky.
‘The beauty of this approach is it really puts you inside the body to visualize even the smallest parts.’
The technology itself is far smaller – consisting of just the special catheter and the VR goggles – eliminating the need for a special angiography suite, the X-ray machines and the giant screens.
Plus, it’s far cheaper, and even portable.
If it gets Food and Drug Administration approval, doctors could ‘use this technology to provide interventional radiology procedures away from big centers in big cities and out to rural and under-served areas,’ says Dr Monsky.
You know those incredible videos full of dominoes and balls and levers and rails that all combine together perfectly to create a symphony of motion and reaction? They’re called Rube Goldberg machines and Gadgeteer lets you build them in VR with unmatched interactivity.
The closest thing I’ve seen to a game like Gadgeteer in VR so far would have to be either Fantastic Contraption or Bounce, but neither of those really let you build things to this kind of scale. The level of intricacy is just incredible here.
According to an email from Peter Kao, Co-Founder and CEO of developer Metanaut:
“Gadgeteer is a physics-based VR puzzle game where you build chain reaction machines to solve fun, intricate puzzles. Your machines will use gadgets to launch, bump, twist, and turn—creating chain reactions that may even end up tearing apart the fabric of space-time. It is a game that’s designed for players who are: Tinkerers, Builders, New-comers to VR who want: Replayability, Depth, and Well-thought-out design.”
Fortunately Gadgeteer also includes a wide variety of game modes so it’s not just a puzzle-focused sandbox for tinkering. There’s gonna be a story mode with 60 different physics puzzles to solve that follows a plot about a mad scientist and her daughter, as well as a sandbox mode to create the machine of your dreams with reportedly zero restrictions. With over 50 unique gadgets there is a lot to do and see in this one.
Gadgeteer hits SteamVR with support for HTC Vive on April 23rd and within a few weeks from then will support Oculus Rift, as well as both Index and Oculus Quest in the future. You can find out more about Gadgeteer on the game’s official Steam page.
Ghost Giant is, as we made clear in our full review, a remarkable game that handles the subject of depression with skill and care. It’s not a game that constantly discusses depression. Indeed, it initially appears to be little more than a standard cutesy VR adventure; and even after the struggles of the main character’s mother become apparent, jokes are made and most characters remain oblivious to her situation.
This nuance is precisely why it works so well.
The initial Ghost Giant prototype was set in a small American town, and the main character, Louis – at this point named Ned, and a bat rather than a cat – was being bullied. A radical change in direction (and setting) quickly took place once a writer was on board, however.
Sara Elfgren, an established author and screenwriter making her videogame debut with the Ghost Giant script, explains where the idea came from. “I read this article about children who grew up with addicts as parents, and how they very often have to take on responsibility from a very early age,” Elfgren says. “I was very moved by one of the stories in that article about a young boy who was taking care of his siblings when he was just a very small child himself. That made me think about Ghost Giant […] I got thinking what the story could be, and what he was hiding”.
The setting, meanwhile, changed from a small American town to a French one, as the color scheme reminded Elfgren of Les Parapluies de Cherbourg [The Umbrellas of Cherbourg], a 1960s musical.
“I thought, when we tried putting together a mood board of French picturesque small towns, and old cars and clothing, it just really clicked better with the story,” says Olov Redmalm, Creative Director and Art Director. “A bit more melancholy, calm, cute setting. The colors got a little bit warmer, and we suddenly had more countryside instead of just rowdy streets”.
Whereas the few mainstream games that tackle mental illness, such as Hellblade, tend to put the player in the shoes of the person with the illness, Ghost Giant offers an outside perspective. If you are living, or ever have lived, with a loved one with severe depression, there are several story beats that will slam into your emotions with immense force.
When away from home, Louis will hide his feelings from the world. At home, his best efforts to cheer his mother are thwarted. She upsets him not through malice, and certainly not through intention — but through indifference and fatigue. As is so crushingly common in the real world, Louis blames himself for her condition. The moment he vocalizes this is a heartbreaking one. Reach out your virtual hands to comfort him, if you like; it won’t help.
While Louis’ mother isn’t on screen for most of the game, and you do not control or interact with her, her character is given depth. In one memorable scene, the player is treated to household pictures that recall happier times.
“I fleshed out Pauline [the mother] as a character in my mind, and also in the character bible, so I knew exactly what she was like when she was not depressed,” explains Elfgren. “I think that’s also very important, if you write a depressed character. We knew the story of her when she was not depressed too, so she is this whole character. She’s not just her depression, and I think that’s very important”.
It’s not always a cheery game, then, but it is precisely for this reason that it can do some good. Hopefully, there will be people who play this game and realize that they’re not alone, or they’re not to blame, or that maybe things can improve if they reach out and ask for help from others. It even struck a chord with the development team, something Redmalm is keen to discuss.
“The team immediately clicked with the script, and everyone was very, very enthusiastic about telling this story,” Redmalm recalls. “There were many occasions where we would stand together and look at an animation. I remember one occasion specifically where we were talking about the animation of Pauline in bed, and we were discussing how she was supposed to lie. Like, is she facing Louis, or is she facing the wall, and how would I lie, and suddenly we’re inadvertently beginning to talk about our own experiences without talking about it directly. It was actually kind of beautiful […] it really helped the team spirit to have this topic that everybody felt very passionate about, and wanted to do justice, and really give it our best and treat it with respect”.
“We didn’t want to treat it as a cheap twist, you know,” agrees Elfgren. “That was very important for all of us. And also, I consulted a child psychologist when I wrote it. I told her the story, and I read her the dialogue in the scenes that concerned Louis and his mother. I had done a lot of research on my own, and I had my own experiences and stories that I’d been told; but I felt it was important to get a professional’s view on it as well. She said that it was very accurate. I mean, every story of depression and mental illness is different, of course — there is no ‘This is the way it is’ – but she felt that it was an accurate portrayal”.
An accurate portrayal it is, regardless of the animal characters and toy-like environments. The game ends on a positive note, without simplifying the issue or pretending that there’s a magical salve for depression. Overall it is, as Elfgren tells me she felt at the time, an important story to tell. She goes on to describe Ghost Giant as what it truly is: a fairytale. A wonderful, beautiful, and perfectly fitting description.
“This is not a children’s game, but of course young people and children can play it. But I think that in Sweden it’s not taboo to handle heavier subjects [in family stories]. And also I think that if a young person plays Ghost Giant and is not aware of depression, for example, I think they probably accept the story on their terms. With their experiences. Whereas an adult, or somebody who has dealt with these things, will experience it in a different way. That’s the power of the fairytale, I think. […] You’re not shut out if you don’t understand everything, you just understand it differently”.
VRTK is a VR framework designed to let developers add interactivity to their apps and games without coding the physics of those interactions from scratch. This month, the beta for version 4 was released. Version 4 is a complete rewrite of the framework, bringing numerous improvements, including a more modular and more hardware-agnostic design.
VRTK’s Humble Origins
In April 2016, Harvey Ball got his HTC Vive. But when he wanted to develop for it, he noticed that there was no general framework for VR interactions. From his bedroom in the UK, he decided to make one, which he called the SteamVR Unity Toolkit. It let developers easily add teleportation and object grabbing to their games.
The toolkit quickly became the most popular Unity VR framework, with thousands of developers using it. It had become so popular that, around the launch of the Oculus Touch controllers, Facebook sent Harvey a free Rift and Touch so he could add support. Now cross-platform, the toolkit was renamed VRTK.
As an open source project, the community added many features, such as climbing, new grabbing mechanics, and archery physics.
VRTK was starting to show fundamental architectural problems, however. Harvey had originally built it on top of the SteamVR Plugin; the Oculus integration, for example, was just an abstraction layer. If the SteamVR Plugin had a major update (and it eventually did), VRTK would break, and supporting future hardware would require ever more complex abstraction layers. It became clear that VRTK needed to be rewritten from the ground up to be easier to use, more modular, and truly hardware agnostic.
Such an enormous task would require hiring developers, and that requires money. Harvey tried launching a Kickstarter campaign, but it failed to meet its goal. Some even accused him of trying to “cash in”. Next, he tried Patreon, but that too failed to generate the level of funding needed, Harvey claims. He also claims that Valve Corporation declined to support VRTK because it was considered a competitor.
Throughout 2017, Harvey had to pour his own money into keeping up development on VRTK. The overwhelming task of documentation, tutorials, and supporting developers slowed the development to a crawl. Worse, VRTK was being blamed for enabling the many “asset flip” titles flooding the Steam marketplace.
In December, Harvey decided he had had enough and stopped development of VRTK. The lack of funding and the scale of the negativity had taken their toll.
Oculus To The Rescue
In January 2018, Harvey received an email from Oculus VR, LLC. They had heard of the demise of VRTK and wanted to provide the funding necessary to continue development. Harvey was skeptical, suspecting that Oculus would want to compromise the principles of VRTK or make it exclusive.
His skepticism turned out to be unfounded, however. Oculus offered a six-month grant with no conditions attached. With this funding, Harvey was able to continue development of VRTK, and so v4 was born.
The funding was used to take on dedicated community member Christopher-Marcel Böddecker as a full-time developer.
v4: A Rewrite
VRTK v4 is a completely hardware-agnostic rewrite. In fact, it’s theoretically now engine agnostic too, so it could even support Unreal Engine in the future. Instead of a single monolithic script as in v3, v4 uses prefabs containing simple scripts. Whereas v3 often required custom code to achieve seemingly simple tasks, v4’s modularity means that even something like a pump-action shotgun can be built just by configuring existing components.
This new modularity also means that v4 could support augmented reality devices in the future.
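The component-driven design described above can be pictured with a short sketch. To be clear, this is purely illustrative Python, not actual VRTK code (VRTK is a C# Unity framework), and every class and event name here is hypothetical. The idea it shows is the one v4 embraces: behavior such as a pump-action shotgun emerges from wiring small, single-purpose components together via events, rather than from one custom monolithic script.

```python
# Hypothetical sketch of the v4-style design: small components that only
# emit and react to events, composed purely through "configuration"
# (connecting outputs to inputs) rather than custom gameplay code.

class Event:
    """Minimal observer pattern: components connect to each other's events."""
    def __init__(self):
        self._listeners = []

    def connect(self, fn):
        self._listeners.append(fn)

    def emit(self, *args):
        for fn in self._listeners:
            fn(*args)

class SlideDetector:
    """Fires its `pumped` event when the fore-grip is pumped back and forward."""
    def __init__(self):
        self.pumped = Event()

    def slide(self, direction):
        if direction == "back_then_forward":
            self.pumped.emit()

class ChamberLoader:
    """Loads a shell whenever its `load` input is triggered."""
    def __init__(self):
        self.loaded = Event()

    def load(self):
        self.loaded.emit("shell chambered")

# The "configuration" step: no new logic, just connections between components.
slide = SlideDetector()
chamber = ChamberLoader()
slide.pumped.connect(chamber.load)

messages = []
chamber.loaded.connect(messages.append)
slide.slide("back_then_forward")
# messages == ["shell chambered"]
```

In VRTK v4 itself, this kind of wiring is done in the Unity editor by dropping in prefabs and linking their events in the inspector, which is why seemingly complex interactions can be assembled without writing code.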
The old video tutorials, which became outdated quickly, have now been replaced with VRTK Academy, a full documentation wiki maintained by both VRTK developers and the community.
While v4 is in beta, the VRTK team claims it isn’t buggy and recommends that developers use it rather than v3 for current and future projects. It can be downloaded as a zip file from GitHub.
With the Oculus grant lasting only six months, VRTK is still in need of funding. If you want to support the project, you can contribute to its Patreon.
Microsoft researchers have created a tool that could make VR more accessible to users with impaired vision.
Called SeeingVR, the toolkit is designed to address a problem affecting users with low vision: vision that cannot be corrected with prescription glasses.
As reported by Engadget, Microsoft researchers teamed up with Cornell Tech and Cornell University to create SeeingVR, a kit intended for Unity developers to integrate into their VR projects. SeeingVR includes 14 different tools that address a variety of vision issues.
The team’s purely visual tools include a magnification window, bifocal magnification, brightness and contrast enhancements, edge enhancement, an overlay that shows a miniature outline of your peripheral vision, configurable text (color, boldness, and size), a depth measurement tool, a ‘guideline’ tool for better user orientation, a whole-scene recoloring tool, and an object highlight tool, which is similar to edge enhancement but targets specific objects in the scene.
The team also included audio-based enhancements, including text-to-speech, audible object descriptions, and AI- and human-assisted object descriptions leveraged from existing mobile apps such as VizWiz and Seeing AI.
“A user can select, adjust, and combine different tools based on their preferences,” the team’s research paper states. “Nine of our tools modify an existing VR application post hoc via a plugin without developer effort. The rest require simple inputs from developers using a Unity toolkit we created that allows integrating all 14 of our low vision support tools during development.”
The researchers’ evaluation included 11 participants with low vision; the team concludes that SeeingVR “enabled users to better enjoy VR and complete tasks more quickly and accurately.”