VR & AR World 2016: takeaways and potentials for the next generation learning environment

At the recent VR & AR World 2016, held at the ExCeL centre in London, I was excited to explore the vibe and experiences around the cutting-edge technologies that exhibitors from some 50 countries came to share. With established companies such as Meta, HTC and Epson showcasing their latest developments in Augmented, Virtual and Mixed Reality, there was plenty to see. And while some gimmick-led content remained, the majority of exhibitors aimed to show what added value these technologies can offer. Informed by my visit, I will concentrate on their potential in the next generation learning environment.

  1. Sense of presence and the impact on the learner’s identity:

The latest Toyota C-HR VR experience was showcased at the exhibition: an experience in which you can digitally discover and personalise this specific vehicle before its release. It was the fact that I had not yet tried getting into a car in VR that sparked my curiosity. Throughout the experience, I was able to walk around the car, open the doors, sit inside, and change the colours and some of the interior settings. What astonished me, however, was the feedback I got from one of the event’s visitors watching the demo. He said: “yes, it is definitely important to open the door to get off the car”. I stopped for a second to think about his comment.

Sure enough, I felt I was actually in the car: I opened the door when I had finished experimenting and wanted to get out. If you look at the chair in the picture below, it is obvious that I could have got off the chair without needing to open any door.


Toyota C-HR VR experience


It is funny, and yet this is exactly what makes VR so powerful, and one of the main reasons I am so passionate about its potential. The sense of presence that VR enables can make you feel, psychologically, that you occupy a physical space within the virtual environment while your body is located somewhere else entirely. VR is not only bringing us experiences we never thought we could have, or letting us visit places that would otherwise be dangerous or costly to reach; it is enriching our own life experiences. Neurally speaking, it is contributing to shaping our identity and who we really are: “who we are depends on where we’ve been”.

This is exciting because with VR the experiences we can try become limitless, and potentially more unique as they become personalised to how each of us interacts with them.

I believe this is going to have a tremendous impact on learners’ experiences and their understanding of the world, as they will be able to reach things and explore places far beyond their physical locations. Couldn’t this transform the next generation learning environment, overcoming the limitations of travelling to learn about specific places or periods of time, and enabling learners to gain different perspectives?

Toyota used the HTC Vive in this demo, a favoured piece of VR kit that dominated the event floor thanks to its tracking accuracy, faster response and lower latency compared with other VR headsets.

In an earlier blog post, Matthew Ramirez talked about the techniques and tricks the Vive deploys to enhance immersion and the sense of presence in VR, and to overcome problems seen with other headsets, such as nausea.


  2. Virtual Reality and Mixed Reality in Maintenance and Engineering:

Taking this example further, Epson, Vuzix and Meta have been developing their own smart and mixed reality glasses for the automotive and construction industries. Colleges and vocational courses could benefit from these technologies the most, particularly in smoothing the transition between college, apprenticeship and real-life jobs. With VR, AR and MR, students could be better prepared for solving real-life problems, developing many of the required skills by working with cutting-edge tech in safe and constructive learning spaces.

This in itself could empower learners, and particularly girls, to move into engineering and technology careers, where a lack of confidence has contributed to the shortage of skills required for STEM-related jobs, as reported by the Guardian and BBC News. Stimulating experiences that engage students in problem-solving and challenging activities using VR and AR could help unlock learners’ creativity in a flexible and safe environment. They are able to make mistakes and get feedback, which could increase girls’ confidence in their abilities and influence their career choices.

This video on using Microsoft HoloLens mixed reality in architecture shows something I believe is revolutionary: it will play a vital role in rethinking design, construction and engineering for the next generation of much-needed engineers and designers.


  3. Virtual Reality and Storytelling:

Another point of attraction for me on the event floor was more about the content, and relatively simple technology. The London Stereoscopic Company (LSC), which has a long reputation for publishing 3D stereoscopic images from Victorian times to the modern day, is now adopting VR to bring its digital 3D stereoscopic content and films to life in a more exciting way, making them available to anyone with a smartphone. It brings historical periods, books, collections of original images and VR films to us using a small, low-cost VR kit, as you can see in the image.

I immediately thought that this is a great medium for storytelling, one that could make telling a story in a classroom more fun and flexible.



You might argue that with this sort of simple kit you do not get the same high-end experience as with the more expensive VR headsets. However, I believe that for entry-level users, or even children, it could be a great way to bring information that is otherwise only available as text in books or on cards to life through VR and digital 3D content: “When this content is viewed in stereo the scene leaps into 3-dimensional life”, as Brian May describes it. I enjoyed having access to their fascinating world of stereoscopy through the cards; it made me feel like a child again.

This is the OWL VR Kit, a viewer for any 3D content and VR films, which comes as a box designed with high-quality focusable optics.

Kits like this, and the Google Cardboard, are allowing more people to have at least a first experience of VR, increasing the number of people who are savvy and educated about what VR could potentially offer them.

My takeaway from this example is that the technology alone cannot guarantee a fun and interesting experience. What we need are more compelling stories: stories that trigger emotions and that sense of wonder we all had as children. These are the things that make VR valuable as a storytelling medium.


  4. Haptic feedback in VR and what is next?

At the event, I tried for the first time a form of haptic feedback, using a hand-held controller that allows you to feel a degree of force while interacting with virtual objects on the screen. This SensAble Phantom haptics system is one of the machines VR and AR companies are experimenting with to explore what kind of realistic feedback a doctor could receive when puncturing a patient’s skin. The system acts as a handle to virtual objects, providing stable force feedback for a single point in space.
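Single-point force feedback of this kind is usually implemented as a simple penalty (spring) model: the further the stylus tip penetrates a virtual surface, the harder the device pushes back. A minimal sketch of that idea; the function name and the stiffness value are illustrative assumptions, not any device's actual SDK:

```python
# Penalty-based force feedback for a single interaction point.
# Stiffness value is an invented example figure.

def contact_force(tip_z_mm, surface_z_mm=0.0, stiffness_n_per_mm=0.5):
    """Return the force (N) pushing the stylus tip out of a virtual
    surface. Zero when the tip is above the surface (free space)."""
    penetration = surface_z_mm - tip_z_mm  # how far the tip sank in
    if penetration <= 0:
        return 0.0                          # free space: no resistance
    return stiffness_n_per_mm * penetration  # Hooke's law: F = k * x

print(contact_force(tip_z_mm=2.0))   # above the surface: 0.0
print(contact_force(tip_z_mm=-3.0))  # 3 mm into the surface: 1.5
```

The deeper the tip penetrates the virtual skin, the harder the device pushes back, which is what creates the sensation of a solid boundary.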

Hand-held controller


I imagine this is going to be essential to any VR or AR experience. We all use our hands intuitively to reach out for things, to interact with things and people, and to feel things. In this example of using the controller in remote surgery training, haptic feedback contributed to creating an illusion that made trainees feel more present in the remote operating room. This makes the experience more believable for surgeons and trainee doctors, enabling them to understand and perform tasks far more effectively and safely. For me, as haptic and tactile technologies advance and become integrated into more of these experiences, particularly when guided by cognitive neuroscience research on how we interact with the world, the quality of education with VR will keep rising. Medical students will then have great opportunities to experience and operate in more realistic surgical situations, safely and constructively. Haptic feedback could also better inform students in the decision-making process.


Surgery in VR with Haptic


The opportunities for using this in education, to perform tasks more efficiently and effectively, are great. Chris Chin of HTC Vive expects, in his interview, that education and medical experiences with VR are coming next in 2017; will that be accompanied by advances in haptic technology? Shouldn’t we prepare learners for these new delivery methods, or at least equip the current learning environment to deliver and support these sorts of experiences?

Here are a few examples of our work at Jisc in Augmented Reality and Virtual Reality to immerse, engage and educate learners. If you are interested in embarking on your own project or need help, we’d love to share experiences with you.


Research and Development at SIGGRAPH 2016

An exciting part of the SIGGRAPH 2016 conference was the experiential area allowing attendees the opportunity to interact with a range of emerging technologies – from Virtual/ Augmented Reality to haptics and immersive realities. Below are my top five picks, although I could easily have doubled the list.

Automated Chair mover

Furniture that learns to move through vibration


An innovative way of changing a room layout, using small vibration bursts to reposition the pose and location of furniture. Controlled over wi-fi; imagine the potential time savings in classrooms and lecture rooms!

Ratchair: furniture learns to move itself with vibration

Tatyana Parshakova, Minjoo Cho, Alvaro Cassinelli, Daniel Saakes



Redirected Walking for AR/VR

Redirected walking


By manipulating user camera view on HMDs (Head Mounted Displays), this application can redirect pedestrians to points of interest without competing for space. No more fighting for the best view of the T-Rex in museums!

Graphical manipulation of human’s walking direction with visual illusion

  • Akira Ishii, Ippei Suzuki, Shinji Sakamoto, Keita Kanai, Kazuki Takazawa, Hiraku Doi, Yoichi Ochiai


Mask User Interface for manipulation of Puppets

VR based manipulation of animatronics


Using depth-perception sensors to control body movements, and a separate lip sensor (think Darth Vader’s mouthpiece) to manage mouth action, this VR setup brings a new dimension to animated puppetry.

Yadori: mask-type user interface for manipulation of puppets

  • Mose Sakashita, Keisuke Kawahara, Amy Koike, Kenta Suzuki, Ippei Suzuki, Yoichi Ochiai



Haptic suit

VR integrated Haptic feedback suit


Integrating localised haptic feedback into VR game experiences (object contact, explosions and so on), the Synesthesia Suit adds to the visceral experience by allowing the user to ‘feel’ the sound and music as if it runs through the body.

Synesthesia suit: the full body immersive experience

Yukari Konishi, Nobuhisa Hanamitsu, Kouta Minamizawa, Ayahiko Sato, Tetsuya Mizuguchi


Motion Predictor


Motion predictor


Using complex physics algorithms, Laplacian Vision allows the user to better predict an object’s trajectory by displaying trajectory information in the user’s field of view.

Laplacian vision: augmenting motion prediction via optical see-through head-mounted displays and projectors

  • Yuta Itoh

Enabling a sense of presence in Virtual Reality

Virtual Reality has emerged from the ashes of the early ’90s as a technology with the potential to revolutionise our everyday lives. However, some major challenges, both technical and aesthetic, still stand in the way. In a series of short blog posts I will examine the major flaws preventing VR from becoming a mainstream technology and propose possible solutions. Let us start with the technical restriction of replicating realistic movement and a sense of presence.


Currently, the majority of VR experiences and games offer the user a seated experience in which movement is controlled with a gamepad. This can often lead to a disconnect between the user’s body and the movement viewed in the VR headset, which in turn can contribute to a lack of presence in the game space and, perhaps more importantly, to feelings of nausea. The HTC Vive looks to overcome this problem with room-scale experiences that at least give the user the ability to walk around small areas. Multi-directional treadmills can also provide realistic walking simulation, although they work by cancelling the user’s motion to mimic acceleration (equivalent to walking on slippery ice), so there is a mismatch as the body attempts to translate this new movement, through muscle memory, into the one it has been accustomed to for years (walking).

But what happens when you are in a VR experience with a bigger footprint? Exploring truly realistic VR on foot, such as large buildings or even cities, creates a major issue. Teleporting, or returning to a controller, is a lightweight solution, but it totally shatters immersion and your mind becomes immediately disengaged. Redirected walking is a method developers use to trick the mind into thinking the user is walking in a straight line when in fact they are moving on a slightly curved or different path in the real world. This enables up to a 26% gain or 14% loss in distance travelled, or a 49% gain or 20% loss in rotation (yaw), without the user detecting it. The body perceives movement in a certain direction, but without visual cues it is less accurate at measuring the distance or rotation travelled. So we are essentially hacking our brain and exploiting loopholes in our circuitry.
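Those gain figures translate directly into code: the renderer simply scales real head turns and steps before drawing the next frame. A toy sketch of the idea, with function names of my own invention rather than any headset SDK:

```python
# Redirected walking gains: scale physical motion before rendering.
# A gain of 1.49 is the reported 49% undetectable rotation bound;
# 1.26 is the 26% undetectable translation bound.

def virtual_rotation(physical_degrees, rotation_gain):
    """Virtual turn rendered for a given physical head turn."""
    return physical_degrees * rotation_gain

def virtual_distance(physical_metres, translation_gain):
    """Virtual distance rendered for a given physical walk."""
    return physical_metres * translation_gain

# A 90-degree real turn shown as ~134 degrees in the headset
print(round(virtual_rotation(90, 1.49), 1))   # 134.1
# 10 m walked in the room shown as 12.6 m in the virtual world
print(round(virtual_distance(10, 1.26), 1))   # 12.6
```

Within those bounds the user steers and walks further in the virtual world than in the room, without noticing the manipulation.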

The paper Estimation of Detection Thresholds for Redirected Walking Techniques posits that a technique called curvature gain can allow the user to walk in a circle with a radius of 22 metres in a real room while travelling in a straight line in the virtual world. This opens up huge opportunities for expanding virtual environments in ways that were previously impossible, while retaining the user’s sense of locomotion.
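As a quick sanity check of what that threshold buys you, one full physical lap of a 22-metre-radius circle can be rendered as an unbroken straight virtual walk equal to the circle's circumference:

```python
import math

# Curvature gain worked example: the 22 m radius from the detection
# threshold yields this much "straight" virtual walking per lap.
radius_m = 22.0
straight_virtual_walk_m = 2 * math.pi * radius_m
print(round(straight_virtual_walk_m, 1))  # 138.2
```

Roughly 138 metres of apparently straight walking per lap, repeated indefinitely, which is what makes virtual buildings far larger than the tracked room feasible.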


Haptic response

Kevin Kelly, co-founder of Wired magazine, commented that “we are moving from an internet of information to an internet of experiences”. VR can also incorporate other physical stimuli through passive haptic feedback, where physical objects, such as a torch or sword, are linked to visual cues in the virtual world. These can be further enhanced by adding other sensory properties such as temperature or texture. However, this is not scalable, as every virtual prop needs a physical equivalent.

Remarkably, a method called redirected touching can provide a solution: a single real object placed in the physical environment is remapped in the virtual space to assume several different shapes. This is achieved by introducing a discrepancy between hand movement in the real world and in the virtual one, evident in the example below. Again, the visual reference tricks the user into believing that the physical object has many more sides than it actually does.



Taking this one stage further, The Void has set up situated installations (think Laser Quest with VR) ranging from ancient temples to futuristic alien worlds, where the physical environments are augmented in VR with digital equivalents. At one point you noticeably feel the change when moving from an indoor research-laboratory corridor to a suspended walkway outdoors, feeling the breeze as you move along hundreds of feet above the ground, creating a sense of presence that is hard to imagine. The lack of photorealistic graphics does not detract from the user’s heightened sense of agency, facilitated by peripheral stimuli and the ability to interact with and touch the whole environment.



Of course, this particular setup is not scalable at the moment but you start to understand how by melding additional sensory elements to VR, immersion can be enhanced leading to a more convincing proposition. VR is by no means the finished article yet but the myriad of speed bumps that block its path to mainstream adoption are slowly being eroded by the ability to employ mind hacks, utilising innovative techniques to trick the user into a greater sense of presence.

How wearable EEG trackers can impact education

The 1982 film Firefox revolves around a plot to steal a Russian fighter jet that can be controlled by thought. Who would have imagined that just over 30 years later, the technology to turn this into science fact would be so readily available?

Jisc Futures Innovation Developer Suhad Al-Jundi testing the Emotiv Insight


Even though wearables have been in the news for the past few years, the majority of attention has so far focused on AR and VR implementations; but increasingly the quantified self, the ability to make sense of personal data, has gained traction. A year ago I came across a couple of examples (Emotiv and Neurosky) of companies that had developed brain-reading headsets that could analyse neural activity to determine the emotional state of users. Immediately my mind was full of ideas and possibilities for how this could benefit the educational space. Recently, I spent some time researching the Emotiv Insight, looking at possible use cases and examining the quality and types of analytics that could be extracted.

Some of my key thoughts are below:

Tracking analytics into other apps and providing intelligence on student engagement

Jisc has for some time been developing its learning analytics service for the sector, providing analysis of student data to help inform institutions on student progress and allow students to maximise their learning potential. Wearables such as the Emotiv can classify EEG data into identifiable neural patterns, suggesting whether users are engaged and excited or, by contrast, dissatisfied and stressed.


Performance metrics based on EEG brain activity

Although this is a provocative area in terms of the ethical considerations around personal data, it could potentially be used to measure the student experience, or the tools used to support future learning. For instance, a student’s engagement level could be measured while wearing a VR headset to help assess the effectiveness of the resource as a digital aid.
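To give a flavour of how such a score might be derived: one commonly described approach in the EEG literature is a band-power ratio such as beta / (alpha + theta). This sketch is not the Emotiv SDK; the function names, the sampling rate and the use of this ratio as an "engagement" proxy are all illustrative assumptions:

```python
# Hedged sketch: estimating an "engagement" index from a raw EEG
# trace via FFT band powers. Purely illustrative, not a vendor API.
import numpy as np

def band_power(signal, fs, lo, hi):
    """Total spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum[(freqs >= lo) & (freqs < hi)].sum()

def engagement_index(signal, fs=128):
    theta = band_power(signal, fs, 4, 8)    # drowsy/idling rhythms
    alpha = band_power(signal, fs, 8, 13)   # relaxed wakefulness
    beta = band_power(signal, fs, 13, 30)   # active concentration
    return beta / (alpha + theta)

# Synthetic one-second trace dominated by 20 Hz (beta) activity
fs = 128
t = np.arange(fs) / fs
trace = np.sin(2 * np.pi * 20 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
print(engagement_index(trace, fs) > 1)  # True: beta dominates
```

A real headset would stream noisier multi-channel data and need artifact rejection first, which is exactly the "noise" problem discussed below.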


Among the more extravagant claims for brain-reading wearables is that they can help focus attention and improve memory retention in education. Through learning games similar to brain-training apps like Lumosity, Neurosky’s suite of educational packages uses brain EEG activity to measure progress through mathematics, memory and pattern-recognition exercises. The main problem with non-clinical wearables at the moment is the amount of incidental “noise” that can distort the data: electrical signals generated by muscle actions such as frowning. Having said this, rapid innovations in hardware (such as adding more electrodes) have produced improved data, comparing favourably with earlier research studies using commercial technology (https://www.researchgate.net/publication/241693098_Validation_of_a_low-cost_EEG_device_for_mood_induction_studies).

However you interpret the accuracy and impact of these wearables on learning, contemplating other learning methods can often open up new perspectives, and as the quotation often attributed to Peter Drucker goes, “What gets measured, gets managed”. Once you have a baseline score, simply paying attention to it and analysing it can force you to think of ways to improve it.

Controlling Internet of Things (IOT) environments

The connected world is hugely popular at the moment. I have a Nest smart thermostat that constantly adjusts its heating regime, not only monitoring internal temperature but also analysing the weather, historical settings and whether I’m physically home (via motion sensors) to manage energy costs efficiently. My wi-fi-driven LIFX lights can be operated remotely from my Apple Watch, or, using the simple app IFTTT, I can create functions that control their behaviour based on the sunset or on my moving within a mile of my house.

Apple Watch controlling connected lights in home


As we enter a world where even water consumption becomes smart, the next logical step seems to be using your thoughts instead of an intermediary app or voice recognition as a control mechanism. Amazingly this is already happening with some high profile examples such as controlling a Tesla with an EEG headset.

In another use case Neuroscience students from the University of Florida have connected the Emotiv to operate drones.

Using an EEG reader to diagnose/ recognise patterns in behaviour for medical students.

In discussion with medical academics, one of the main difficulties students encounter is the ability to diagnose and interpret patterns correctly, especially in the field of neurology. As clinical EEG readings can involve millions of data points, it can be hard to add meaning and inference to the sheer scale of evidence. Often students move into hospital environments as qualified medics without having had the hands-on opportunity to interact with expensive MRI equipment, so they have to make do with simulated 2D patient examples.

Visualisation of brain activity using wearable Emotiv Insight EEG reader


While consumer wearables are not comparable to their clinical counterparts, they provide enough accuracy to give students a viable window into the use and visualisation of such complex data in 3D space. Being able to practise and reinforce their skills using realistic equipment can help them build confidence to take into their professional lives.

Track wellbeing and mindfulness and synchronously change environments

Understanding how you are feeling and monitoring emotional triggers could be hugely important in the medium to long term. Being able to tell students which learning style works best for them, and changing learning spaces to adapt to this new paradigm of personalised learning, could be integral to a more effective and relevant institution.

Imagine this scenario in an institution thirty years from now: a student enters a library study space; a wearable device recognises their stressed state due to a final essay deadline and begins to play classical music into their headphones to focus their mind; as they sit down, the flexible memory chair aligns into a position conducive to positive thought (decreasing the stress-inducing hormone cortisol), and the lights change colour and dim to create a less harsh study environment that encourages creative thought. Objects and furniture become shape-shifting entities based on the individual’s preferred interface, responsive to tactile interactions. Incredibly, this technology is already starting to emerge from research labs such as Stanford University’s Mechanical Engineering Design Group, where the desk environment morphs depending on the device and materials students are using.


To support special needs students with communication and accessibility requirements.

Wearables offer a tremendous opportunity to increase support for students who struggle to communicate effectively and have specific user requirements. :prose is an app that helps nonverbal people communicate by tapping or swiping on a mobile device. It works in a similar way to sign language, attributing specific meanings or phrases to user gestures on a touchscreen. However, for some users affected by conditions like Parkinson’s or ALS, which inhibit a person’s motor skills, the movements required to use the app can be problematic.

To overcome this, using the Emotiv Insight, the user thinks of the physical action assigned to a phrase (e.g. swiping up could mean “I want”) and the words are spoken aloud. Students can also build up a personal silo of custom phrases that work better for their memory retention. This has proved very effective, achieving results that would take years with other mental-training solutions. In an education context this could help build confidence, engagement and independence, convey emotions, and support learning timelines in a more fluid manner with little intervention.

In the words of Gil Trevino, Lead Direct Support Professional at PathPoint, “This advancement has allowed someone who once was a non-verbal communicator, the ability to communicate thoughts, feelings and answers in a way she never has before.”


The power of technology and innovation to support education in today’s connected world cannot be overstated, and if used in the right way it can be truly transformational. But its true success will be judged by how transparently it delivers value as a silent partner, facilitating radical change while allowing pedagogy and traditional frameworks to remain central.

Having said this, we should not underestimate the awe-inspired reaction of students using technology that provides glimpses into the future, with the potential to give them a renewed appetite for learning. Arthur C. Clarke observed that “any sufficiently advanced technology is indistinguishable from magic”, and ultimately isn’t this the aim of all education: to be inspired and imbued with that wow factor, to think critically for ourselves, and to instil a passionate desire to become lifelong learners?


IoT Smart London 2016: a reflection and applications in education

Following the Wearable Show that I attended last month in London, it was recommended that I attend the Internet of Things Smart London show. This offered a comprehensive range of forums around IoT and smart cities: future technologies in IoT, business and strategy, smart connectivity and location services, smart applications, big data and analytics, and security of things. It was an invaluable opportunity to be inspired about the future of technologies and connected things.

I left the conference with new knowledge and fresh perspectives on how the latest technologies could be applied to real-life situations, many of which I thought would also be applicable in education.

In my previous blog post, I wrote about how wearable technologies are bringing about a huge shift in our behaviours, bonding us ever more closely with our devices and machines. Today we carry mobile phones and iPads everywhere, and increasingly we are adding connected watches and glasses.

So what is the internet of things?

As its name suggests, it is a limitless set of physical objects connected over a network or wi-fi, able to send and receive data through embedded sensors and computing power.
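The send-and-receive part is simpler than it sounds: at minimum, a sensor node samples a reading and packages it as a message a broker or server can consume. A minimal sketch; the device ID, field names and JSON payload shape are invented for illustration:

```python
# A connected "thing" reduced to its essentials: sample, package, send.
import json
import time

def sensor_message(device_id, reading_c):
    """Package one temperature reading as a JSON payload a server,
    broker or analytics pipeline could consume."""
    return json.dumps({
        "device": device_id,
        "temperature_c": reading_c,
        "timestamp": int(time.time()),
    })

msg = sensor_message("room-12-thermostat", 21.5)
print(json.loads(msg)["temperature_c"])  # 21.5
```

Everything else in the IoT stack, from brokers to analytics, is built on streams of small messages like this one.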

Since 2008, the number of sensors connected to the internet has exceeded the number of people on Earth. Throughout the show I realised that the internet of things has now reached a very exciting stage, where it can save lives, detect fraud and make customers more satisfied. Here are some examples of what I see as pioneering applications of IoT that could help, or are already helping, to make a difference to people’s quality of life, particularly in education and healthcare.

Assistive IoT technology for visually impaired people: 

I was very impressed by the tools developed by Microsoft to empower visually impaired people to overcome their mobility challenges and difficulties.

Visually impaired people can use the Seeing AI (artificial intelligence) app on their smartphones, which works with Pivothead glasses as they explore the world around them. The glasses have a camera that turns visual information into audio cues. In this video, you can see how a visually impaired person is now able to read the expressions of people around him, and read what is on a restaurant’s menu, using the Seeing AI app.
[youtube https://www.youtube.com/watch?v=R2mC-NUAmMk]
After this session by Microsoft, I realised even more that the opportunities of IoT could be enormous in bringing together people, processes, data and objects to make networked connections more valuable than ever before. It is incredible that we are now able to use artificial intelligence to interpret our emotions and pass them on to those who cannot see them.

Microsoft is using Cognitive Services, a collection of APIs that “allows the system to see, hear, speak, understand and interpret our needs using natural methods of communication”. Visually impaired students would surely be more motivated to engage in a collaborative discussion or activity if they could read their colleagues’ facial expressions and feedback throughout the discussion; such smart AI apps would enable them to overcome many communication difficulties and make them feel equal in the conversation.

Wearables, devices and apps are good, but not enough!

Machine learning and big data are the giant leap forward in the IoT world. We have all seen that real things can become smart when sensors and actuators are connected to networks, sending and receiving data that can be monitored remotely. But in my view the real promise is that IoT applications can facilitate more personalised learning, gleaning deeper insight into us as individuals and using this knowledge to meet our unique demands and needs.

In all of the talks I attended, machine learning and artificial intelligence were central. As the number and variety of IoT devices on the market keeps growing, the IoT ecosystem becomes ever more complex, and I believe AI and machine learning will be essential to achieving mainstream ubiquity through increased efficiency and productivity at enormous scale.

John Bates, the CEO of PLAT.ONE, used the example of Uber in his talk Thingalytics and Thingonomics: Disruptive IoT Business as a model that could be applied to other services to achieve a real-time, dynamic and cost-effective market; this is what John calls “Uberization”. Imagine smart parking with smart traffic-monitoring gates, where the charge changes based on demand and traffic flow in the city!
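The smart-parking idea above is, at its core, a demand-responsive pricing function: the gate reads current occupancy and scales the charge accordingly. A toy sketch of that logic; the base rate, linear demand response and surge cap are all invented example figures:

```python
# Demand-based parking pricing: the charge rises with occupancy,
# up to a cap. Numbers are illustrative, not from any real scheme.

def parking_rate(base_rate, occupancy, surge_cap=3.0):
    """Hourly rate scaled by occupancy (0.0 = empty, 1.0 = full),
    capped at surge_cap times the base rate."""
    multiplier = 1.0 + 2.0 * occupancy   # linear demand response
    return base_rate * min(multiplier, surge_cap)

print(parking_rate(2.0, occupancy=0.1))   # quiet: 2.4
print(parking_rate(2.0, occupancy=0.95))  # nearly full: 5.8
```

The same shape of function, sensor input in, adjusted price or priority out, underlies most of the "Uberized" services described in the talk.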

The healthcare system could use the same model to drastically improve its myriad systems and save lives. Hospitals could provide a level of care previously unimagined while reducing healthcare costs by “continuously analyzing locations, vital signs, drugs administered, room sensors and many other nodes and personalizing them to the medical situation”, as John said.

Applications of IoT in healthcare could be truly revolutionary: they could allow older people to stay in their homes and avoid unnecessary trips to hospital. Imagine a subscription-based, IoT-enabled monitoring service that could give insights into elders’ daily activities at home. As sensor costs continue to fall and the technologies become more and more viable, monitoring devices will become an ever more integral part of the patient’s daily life.

If data about our bodies and how they work could be streamed and analyzed continuously, doctors could be alerted when we are likely to suffer a massive heart attack based on our heart rate, blood pressure and temperature. Surely this would lead to a better quality of life?
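At its simplest, the kind of alerting described above is a set of rules applied to each incoming reading. The sketch below is a toy illustration only; the thresholds are simplified assumptions for the example, not clinical guidance:

```python
def vital_sign_alert(heart_rate: int, systolic_bp: int, temperature: float) -> list:
    """Return the names of any vital signs outside illustrative 'normal'
    ranges. Thresholds are invented for this example, not medical advice."""
    alerts = []
    if not 50 <= heart_rate <= 110:
        alerts.append("heart rate")
    if systolic_bp >= 180:
        alerts.append("blood pressure")
    if temperature >= 38.0:
        alerts.append("temperature")
    return alerts

print(vital_sign_alert(72, 120, 36.8))    # []  -> nothing to flag
print(vital_sign_alert(130, 190, 38.5))   # all three readings trigger alerts
```

A real system would of course use trend analysis over the stream rather than single-reading thresholds, but the principle of continuous rule evaluation is the same.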

Big Data and Streaming Analytics:

With the increasing use of sensors and the massive amount of data generated every second about everyone's life and activity, Mike Gualtieri considered machine learning to be the brain of any IoT system. What he also sees as very powerful for achieving personalisation is big data and streaming analytics, which have become mature enough to be used in any enterprise.


One of the projects Jisc is pioneering is learning analytics, which already consumes big data sets and streaming statistical inputs from students and institutions. If we had an infrastructure that enabled real-time analytics, there would be many big data analytics opportunities to support each student's individualised, personalised learning. Connected classrooms could be the future with smart IoT, where students and teachers can communicate across countries and access materials tailored to their needs that might be used in other schools or even districts.
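As a hedged sketch of the streaming side of this, imagine keeping a rolling window of each student's weekly activity and flagging anyone whose engagement drops. The metric (weekly logins), window size and threshold are all assumptions made up for the illustration, not part of Jisc's actual learning analytics service:

```python
from collections import defaultdict, deque

class EngagementMonitor:
    """Toy streaming analytics: keep a rolling window of weekly activity
    counts per student and flag anyone whose average falls below a
    threshold. Metric and threshold are illustrative assumptions."""

    def __init__(self, window: int = 4, threshold: float = 5.0):
        self.window = window
        self.threshold = threshold
        self.history = defaultdict(lambda: deque(maxlen=window))

    def record(self, student_id: str, weekly_logins: int) -> None:
        self.history[student_id].append(weekly_logins)

    def at_risk(self) -> list:
        return [s for s, h in self.history.items()
                if len(h) == self.window and sum(h) / len(h) < self.threshold]

m = EngagementMonitor()
for week in [8, 6, 2, 1]:   # hypothetical student whose activity tails off
    m.record("alice", week)
for week in [7, 8, 6, 9]:   # consistently active student
    m.record("bob", week)
print(m.at_risk())  # ['alice'] - her rolling average has slipped below 5
```

The rolling window means each new data point can trigger an intervention promptly, which is exactly the "perishable insight" idea discussed below.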

Mike suggested that "enterprises must act on a range of perishable insights to get value from IoT data". Applying that idea, we can envision how educational institutions could gain timely insights and make interventions to overcome the challenges they face with the student experience.

IoT and gamification:

Kolibree's smart toothbrush is a good example. Its multi-faceted software uses a game to teach children how to brush their teeth, and it has transformed the way they do it. This is a truly personalised learning experience achieved at scale, made possible only by AI integrated with a gamified tool to change people's behaviour.

The toothbrush has 3D motion sensors that track brushing movements, which parents can check to see how well their children have brushed.

The more data the system has on children's brushing habits, the more reliable and accurate the software becomes. The game's reward system encourages children to improve their habits based on gamification principles.
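A reward system like this typically combines a few measurable behaviours into a score. The sketch below is my own invented scoring scheme for illustration; it is not Kolibree's actual algorithm:

```python
def brushing_reward(seconds_brushed: int, zones_covered: int, streak_days: int) -> int:
    """Illustrative gamification scoring: reward duration (capped at the
    commonly recommended 120 seconds), coverage of four mouth zones, and
    a daily-streak bonus. All weights are invented for this example."""
    duration_score = min(seconds_brushed, 120) / 120 * 50   # up to 50 points
    coverage_score = zones_covered / 4 * 40                 # up to 40 points
    streak_bonus = min(streak_days, 10)                     # up to 10 points
    return round(duration_score + coverage_score + streak_bonus)

print(brushing_reward(120, 4, 10))  # a perfect session on a long streak scores 100
print(brushing_reward(60, 2, 3))    # a rushed, partial session scores far less
```

Capping each component keeps the incentive on consistent good habits rather than on gaming any single metric.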

What is powerful about this is that the brushing data is made available via an open API to third-party game designers, who can develop new apps and add more fun components to further enhance brushing time.

In schools, the great potential of gamified IoT lies in maximising engagement while enabling real-time interaction and feedback that help shape users' behaviour and deliver emotional rewards that encourage ongoing engagement.

Let's say we could use neurosensors in schools during particular activities to provide insight into students' cognitive activity, using EEG technology that measures brain activity. This would allow teachers to dedicate more attention to the students who need it, not just to those who ask for support.
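The teacher-facing end of such a system could be as simple as ranking students by a derived attention index. Everything in this sketch is hypothetical: the student names, the scores, and the assumption that a headset vendor exposes a single 0-to-1 attention value per student:

```python
def students_needing_support(attention_scores: dict, threshold: float = 0.4) -> list:
    """Given per-student attention indices (0-1) from a hypothetical EEG
    headset feed, return (sorted) those below a threshold so a teacher
    can prioritise them. Names and values here are made up."""
    return sorted(s for s, score in attention_scores.items() if score < threshold)

scores = {"student_a": 0.72, "student_b": 0.35, "student_c": 0.28}
print(students_needing_support(scores))  # ['student_b', 'student_c']
```

In practice any such signal would be noisy and would need smoothing over time, but the point stands: the data directs the teacher's attention rather than replacing it.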


The Foc.us headband, aimed at gamers, uses a trace amount of electrical current to stimulate the prefrontal cortex, producing a positive short-term effect on playing ability (Foc.us Labs).

Furthermore, with AI, gamification elements could be personalised depending on how the student is motivated, enhancing deep understanding of difficult concepts while making it fun to learn about them at the same time.

Considerations to bear in mind: IoT security, privacy and data protection

Discussions about security and privacy issues were prevalent at the show, and necessarily so, I suppose. As more devices become connected to objects in our houses, bodies and workplaces, IoT security becomes more complex.

I totally agree that when adopting IoT in areas such as healthcare and education, there are many issues to do with privacy, data protection and information security that, if not considered properly, could raise questions about the confidentiality, integrity and availability of information.

To some extent, however, this is simply fear of new technology, which is nothing new. We were all once frightened to submit our credit card information online; now it is something we barely think about.

I think it is important to understand that it is not the technology itself that increases risks to privacy, security and data protection, but rather the way it is used and applied.

To deal with these challenges, I believe in building trust in our digital world, which could make us more confident when adopting and using new technologies.

  • Information about data protection principles and privacy policies needs to be stated clearly and translated for the end user. For example, the purpose of processing personal data, and how it is processed, needs to be identified for the owner of the data.
  • Designers of IoT can do a lot if they keep security issues in mind early in the development and design stage of any IoT project.
  • To achieve that, we also need to empower people by involving them in decisions about who owns the data and has control over it, how the data is shared and with which parties, what elements to connect, and how to interact with other digital users and technology providers. In my view, this is what could build the transparency and confidence needed to ensure trust and safety. As was suggested at the event, "eventually nobody is going to own personal data – there's just going to be permissions and relevant questions".

A Reflection on Wearable Technology Show 2016

The Wearable Technology Show 2016 (WTS2016) in London's ExCel featured connected technology alongside the Augmented Reality Show and the IoT Connect show under one roof.

Being able to attend for only one day, I was keen to make the most of it, exploring cutting-edge technologies across a multitude of fields and innovative start-up showcases from around the world, as well as a few keynote talks. To make the experience more productive, I thought it would be useful to reflect on this one-day explorative journey and offer some insight into how education can benefit from these cutting-edge technologies in the future.

The wearable show was segmented into fitness and sport performance, medical and healthcare, smart home, enterprise, fashion, and augmented reality. The wearables market has always promised to be massive, yet only in the last couple of years, as smartwatches, fitness trackers and smart glasses have gained adoption in the consumer market, has it started to grow significantly.

What was new at the show, and what is the potential of wearables in education?

  1. Monitoring students’ physical activities to enhance performance at schools and universities.

It appears to me that there is a clear move in the status of wearables from trialled and tested projects to real applications of the technology in industry. Interestingly but not surprisingly, health, fitness and sport wearables were dominant in the exhibition, and smart fabric products in particular attracted a lot of attention and interest. These providers showcased how wearers of smart clothing can gain insight into their physical training, sleep and personal daily activities, and listen to their bodies. As these products have been developed to meet the needs of modern life, the technology embedded within the fabric becomes all the more relevant when we think about our students and their demands.


For example, the ECG and HRV sensors embedded in the Hexoskin biometric smart shirt to collect and monitor data about physical activity could also be used to tell students when they are most fit to take an exam, or help them calm down to prepare for one. Taking this to the next level by using wearables for gesture recognition, they could also be used to address different aspects of disorders like attention deficit disorder by providing valuable feedback options.


If this collected data were also integrated with data from the learning platforms institutions use, there would be many opportunities to support students in achieving their best performance and to empower them to objectify their own emotional states in order to improve concentration, for example.

2. Wearables in training and vocational courses at colleges.

One of the keynotes I really enjoyed discussed the potential applications of wearables in the oil and gas sector. The session was very inspiring, talking about the potential value of using wearable technology to make the oil and gas industry safer and more efficient, and it immediately brought to mind applications in any engineering field or vocational training course at colleges. The opportunities wearables could provide in conjunction with virtual and augmented reality glasses could be revolutionary. In training, for example, workers and engineers who are exposed to many potential hazards in their everyday work could find a range of wearable devices very useful for providing information on work activities in many different forms. This would allow more accurate, real-time feedback on hazardous and challenging situations, making real-time decisions and immediate responses more possible. In colleges, virtual reality experiences that immerse students in dangerous situations could be built into training courses, allowing students to perform and react to problems in a safe environment.



What is next?

Despite the challenges for wearables, from safety to power consumption, the technology will at some point be integrated into every facet of our lives. However, in my view, more focus on the user's experience is still needed. We need to design ever more personalised experiences. Ease of use is an important factor, but in reality what matters more is creating contextualised and personalised experiences for users. For example, smartwatches and fitness bands promised to enhance people's productivity and effectiveness; if they only collect more and more data and accumulate information, they are still not mature enough to meet the desired needs of users. We all need tools that can make predictive suggestions tailored to our individual needs.

Through this brief observation of the show, I learned that we need to look beyond the tasks the target audiences want to achieve, and to do so we need first to determine what the user wants and why.

What I believe could also be useful at this stage is some sort of partnership between researchers and designers or providers to examine the efficiency of these devices in different industry applications. Evidence of success or failure is always useful for understanding barriers and users' privacy concerns, and so would help achieve sustained engagement.

In the future, every student will be a walking data centre, and going to university will be a real-time information event. If wearable devices become mature enough to provide personalised feedback to students, learning analytics will become more and more effective.

In the end, it is not only about the technology.

Finally, I was amazed to see a few school students at the show chatting with delegates, enthusiastic about trying new technologies. Curious to know why they were there, I chatted with a few of them. What I realised was that there need to be plenty of opportunities for students to have creative spaces where they can engage with new technologies and create things they can bring back to class. Not only does this help make STEM subjects, for instance, more interesting, but it also brings teachers and students together in an informative space where their imaginations can come into play.

I recently followed CES 2016 online – the world's biggest technology show, taking place in Las Vegas – to see what technologies and predictions from the big tech brands will unfold in the next year.

Wearable technology and smart homes were the most dominant themes in the show, but among all the exciting tech and gadgets, the one that still ignites my curiosity and excitement is Virtual Reality (VR).

VR technology seems to be gaining momentum everywhere these days. Especially in the last year, with Facebook buying Oculus and Sony unveiling Project Morpheus, expectations are high that 2016 will be the year of Virtual Reality!

VR gadgets like the Samsung Gear VR, Oculus Rift, HTC Vive and PlayStation VR all had crowded rooms at the CES 2016 show, with some of them announcing their release into the mainstream.

At this point I think the technology has matured enough to be released to the market and have a massive potential impact! VR does not only create incredible 3D environments; it can also stimulate your mind, bringing you into the experience as an active participant rather than a casual observer. Being immersed in the virtual world, with audio narration instructing you to perform physical gestures such as grabbing, selecting and moving – made possible with gesture controllers – gives a greater sense of control over the virtual experience. A good example is the Sixense STEM System, offering full-body presence and control in Virtual Reality through motion controls, haptic feedback and additional spatial awareness.


Virtual Reality Lightsaber Demo with Sixense STEM!


So is VR going to be a game changer, and will there be any difference in the use of VR after it is made available to the public?

I was captivated by one of the most impressive examples of VR at CES 2016: NASA giving visitors a virtual "ride on board an Orion spacecraft and a you-are-there perspective of the Kennedy Space Centre". Using headsets like the Oculus Rift and Microsoft HoloLens, people at the show were fortunate to experience NASA's latest rocket – an experience people described as "blasting". NASA has in fact already started using VR to train astronauts for one of the most dangerous journeys of their lives: it allows astronauts to experience what it feels like to survive the problems they might face at work. The Martian VR Experience can transport users to Mars, a place they have never been, and lets them experience what Watney had to go through to survive alone on Mars. The full Martian VR Experience will be released later in the year and is expected to be a very compelling and immersive experience: by exploiting the interactive features of the Rift, the Vive or the PSVR, it will engage users in different scenarios recreated from the film in VR. For me, this is a game changer. NASA pushes VR to its limits.

"In space, an astronaut's next minutes are never guaranteed. They have to adjust to the drastically modified rules of physics and to a calmness and a slowness that makes danger."

Here is the video demo of the Martian VR Experience:

For me “experience” is the key to the success of this example as there is no other way on Earth to replicate this experience other than VR.

How does this apply to education?

Let's take this example and think about how it can benefit our education and students, because it is not enough to release these fancy gadgets every year, talk about them, try them, say "yeah, they are cool", and end up putting them on a shelf to gather dust! What really makes this technology a game changer is the experience having a clear purpose and a real context.

Thinking about the example of NASA and astronaut training, this could also be applied in astronomy classes. For example, astronomy students can learn about the solar system and how it works by physically engaging with the objects within it. They can move planets, look around stars and track the progress of a comet. This also lets them see how abstract concepts work in a three-dimensional environment, which makes them easier to understand and retain.

I can see a lot of potential for VR in education. It can certainly bring value to education and training, but what we really need to do is start creating experiences – both virtual and augmented – that are authentic and long-lasting, aiming for more than just blowing students' minds. In addition, these experiences should be targeted at real problems.

VR experiences are no longer for one person at a time!

The possibility of thought-controlled motion is particularly exciting. VR is no longer a way to experience environments and situations alone: you can now invite people into the virtual experience you are in. The Toy Box demo developed by Oculus is a way of testing its Touch controllers and experimenting with the many ways people can interact virtually.


Mark Zuckerberg called the zero-gravity ping-pong "the craziest Oculus experience I've had recently". And, he said, "What's really amazing is sharing these experiences with your friends."

In the classroom, many activities are group work, such as gaming activities. What would make any VR experience unique and immersive for learners is immediacy of feedback and interaction; this can give them control over their learning experiences. Consider how anatomy learning, which is central to all areas of medical training and education, could make extensive use of VR simulators, especially with touch controllers.

Here is an example of heart anatomy developed by Leap Motion, made available free to users.


However, it is important to understand how to benefit from the immersive experiences these technologies enable, and how schools, universities and colleges can find value in these applications within their educational contexts.

Virtual reality has the capacity to support problem-based and inquiry-based learning, enabling performance-based assessment, particularly in STEM learning, in a safe learning environment. Encouraging educational institutions to create more room for STEM programmes that expose young people to a more active, inquiry-based approach has become essential to me as part of my work at Jisc. I believe that if we give students opportunities to master what they learn and really own it for themselves, we will be better placed to support all students, including those with accessibility needs, to a high standard. This is the challenge I am currently engaged in through my job as a developer in Jisc Future Technologies, working side by side with universities and colleges, including their staff and students, to make the most of teaching and learning.

Technology is all around us and is changing our lives day by day; however, if we do not embrace it to its fullest and create transformative opportunities, we will end up falling short of its grandiose aspirations.

For those of you who could not make it to CES 2016, here is a VR tour hosted by CNET of the CES show floor that you can watch in VR using Google Cardboard or Samsung Gear VR.


5 takeaways from SIGGRAPH 2015


The annual SIGGRAPH conference is a five-day interdisciplinary educational experience in the latest computer graphics and interactive techniques, including an exhibition, technical papers, industry talks and hands-on courses, that attracts hundreds of exhibitors from around the world and many thousands of delegates.

Having been recommended to attend by several 3D/VR/AR luminaries, I found it quickly surpassed my inflated expectations, leaving me at the end of the week truly inspired and invigorated with fresh perspectives. I thought it would be useful to put together my top five takeaways from the conference and how they could be applied in an educational setting.

  1. Augmented Reality used in Hollywood film making

The director of Jurassic World, Colin Trevorrow, along with camera operators, used an iPad app called Cineview, developed by ILM (Industrial Light and Magic), to frame shots on location. They used the iPad in combination with a 3D structure sensor (http://structure.io/) to measure camera depth where there was no visual reference, as most effects were added in post-production.

Tim Alexander, the Visual Effects Supervisor explains the process further.

“We would load our models into the program and stick them into the live image…The director and director of photography could look at a scene through the iPad camera and see where Indominus would be. They could see how tall she would be 20 feet away. They’d know if they would need to tilt the camera or move her back farther. Its a great tool for previs’ing.”

It is amazing to think that AR is being used this way in the top production studios, and the applications for education are endless: imagine a similar tool in disciplines as diverse as theatre direction, lighting, product design, architecture and construction. Not only does it help to quickly present a stunning visualisation, it also reduces expense by allowing the learner to pre-empt future problems easily in a digital snapshot without spending time creating complex physical models.

2. Virtual Reality WILL be massive

SIGGRAPH devoted a whole "Village" to demonstrations of bleeding-edge Virtual Reality, and I was lucky enough to have first-hand experience of a few of them. The first placed you in the position of a crash test dummy accelerating towards a barrier, simulating crash trauma and the differences between the safety features of a modern car and a similar vehicle from the 1980s. The immersion was breathtaking – at the moment of impact the display went into slow motion, with shards of glass flying towards you and your passenger, contorted as the crumple zone concertinaed and airbags deployed before your eyes. Even more frightening was the second experience, in the earlier car model without what we would now consider fundamental safety features. The VR resource was built for the road traffic agency in Australia to dispel the common myth among drivers that old, heavier cars were more protective in accidents.


VR Car crash dummy test

Another example was being used by Ford executives, engineers and technicians to experience new car models without having to create expensive clay models (commonly over £250K) – a costly process of layering thousands of pounds of clay over a foam core and spending months shaping every curve by hand. Using gesture-based controllers, it allows the user to peel back engine components, see detailed cross-sections, check joins in the shell and run quality assurance testing.


Ford Mustang Virtual Reality experience

Being part of these immersive demonstrations really brought home the potential of VR within education for simulated environments (media caves) and interactive worlds. The early laggy, underwhelming graphical representations of VR are slowly being replaced by photorealistic and truly mind-blowing creations. Imagine being able, as an archaeology student, to experience the awe and adventure of following in Howard Carter's footsteps uncovering Tutankhamun's tomb: hearing the wind blow through a crypt sealed for millennia and taking in the priceless artefacts and mysterious hieroglyphs.

3. Pixar is Coming

The core 3D rendering engine used by Pixar in their recent animated films is now available for FREE, integrated into Blender. It can be downloaded after registration at http://renderman.pixar.com/view/renderman and has some great features, including a realistic hair renderer, a denoiser, enhanced physical cameras simulating the imperfections of real-world cameras, and a visual integrator. This is a real step change from traditional renderer pricing models (often thousands of pounds per year): instead of purchasing a costly add-on, educational institutions and schools can now experiment with industry-standard software for free. This is especially useful for making students more employable when they graduate.

RenderMan Walking Teapot


4. Science and Maths are cool!

In the blockbuster Interstellar, Christopher Nolan wanted the main tenets of the film – interstellar travel, black holes, wormholes and other dimensions – to be grounded in some semblance of theoretical fact. Hiring the renowned astrophysicist Kip Thorne, Nolan based the film's rich space visualisations on the mathematical algorithms and physical laws present in the universe, keeping it as real as theoretically possible.

Now, if ever students were to be inspired to learn more about complex mathematical equations, presenting an authentic example like this would certainly hold their attention. Throughout the conference talks, academics talked through their scientific processes and algorithms, across subjects as diverse as multi-resolution geometric transfer – allowing animators to switch between high- and low-polygon dinosaur models – and the procedural animation technology behind the Microbots in Big Hero 6.

As a student of mathematics I always struggled with preconceived notions of it being dry, devoid of excitement, not really relatable to anything in my life. Put simply, I had no frame of reference. Hearing how it can move from the theoretical to the visual in an interesting and authentic way should be the template for more maths and science teaching to follow. It made a subject that was previously incomprehensible (to me, anyway) at least a little more digestible. We have all heard the criticism constantly thrown at the teaching of maths by disenfranchised students – "When will I ever use it in my life?" – and this is a perfect instance of being able to point them in a direction that might interest them; after all, there are a lot of visual learners out there. If you want to learn more about the science and maths behind the film, I would recommend Kip Thorne's book, The Science of Interstellar.

5. Gamification 

Three of the main game engines – Unreal Engine, Unity and CryEngine – were at the conference, and much of their focus was on developing games and experiences for the educational community.

One of my earlier posts this year discussed my frustration at Metaio's acquisition by Apple and the gap this left in the AR developer community. Using platform-independent, FREE game engines to output to a variety of devices can help overcome the fragmentation that currently pervades the AR landscape. Unity, for example, can export AR for a number of different app-based solutions, including DAQRI, Vuforia and Wikitude. Because the assets and workflow are not locked into proprietary AR experiences, they can be lightly edited to work within VR or act as standalone learning resources for web browsers and apps.

One of my current projects (AR-Sci) involves developing an AR experience around photosynthesis in nature for secondary school students. In parallel with this work, I am developing with Unity/Unreal to re-use the same assets and storyboards and port a similar resource to a VR environment.

In my view, one of the reasons early educational forays into simulated/virtual environments such as Second Life were not more successful was the vast chasm between the realism of these constructed worlds and console/PC games. Students were used to playing graphically polished games such as Return to Castle Wolfenstein (2001) and Unreal (1998), so education-based resources often fell short of their high expectations, leading to limited adoption. The ability now to develop in Unity or Unreal Engine 4, creating photorealistic and authentic experiences, has huge potential to stimulate students who were brought up on a diet of console, PC and mobile games.

Unreal Engine VR science resource screenshot

Despite being prone to exaggeration where technology and graphics are concerned, I would unequivocally state that SIGGRAPH 2015 was the best conference I've ever attended; it certainly lived up to its hype. It turned me into that excitable child on Christmas Eve, reading through the conference schedule every morning to see what gems were on show that day. I believe that being able to view CG development through the eyes of other industries can prove priceless in opening education up to new approaches it can truly benefit and learn from. Over the coming months I hope to put into practice what I have learned from SIGGRAPH, not only to inspire those working in education with examples, but to illustrate how this wow factor can enthuse and captivate students so that their thirst for learning is never satisfied.

Can 3D Printing Really Revolutionise Your Traditional Lecture?

Over the last few years, there has been a drastic movement in the market towards making things with technology, and as a result more and more industries have started adopting these tools to compete by producing unique, high-quality, end-to-end solutions. One technology that is evolving dramatically and has matured to a point where it is expected to be a game changer in various industries is 3D printing. We are all aware that visualisation in 3D is a big step ahead of traditional 2D approaches, but 3D printing does not only enable three-dimensional visualisation; it also adds a dimension of tangibility to our experience. See the latest news about the 3D printing industry.

So what is a 3D printer?

A 3D printer is a machine that can produce objects of any shape by depositing layers of material until the final three-dimensional object is formed. Although initially they printed objects in plastic filament, it is now possible to use metal, ceramics, food, human tissue and, very recently, cellulose – the main constituent of plants.


A photo of the printing head of a FELIX 3D Printer in action by Jonathan Juursema

Looking at the latest developments and growth in this futuristic field, especially with the reduced cost of both machines and materials in the consumer market, many industries – dental, medical, architecture, marketing, jewellery and science education – are seeing its benefits. Have a look at the things being 3D printed so far in different industries.

The NMC (New Media Consortium) Horizon Report in 2014 stated that 3D printing is expected to change the way teachers teach and students learn. However, what 3D printing can bring to libraries, universities, schools and colleges has yet to be identified. Teachers and lecturers are experimenting with how "making" things adds to the teaching and learning process. Although these are individual cases, I think bringing more attention to the efforts of early adopters is very important at this stage, as the technology and its applications seem increasingly promising.

This article is a reflection on an attempt made by Dr. David Smith, a lecturer at Sheffield Hallam University, to "revolutionise" his traditional bioscience lecture, which usually relies on a PowerPoint presentation with some multimedia such as images and/or videos.

What the lecturer did differently was to integrate a 3D printed model related to the core content of the lesson into a lecture with a very large number of biochemistry and biology students.

The question to be posed here is: can a 3D printer revolutionise teaching and learning?

The lecturer explained in a webinar that, in a class of 180 students, he was able to make a difference in engaging students and stimulating them to discuss core content, both in groups and with the lecturer. This enhanced their understanding and retention of complex concepts that were not easy to digest in a traditional lecture.


A 3D printed model of DNA by the lecturer.

So how was the lecture delivered differently this time to achieve that?

A 3D printed model of DNA was introduced into the lecture through an activity in which students could handle and explore the object, and discuss how its shape leads to particular functions.
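To make the idea of “generating” such a model more concrete, here is a minimal sketch (my own illustration, not the lecturer’s actual workflow) of how a simplified DNA-style double-helix ribbon could be generated as an ASCII STL file, the mesh format that 3D printing slicers commonly accept. Note that a flat ribbon like this is not a watertight solid, so a real print would need thickness added in a tool such as Blender or Meshmixer first.

```python
import math

def helix_ribbon_stl(turns=2.0, steps=100, radius=10.0, pitch=20.0):
    """Return an ASCII STL string for a twisted ribbon joining the two
    backbone strands of a simplified DNA-style double helix."""
    def point(t, phase):
        # A point on one backbone strand at parameter t in [0, 1]
        angle = 2 * math.pi * turns * t + phase
        return (radius * math.cos(angle),
                radius * math.sin(angle),
                pitch * turns * t)

    def normal(a, b, c):
        # Cross product of the two triangle edges, normalised
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        n = [u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0]]
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        return [x / length for x in n]

    facets = []
    for i in range(steps):
        t0, t1 = i / steps, (i + 1) / steps
        a0, b0 = point(t0, 0.0), point(t0, math.pi)   # strand A / strand B
        a1, b1 = point(t1, 0.0), point(t1, math.pi)
        # Two triangles per quad of the ribbon
        for tri in ((a0, b0, a1), (b0, b1, a1)):
            facets.append((normal(*tri), tri))

    lines = ["solid dna_ribbon"]
    for n, tri in facets:
        lines.append(f"  facet normal {n[0]:.6f} {n[1]:.6f} {n[2]:.6f}")
        lines.append("    outer loop")
        for v in tri:
            lines.append(f"      vertex {v[0]:.6f} {v[1]:.6f} {v[2]:.6f}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid dna_ribbon")
    return "\n".join(lines)

if __name__ == "__main__":
    with open("dna_ribbon.stl", "w") as f:
        f.write(helix_ribbon_stl())
```

The point is less the geometry than the pipeline: a parametric description of shape becomes a mesh, the mesh becomes a file, and the file becomes a physical object students can pass around.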

There is no doubt that 3D printed output enables visualisation and conceptualisation in three dimensions in teaching and learning; however, being able to touch and manipulate the object opens up a new pathway of learning, enabling realistic perspectives on both simple and complex concepts. 3D printing can be a really powerful tool: think how it could be used to enhance spatial awareness in early education, for example, helping to encourage students to pursue STEM careers.

Also, think how influential a 3D model could be in enabling the visualisation of abstract ideas, or of objects and phenomena that cannot be seen by the naked eye. This would further benefit students’ learning.

I think integrating 3D printing into any educational setting can result in a pedagogical shift away from thinking about things towards working with them: tinkering, inventing, touching and sensing.

3D printing seems to be a very exciting field with so many possible applications in STEM, visual art and design, and other subject areas. From my point of view, its real power comes from having a purpose for creating a 3D output: a problem to solve through an activity which involves students and teachers in the process of designing, visualising and making.

This model of HIV was printed on an Objet Eden 3D printer


Some research indicates that 3D printing can transform what students imagine into a practical visual experience that enhances the way they approach problem solving. Having said that, this does not mean that we can merely saturate schools and institutions with 3D printers and expect students to innovate.

Technology in itself cannot be the main driver of any pedagogical shift. The lecturer put significant effort into designing what he calls an “active experimentation activity”, in which conversation is directed by some simple questions centred around the object’s features and functions. These questions worked as prompts to stimulate discussion, which was then followed by personalised feedback from the teacher. This is an essential aspect: the printer can make the classroom a more engaging and creative space if it is implemented with thoughtful, pre-planned learning activities facilitated by the teacher.


A 3D printed heart created by Melbourne scientists to aid in medical education

It is really exciting to involve students in creating a 3D printed model of the human heart in a biology class, for example, to learn about the human organs.

Student-centred teaching approaches can be more satisfying for students when they produce things rather than passively sit and observe. As 3D printers become more affordable, stories of them being used in schools and universities are increasing. I hope this article stimulates your creativity when thinking about the relevance of this technology to your subject matter and its potential in real-world projects. Please see this 3D Printing & Education Forum, which could be a very useful place to start discussing your ideas and thoughts about any future projects or implementations.

Thoughts on Apple acquiring Metaio

Apple and Metaio join forces



Since the announcement of Apple’s acquisition of Metaio (http://techcrunch.com/2015/05/28/apple-metaio/) last Friday, I have been beset with a number of thoughts: anger, dejection, pessimism and disbelief, to name a few. But the overarching emotion, without wanting to sound dramatic, having existed inside the Metaio ecosystem for the past few years, is hope. Hope that AR could eventually realise the potential opined in so many aspirational demos, such as the Sixth Sense TED talk back in 2009.

[ted id=481]

Metaio has for so long been ahead of the curve in terms of AR innovation and functionality: think CAD 3D tracking and, more recently, its research into thermal touch technology, turning every surface or object into a potential touch screen. This set it apart from the rest of the AR software community: companies that were often narrowly focused on the non-technical enterprise market, achieving great scalability but failing to push the mainstream adoption of AR into something beneficial to the user. Of course, like other companies, Metaio was also guilty of focusing on marketing and advertising use cases to create initial buzz around the technology, realising how it could monetise this new form factor into a lucrative business model.

Where it differed was that it was always looking to push the boundaries of AR applications, helping to bring benefits to processes that had previously existed without digital support (e.g. 3D schematics assisting on-site air-conditioning engineers). Nowhere is this more relevant than in education. Bringing students into contact with digital support in situated environments, such as the Leeds College of Music resource, is a USP that ensured students bought into the learning gains AR afforded them, without being led by shiny new technology.

So what impact, if any, will Apple’s foray into AR have on its mass adoption? Well, if nothing else, the fact that Apple has identified a unique opportunity in the technology (some are reporting integration with Apple Maps and biometrics) to provide innovative new features in its product line can only be positive in terms of bringing it into the headspace of the average user. Even though its first iteration may be largely anecdotal (see Apple Watch), making users aware of new possibilities in their lives (learning, lifestyle, workplace) could add traction to its ubiquity. The obvious disadvantage is that Apple’s walled-garden philosophy will mean that innovation outside these controlled environments could suffer. Over 150,000 developers, including myself, worked with the Metaio SDK, Junaio and Creator, and there is a dearth of suitable replacements in the market. I don’t know what the answer is in the short term (Wikitude are soon launching SLAM functionality in their SDK), apart from waiting for other vendors to catch up and being more creative in making content, rather than style, the focal point.

Having said that, I can see the standards movement being fuelled by renewed interest in the AR/VR area, catalysing the technology to appeal to both the general user and the enterprise. Lack of recognised standards is sometimes the price we pay for working in innovative, niche spaces with proprietary software, ultimately leading to fragmentation and limited adoption. Aligning AR with a framework may initially have to work at base-level functionality and compromise on unrealistic expectations (HoloLens), as evidenced in the 2014 interoperability demo with Metaio, Layar and Wikitude (http://www.wikitude.com/ogc-wikitude-layar-metaio-invite-mobile-world-congress-attendees-ar-interoperability-demo/), but this can only be positive in ensuring the longevity of the technology.

In May, Mark Zuckerberg heralded that VR/AR will facilitate the next technological paradigm shift in how we consume and deliver information. He is rarely wrong, especially given Facebook’s investment in Oculus Rift and Google’s development of Project Tango and Magic Leap.

So, in conclusion, while it is personally disappointing that Metaio has been subsumed by Apple, as an AR advocate of many years I feel that AR is at last emerging from the primordial soup, escaping the shackles of the trough of disillusionment, to evolve into a technology that can finally realise its potential and go beyond the trivial to revolutionise the user experience.