“Alexa, ask Jysk” – Hacking new Alexa skills for Amazon Echo

An estimated 30 million smart speakers have been sold in the United States alone, with devices like Google Home and Amazon Echo (“Alexa”) nestling in the corner of many living rooms, kitchens and yes – even bedrooms. Amazon are betting big that you’ll want one in every room, even selling the Echo Dot in a six pack and twelve pack.

The digital assistants in these smart speakers seem like something from a science fiction movie – you can ask Alexa everything from general knowledge questions to very specific things like getting a weather forecast or checking to see if there are likely to be any delays on your commute into work. But how do they really work, and how might you go about teaching a digital assistant like Alexa some new tricks?

Amazon Echo – photo CC BY-NC-ND Flickr user michaeljzealot

In this post I’ll look at how to create a new “skill” for Alexa, and hopefully demystify the technology a little bit.

At Jisc we operate dozens of services, for around 18 million people – from teachers and learners to researchers and administrators. Why log a call about a product or service when you could just ask Alexa? Maybe you could get your question answered there and then, from the information that we already have in our systems?

I thought it might be fun to work up a few practical examples, for instance:

  • I’m a network manager or IT director. What’s the status of the Janet network? Are there any connections down? Our Janet status page and Netsight service have this info
  • I’m a researcher writing a new paper. What’s this journal’s open access policy? Our SHERPA services let you conveniently find out about publisher and funder OA policies
  • I’m an administrator reviewing a grant application before submission. Do we really need to build a hyperbaric chamber, or is there another institution nearby that has one? Our equipment.data service lets you find kit that institutions are sharing with each other and industry

What would that look (and sound) like? Here’s a short video that my daughter and I made to demo our prototype Alexa skill for Jisc:

The Alexa digital assistant is the Amazon product that underlies all of the Echo products, and we are also starting to see Alexa appear in third party products like the recently announced Sonos One speakers. If you don’t like saying Alexa, or perhaps someone in your house has a similar name, you can call it computer instead – very Star Trek!

Amazon provide developers a set of tools for building new Alexa applications. I’ll give you a quick overview here, but as you can imagine there is quite a lot of detail for those who want to take a deep dive into all things Alexa and Echo related.

First off, you have to tell Amazon what your new Alexa skill will be called – it needs to have a distinctive name because there are already over 15,000 skills out there. It turned out that Jisc doesn’t work as our skill’s name, and I had to resort to a phonetic spelling of our company’s name instead – Jysk. Once you find the right word or words to invoke the skill, people will be able to say Alexa, ask Jisc – or rather, Jysk!

Defining the Jisc (“Jysk”) Alexa skill

Secondly, you need to tell Amazon what kinds of questions people will be asking of your skill. This is where the magic and mystery of Alexa starts to unravel a little. It turns out that you have to be pretty precise about the wording of the phrases that you want Alexa to respond to, although there is a little wiggle room. For instance, if we tell Alexa to respond to questions about janet network status, it will also recognise that status of the janet network is the same question worded slightly differently.

Sample utterances for equipment.data search

Thirdly, we need to tell Alexa how to find out the answer to the question. If it’s a simple question like janet network status, then this isn’t actually too hard either – we just need a place that we can go to for the requested info. And if it’s already available on the web somewhere, then we can even copy the info off the webpage without having to set up some kind of complicated database connection or Application Programming Interface (API).

Slot values for equipment.data search

If our question has parameters, things do get a bit trickier – and this is where the final bit of mystique evaporates. Alexa doesn’t and can’t know all of the possible journal names or pieces of equipment that we might want to ask it about. Instead, when the skill is created, we tell it what the possible parameters are for the question. Amazon call these “slots”. When we’re asking about journal names we might include Nature, Computer Networks and so on in our slots. When we’re asking about equipment, our slots might contain mass spectrometer, spectroscope, hyperbaric chamber, and so on.
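
To make intents, utterances and slots a little more concrete, here is a rough sketch of how they hang together, written as a Python dictionary rather than the JSON you would actually enter in the Alexa developer console. The intent names, slot names and slot values here are invented for illustration, not copied from the real Jisc skill.

```python
# Illustrative only: a rough picture of what an interaction model boils down to.
INTERACTION_MODEL = {
    "intents": [
        {
            "name": "JanetStatusIntent",                     # hypothetical intent name
            "samples": ["janet network status", "status of the janet network"],
            "slots": [],
        },
        {
            "name": "EquipmentSearchIntent",                 # hypothetical intent name
            "samples": [
                "find a {Equipment} near me",
                "is there a {Equipment} at another institution",
            ],
            "slots": [{"name": "Equipment", "type": "LIST_OF_EQUIPMENT"}],
        },
    ],
    # Slot values: the possible parameters Alexa should listen out for
    "types": {
        "LIST_OF_EQUIPMENT": ["mass spectrometer", "spectroscope", "hyperbaric chamber"],
    },
}
```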

So where does Alexa go to get the answers to these questions? The answer is that it makes a simple HTTP request to a web server somewhere. This can be any server, but Amazon are quite keen for you to use their new Lambda system, which lets you run code on demand without the overheads of running (securing, patching etc…) a regular server. Lambda is a whole story in itself, and for demo purposes I’ve simply pointed Alexa at an existing Jisc test server.

What does the code look like to process a request from Alexa? Pretty simple, actually. Here’s the actual code that I use to make the Jisc (Jysk!) Alexa skill work…

Alexa PHP sample code for the Jysk intent

Let’s spend a moment unpacking this – we’re using the Amazon Alexa PHP Library to process the incoming request. This makes an Alexa request object that contains the question and (if appropriate) the slot that Amazon think we were asking about. We can then decide what to do with the request. For the sample Jisc skill we fetch the Janet network status or journal policy information from a file that has already been populated separately, and for the equipment database lookup we go off to run an external program. Any external dependencies have to respond quickly, otherwise the user will be left waiting and wondering what is going on.
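
The real skill is PHP, but the flow is the same in any language: look at which intent arrived, pull out any slot value, fetch the answer, and wrap it in the JSON envelope Alexa expects back. Here is a minimal Python sketch of that dispatch logic written as an AWS Lambda style handler; the intent and slot names match the hypothetical interaction model above and the replies are placeholders, not the skill’s actual code.

```python
def build_response(text):
    """Wrap plain text in the JSON envelope Alexa speaks back to the user."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }


def lambda_handler(event, context):
    """Dispatch an incoming Alexa request to the right lookup."""
    request = event["request"]
    if request["type"] != "IntentRequest":
        return build_response("Welcome to Jisc. Ask me about the Janet network "
                              "or our equipment database.")

    intent = request["intent"]
    if intent["name"] == "JanetStatusIntent":
        # The demo reads this from a file populated separately
        # (e.g. scraped from the Janet status page); placeholder here.
        return build_response("All Janet backbone connections are up.")
    if intent["name"] == "EquipmentSearchIntent":
        equipment = intent["slots"]["Equipment"]["value"]
        # A real skill would query equipment.data here; keep it fast,
        # or the user will be left waiting for Alexa to answer.
        return build_response(f"Searching equipment dot data for {equipment}.")
    return build_response("Sorry, I didn't understand that.")
```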

It’s important to note here that you don’t have to release your work-in-progress Alexa skill to the world until you are ready – in the Alexa console you can specify that the skill only works on Alexa devices linked to your own Amazon account, which is probably best for testing. You can also simulate interaction with an end user to test your back end code independently of Alexa’s speech recognition.

Sometimes it’s easier to see things rather than read about them, so I’ve made a short video that walks you through the Alexa developer console and shows you how this all fits together:

So now you know how to make your own Alexa skills. What will you make? Why not leave a comment and let me know!

 

AI can be DIY – a Raspberry Pi powered “seeing eye”

Scarcely a day goes by right now without a breathless newspaper headline about how artificial intelligence (AI) is going to turn us all into superhumans, if it doesn’t end up replacing us first. But what do we really mean by AI, and what could we do with it? In this post I’ll take a look at the state of the art, and how you could build your own Do-It-Yourself “seeing eye” AI using a cheap Raspberry Pi computer and some free software from Google called TensorFlow.

If you want to have a go at doing this yourself, I’m following a brilliant step by step guide produced by Libby Miller from BBC R&D. I should also note that this is possible because of Sam Abrahams, who got TensorFlow working on the Raspberry Pi, and also literally wrote the book on TensorFlow.

Raspberry Pi logo screen printed onto motherboard

AI right now is mainly focussed on pattern recognition – in still and moving images, but also in sounds (recognising words and sentences) and text (this text is written in English). A good example would be the Raspberry Pi logo in the picture above. Even though it’s a little blurred and we can’t see the whole thing, most people would recognise that the picture included some kind of berry. People who were familiar with the diminutive low cost computer would be able to identify that the picture included the Raspberry Pi logo almost instantly – and the circuit board background might help to jog their memory.

While we talk about “intelligence”, the truth is that this pattern recognition is pretty dumb. The intelligence, if there is any, is supplied by a human being adding some rules that tell the computer what patterns to look out for and what to do when it matches a pattern. So let’s try a little experiment – we’ll attach a camera and a speaker to our Raspberry Pi, teach it to recognise the objects that the camera sees, and have it tell us what it’s looking at. This is a very slow and clunky low tech version of the OrCam, a new invention helping blind and partially sighted people to live independently.

Our Raspberry Pi powered “seeing eye” AI

The Raspberry Pi uses very little electricity, so you can actually run it off a battery, although it’s not as portable or as sleek as the OrCam! And rather than a speaker, you could simply plug a pair of headphones in – but the speaker makes my demo video work better. I used a Raspberry Pi model 3 (£25) and an official Raspberry Pi camera (£29). If you’re wondering what the wires are for, this is my cheap and cheerful substitute for a shutter release button for the camera, using the Raspberry Pi’s General Purpose Input Output (GPIO) connector. GPIO makes it easy to connect all kinds of hardware and expansion boards to your Raspberry Pi.
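
For a sense of how the pieces fit together, here is a rough Python sketch of the loop: wait for the button, capture a photo, classify it, and speak the answer. The GPIO pin number and file paths are arbitrary, and the classification step simply shells out to the classify_image.py ImageNet example script that Libby Miller’s guide builds on – this is an illustration of the idea, not a copy of her code.

```python
import subprocess
import RPi.GPIO as GPIO
from picamera import PiCamera

BUTTON_PIN = 17  # whichever GPIO pin the push button is wired to (assumption)

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
camera = PiCamera()

try:
    while True:
        GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)   # block until the button is pressed
        camera.capture("/tmp/snapshot.jpg")            # take a photo
        # classify_image.py prints its best ImageNet guesses to stdout
        result = subprocess.check_output(
            ["python3", "classify_image.py", "--image_file", "/tmp/snapshot.jpg"]
        ).decode()
        best_guess = result.splitlines()[0] if result else "something I don't recognise"
        subprocess.call(["espeak", f"I think I can see {best_guess}"])  # speak it aloud
finally:
    GPIO.cleanup()
```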

So that’s the hardware – what about the software? That’s the real brains of our AI…

Google’s TensorFlow is an open source machine learning system. Machine learning is the technology that underpins most modern AI systems, and it’s responsible for the pattern recognition I was talking about just now. Google took the bold step of not just making TensorFlow freely available, but also giving everyone access to the source code of the software by making it “open source”. This means that developers all over the world can (and do) enhance it and share their changes.

The catch with machine learning is that you need to feed your AI lots of example data before it’s able to successfully carry out that pattern recognition I was talking about. Imagine that you are working with a self-driving car – before it can be reasonably sure what a cat running out in front of the car looks like, the AI will need training. You would typically do this by showing it lots of pictures of cats running out in front of cars, maybe gathered during your human driver assisted test runs. You’d also show it lots of pictures of other things that it might encounter which aren’t cats, and tell it which pictures are the ones with cats in. Under the hood, TensorFlow builds a “neural network” which is a crude simulation of the way that our own brains work.
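
To make “training” a little less abstract, here is a deliberately tiny TensorFlow/Keras sketch of the idea: show the network labelled example images (cat in frame or not) and let it adjust its own weights. The arrays here are random placeholders standing in for real labelled photos, and a real system would use a far bigger network and far more data.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 64x64 RGB images plus a label per image (1 = cat in frame, 0 = not).
# In reality these would be many thousands of labelled photos from test drives.
train_images = np.random.rand(100, 64, 64, 3).astype("float32")
train_labels = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability that there is a cat
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_images, train_labels, epochs=3)  # the "learning" step
```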

So let’s give our Raspberry Pi and TensorFlow powered AI a spin – watch my video below:

Now for a confession – I didn’t actually sit down for hours teaching it myself. Instead I used a ready-made TensorFlow model that Google have trained using ImageNet, a free database of over 14 million images. It would take a long time to build this model on the Raspberry Pi itself, because it isn’t very powerful. If you wanted to create a complex model of your own and don’t have access to a supercomputer, you can rent computers from the likes of Google, Microsoft and Amazon to do the work for you.
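
If you are curious what “using a ready-made ImageNet model” looks like in code, something along these lines would do it on a desktop machine with the Keras applications API; the image file name is just an example, and the printed labels are illustrative.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet import MobileNet, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = MobileNet(weights="imagenet")          # downloads a model pre-trained on ImageNet

img = image.load_img("snapshot.jpg", target_size=(224, 224))
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

predictions = model.predict(batch)
for _, label, score in decode_predictions(predictions, top=3)[0]:
    print(f"{label}: {score:.2f}")             # e.g. "raspberry: 0.87" (illustrative)
```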

So now you’ve seen my “seeing eye” AI, what would you use TensorFlow for? Why not leave a comment and let me know…

 

Apple ARKit – mainstream Augmented Reality

Ever wandered through a portal into another dimension, or wondered what it would look like if you could get inside a CAD model or an anatomy simulation? This is the promise of Apple’s new ARKit technology for Augmented Reality, part of iOS11, the latest version of the operating system software that drives hundreds of millions of iPads and iPhones.

Turning the IET into a Mario level using Apple ARKit

Augmented Reality has been around for years, but in quite a limited way – point your phone/tablet camera at a picture that has special markers on it, and the AR app will typically do something like activate a video or show you a 3D model.

But anyone wanting to develop an AR app in the past has had to contend with a couple of big problems – firstly the hardware in phones and tablets hasn’t quite been up to the job of real time image processing and position tracking, and secondly there hasn’t been a standard way of adding AR capability to an app.

With recent improvements in processor technology and more powerful graphics and AI co-processors being shipped on our devices, the technology is now at a level where real time position tracking is feasible. Apple are rumoured to be including a sensor similar to Google’s Project Tango device on the upcoming iPhone 8, which will support real time depth sensing and occlusion. This means that your device will be able to tell where objects in the virtual world are in relation to objects in the real world – e.g. is there a person standing in front of a virtual object?

Apple and Google are also addressing the standardisation issue by adding AR capabilities to their standard development frameworks – through ARKit on Apple devices and the upcoming ARCore on Android devices. Apple have something of a lead here, having given developers access to ARKit as part of a preview of iOS11. This means that there are literally hundreds of developers who already know how to create ARKit apps. We can expect that there will be lots of exciting new AR apps appearing in the App Store shortly after iOS11 formally launches – most likely as part of the iPhone 8 launch announcement. If you’re a developer, you can find lots of demo / prototype ARKit apps on GitHub. [[ edit: this was written before the iPhone 8 / X launch! ]]

As part of the Jisc Digi Lab at this year’s Times Higher Education World Academic Summit I made a video that shows a couple of the demo apps that people have made, and gives you a little bit of an idea of how it will be used:

How can we see people using ARKit in research and education? Well, just imagine holding your phone up to find that the equipment around you in the STEM lab is all tagged with names, documentation, “reserve me” buttons and the like – maybe with a graphical status indicating whether you have had the health and safety induction to use the kit. Or imagine a prospective student visit where the would-be students can hold their phones up to see what happens in each building, and giant arrows appear directing them to the next activity, induction session, students’ union social and so on.

It’s easy to picture AR becoming widely used in navigation apps like Apple Maps and Google Maps – and for the technology to leap from screens we hold up in front of us to screens that we wear (glasses!). Here’s a video from Keiichi Matsuda that imagines just what the future might look like when Augmented Reality glasses have become the norm:

How will you use ARKit in research and education? Perhaps you already have plans? Leave a comment below to share your ideas.

Unboxing the Mycroft AI open source digital assistant

Mycroft open source AI

Mycroft AI is the product of a Kickstarter campaign from Joshua Montgomery, who back in 2015 conceived of a voice activated digital assistant (like Apple’s Siri or Amazon Alexa) that was completely open source, built on top of an open hardware platform. Fast forward two years and $857,000 from crowdfunders and investors, and the first 1,000 units have just gone out to supporters around the world.

Watch me unbox the Mycroft AI Mark 1 “Advance Prototype” and take it through its paces:

Mycroft is really interesting for a variety of reasons:

  • Being open source software, you can see how the code works and tinker with it to make Mycroft do things that its creators never envisaged. This is a great way of learning to code, and understanding how to do speech recognition.
  • Mycroft capabilities, or ‘skills’, are typically written in the very accessible Python scripting language, and can easily be downloaded onto the device – see the minimal sketch after this list.
  • The Mark 1 itself is a clever combination of off the shelf hardware like Raspberry Pi and Arduino, but you can also run the Mycroft software on your existing Raspberry Pi, or on a conventional desktop/laptop. If you do have the Mark 1, then there are a wide range of hardware ports and interfacing options exposed on the back panel, including the full Raspberry Pi and Arduino GPIO pins, HDMI, USB and audio out.
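
To give a flavour of how approachable that is, here is a minimal skill sketch following the structure Mycroft documents for Python skills (a MycroftSkill subclass plus a create_skill factory). The skill name, intent file and dialog file names are invented for illustration, and the vocabulary files they refer to would live alongside the code in the skill folder.

```python
from mycroft import MycroftSkill, intent_file_handler


class HelloJiscSkill(MycroftSkill):
    """Toy skill: responds when the user asks it to say hello to Jisc."""

    @intent_file_handler("hello.jisc.intent")   # matched against phrases listed in an intent file
    def handle_hello_jisc(self, message):
        self.speak_dialog("hello.jisc")          # reads a line from a dialog file aloud


def create_skill():
    # Factory function Mycroft calls when it loads the skill
    return HelloJiscSkill()
```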

So from an edtech perspective it’s easy to see Mycroft being used as a hook for teaching advanced hardware and software concepts and project work. And perhaps we’ll see DIY Mycroft kits turning up in maker families’ Christmas stockings before long too!

It’s also important to keep in mind that Mycroft’s developers see it as a white label digital assistant that (for example) organisations could customise for their own needs, retaining full control over the hardware and software – unlike the black box solutions from the tech giants. There could be quite a few use cases where this total control turns out to be a key requirement, e.g. from financial services to the defence sector.

I’ll have more to say about Mycroft soon, but in the meantime do leave a comment and let me know what you think about it, and how you might use it in research and education…

3D printing: Lessons and tips

Recently, I started experimenting with the Ultimaker+ Extended 3D printer that we’ve brought into our office as part of the digilab. I was very excited and enjoyed every little detail about it, from unpacking to assembly, to then making something with it. As it happened, a week after getting the printer I was participating in the NFSUN (the Nordic Research Symposium on Science Education) conference to disseminate our work in the AR-Sci project. I thought it could be a great idea to show some of the 3D models we produced for the project in a new and different way. Throughout the project we produced a collection of interactive Augmented Reality experiences for science education, and 3D printing can also be a great way to visualise things we don’t normally see with the naked eye. 3D printing enables you to bring a digital 3D representation into the real world as a tangible object.

In this blog post I will talk about my first print using the Ultimaker+ Extended, how I prepared the 3D model to be printable, and some tips to help you fix yours easily.

I chose, somewhat at random, to print a 3D model of a chloroplast, which you can see in the image below. This is a screenshot of the chloroplast model as part of the Augmented Reality experience.

Chloroplast in AR

Getting started with a new 3D printer means using new software: Cura. Cura is a slicing application maintained by Ultimaker. It prepares your 3D model by slicing it into layers and producing a file known as G-code, the language the 3D printer understands.

However, some 3D models need to be prepared before you bring them into the printer software, as not all 3D models are ready to be printed straight away. There is a set of criteria that needs to be met to make your 3D model printable.

There are two approaches to fixing a 3D mesh: manually or automatically.

For a quick fix, I recommend software that lets you validate your 3D model, repair it and convert it to the right format before sending it to the printer. MeshLab and Netfabb are the ones I have used in several situations to prepare my 3D models, and I found both of them very easy and quick to use.

MeshLab:

  1. The 3D model needs to be in .stl format: you can use MeshLab to convert your design files from almost any format to .stl.
  2. Polygon reduction: it is advised that the files you print should be under 64 MB in size and have fewer than 1,000,000 polygons (triangular faces). If your 3D model is larger than that, you can use MeshLab for polygon reduction.

Netfabb:

  • Netfabb is very useful for checking your model accurately and analysing all of its features before you send it to a 3D printer.
  • I also found Netfabb very useful for scaling 3D models to real-world measurements.
  • The mesh needs to be closed, with no gaps or holes between faces and edges, and Netfabb can help you fix that using its automatic repair tool.

To fix the chloroplast 3D model, I decided to prepare it manually using 3D modelling software called Blender:

  • A single, seamless mesh: looking at the model, it can be printed as two separate pieces that are eventually put together and glued. This also allows me to print the whole model in two different colours.
  • Wall thickness: the wall thickness of the model needs to be above the printer’s minimum. This was an issue with the outer piece of the model, so I increased its wall thickness using the Solidify modifier, making sure that this didn’t affect or change the shape of the model (see the sketch after this list).
  • The last steps were to look for holes in the model, then convert it to .STL.
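
For anyone who prefers to script it, the same wall-thickness fix and export can be driven from Blender’s Python console. This is only a hedged sketch using Blender’s bpy API: the object name and thickness value are examples, and the exact operator arguments vary a little between Blender versions.

```python
import bpy

obj = bpy.data.objects["ChloroplastOuter"]      # hypothetical name of the outer piece
bpy.context.view_layer.objects.active = obj

# Thicken the thin outer wall without changing the overall shape
solidify = obj.modifiers.new(name="Solidify", type='SOLIDIFY')
solidify.thickness = 0.8                        # example value, in the scene's units
bpy.ops.object.modifier_apply(modifier=solidify.name)

# Export the fixed mesh as STL for the slicer
bpy.ops.export_mesh.stl(filepath="/tmp/chloroplast_outer.stl")
```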

It is worth noting that Blender already has an add-on called 3D Print Toolbox, which you can enable from Blender’s preferences menu. This tool checks your model before printing for issues like intersecting faces, distorted faces, thin walls and overhangs.

A final check of the model using the Mesh Analysis panel, which generates a heatmap of the problematic areas, is always a good idea.

Finally, below is the end result of my model, which I am very proud of. What I like about 3D printing is that you can rapidly prototype your design, visualise it, and share it with other people who can test it or use it to stimulate a conversation around ideas or in facilitated lessons.

Chloroplast 3D print final result

VR & AR World 2016: takeaways and potentials for the next generation learning environment

At the recent VR & AR World 2016, held at the ExCeL centre in London, I was really excited to explore the vibe and the experiences that exhibitors from 50 countries had come to share. With established companies like Meta, HTC and Epson, and many more, showcasing their work in Augmented, Virtual and Mixed Reality, it was exciting to see the latest developments. And while there was still some gimmick-led content, the majority were aiming to show what added value these technologies can offer. Informed by my visit to VR & AR World 2016, I will concentrate on their potential in the next generation learning environment.

  1. Sense of presence and the impact on the learner’s identity:

The latest Toyota C-HR VR experience was showcased at the exhibition: an experience where you are able to digitally discover and personalise this specific vehicle before it is released. It was the fact that I hadn’t yet tried getting into a car in VR that sparked my curiosity to try the Toyota experience. Throughout the experience, I was able to walk around the car, open doors, sit inside and change colours and some settings inside the car. However, what astonished me was a piece of feedback from one of the event’s visitors watching the demo of the experience. He said: “yes, it is definitely important to open the door to get off the car”. I stopped for a second thinking about the comment he made.

Sure enough, I felt I was actually in the car: I opened the door when I finished experimenting with the car and wanted to get out. If you look at the chair in the picture below, it is obvious that I could get off the chair without the need to open any door.

 

Toyota C-HR VR experience

It is funny, yet it is the same thing that makes VR really powerful. This is one of the main things that makes me so passionate about VR and convinced it has great potential. The sense of presence that VR enables can make you psychologically feel that you are taking up a physical space within the virtual environment whilst you are physically located in a different place. It is not only bringing us experiences we never thought we could have again, or enabling us to be in places that may be dangerous or costly to visit otherwise; it is actually enriching our own life experiences. If we think about how many opportunities VR can bring us, neurologically speaking it is contributing to shaping our identity and who we really are: “who we are depends on where we’ve been”.

This is great because with VR the experiences we will be able to try can be limitless, and could become more unique as these experiences become more personalised to how we individually interact with them.

I believe that this is going to make a tremendous impact on learners’ experiences and their understanding of the world, as they will be able to reach things and explore places far beyond their physical locations. Couldn’t this be transformative in the next generation learning environment, overcoming the limitations of having to travel to learn about specific places or particular periods of time, and furthermore enabling learners to get different perspectives?

Toyota used the HTC Vive in this demo, a favoured piece of VR tech that was all over the event floor thanks to its accuracy, speed and low latency compared to other VR headsets.

In an earlier blog post, Matthew Ramirez talked about the techniques and tricks that the Vive deploys to enhance immersion and the sense of presence in VR, and to overcome some of the problems seen with other headsets, like nausea.

 

  2. Virtual Reality and Mixed Reality in Maintenance and Engineering:

To take this example even further, Epson, Vuzix and Meta have been developing their own smart and mixed reality glasses for the automotive and construction industries. Colleges and vocational courses are the ones that could benefit from these technologies the most, particularly to facilitate and smooth the transition between colleges, apprenticeships and real life jobs. With VR, AR and MR, students could be better prepared for solving real life problems, as they could develop many of the required skills by working with cutting edge tech in safe and constructive learning spaces.

This in itself could empower learners, and particularly girls, to move into engineering and technology careers, at a time when a lack of confidence has contributed to a shortage of the skills required for STEM related jobs, as reported in the Guardian and on BBC News. Stimulating experiences that engage students in problem solving and challenging activities with the use of VR and AR could help unlock learners’ creativity to come up with solutions in a flexible and safe environment. They are able to make mistakes and get feedback, and this could increase girls’ confidence in their abilities and affect their career choices.

Here is a video on using Microsoft HoloLens Mixed Reality in architecture, which I believe is revolutionary: it will play a vital role in rethinking design, construction and engineering for the next generation of much needed engineers and designers.

 

  3. Virtual Reality and Storytelling:

Another point that attracted me on the event floor was more about the content, and in some ways the simplicity of the technology. The London Stereoscopic Company (LSC), which has a long history of publishing 3D stereoscopic images from Victorian times to the modern day, is now adopting VR to present its digital 3D stereoscopic content and films in a more exciting way and to make them available to anyone with a smartphone. It brings historical periods, books, collections of original images and VR films to us using a small, low cost VR kit, as you can see in the image.

I immediately thought that this is a great medium for storytelling, one that could make telling a story in a classroom more fun and flexible.

OWL VR KIT

You might argue that with this sort of simple kit you do not get the same high-end experience as with the more expensive VR headsets. However, I believe that for entry level users, or even kids, it could be a great way to bring information that is only available as text in books or on cards to life through VR and digital 3D content: “When this content is viewed in stereo the scene leaps into 3-dimensional life”, as Brian May describes it. I enjoyed having access to the fascinating world of stereoscopy they offer through the cards, as it made me feel like a child again.

This is the OWL VR Kit, a viewer for any 3D content and VR films, which comes as a box designed with high quality focusable optics.

Kits like this, and like Google Cardboard, are allowing more people to at least have a first experience of VR, and so increasing the number of people who are savvy and educated about what VR could potentially offer them.

My take from this example is that the technology alone cannot guarantee a fun and interesting experience. What we need are more compelling stories, stories that trigger emotions and that sense of wonder we all had when we were children. These are the sort of things that make VR valuable as a storytelling medium.

 

4. Haptic feedback in VR and what is next?  

At the event, I tried a form of haptic feedback for the first time, using a hand-held controller that allows you to feel a force while interacting with virtual objects on the screen. This SensAble Phantom haptics system is one of the machines VR and AR companies are experimenting with to explore what possibilities for lifelike feedback a doctor could have when puncturing a patient’s skin. The system acts as a handle to virtual objects, providing stable force feedback for a single point in space.

Hand-held controller

I imagine this is going to be essential to any VR or AR experience. We all use our hands intuitively to reach out for things, to interact with things and people, and to feel things. In this example of using the controller in remote surgery training, haptic feedback helped create an illusion so that trainees felt more present in the remote surgery room. This makes the surgeon’s or trainee doctor’s experience more believable, enabling them to understand and perform tasks far more effectively and safely. For me, as haptic and tactile technologies advance and become integrated into more of these experiences, particularly when led by cognitive neuroscience research on how we interact with the world, the quality of education with VR becomes increasingly high. Medical students will then have great opportunities to experience and operate in more realistic surgery situations safely and constructively. Haptic feedback could also be a way of better informing students in the decision-making process.

 

Surgery in VR with Haptic

The opportunities for using this in education, to perform tasks more efficiently and effectively, are great. In his interview, Chris Chin of HTC Vive expects education and medical experiences with VR to be the next big thing in 2017 – will that come with advances in haptic technology? Shouldn’t we prepare learners for these new delivery methods, or at least equip the current learning environment to deliver and support these sorts of experiences?

Here are a few examples of our work at Jisc in Augmented Reality and Virtual Reality to immerse, engage and educate learners. If you are interested in embarking on your own project or need help, we’d love to share experiences with you.

 

Research and Development at SIGGRAPH 2016

An exciting part of the SIGGRAPH 2016 conference was the experiential area allowing attendees the opportunity to interact with a range of emerging technologies – from Virtual/ Augmented Reality to haptics and immersive realities. Below are my top five picks, although I could easily have doubled the list.

Automated Chair mover

Furniture that learns to move through vibration

An innovative way of changing room layout using small vibration bursts to reposition the pose and location of furniture. Controlled over wi-fi – imagine the potential for time efficiencies in education and classroom/lecture room setups!

Ratchair: furniture learns to move itself with vibration

Tatiana Parshakova, Minjoo Cho, Alvaro Cassinelli, Daniel Saakes

 


 

Redirected Walking for AR/VR

Redirected walking

By manipulating the user’s camera view on HMDs (Head Mounted Displays), this application can redirect pedestrians to points of interest without them competing for space. No more fighting for the best view of the T-Rex in museums!

Graphical manipulation of human’s walking direction with visual illusion

  • Akira Ishii, Ippei Suzuki, Shinji Sakamoto, Keita Kanai, Kazuki Takazawa, Hiraku Doi, Yoichi Ochiai

 


Mask User Interface for manipulation of Puppets

VR based manipulation of animatronics

Using depth perception sensors to control body movements and a separate lip sensor (think Darth Vader’s mouthpiece) to manage mouth action, this VR setup brings a new dimension to animated puppetry.

Yadori: mask-type user interface for manipulation of puppets

  • Mose Sakashita, Keisuke Kawahara, Amy Koike, Kenta Suzuki, Ippei Suzuki, Yoichi Ochiai

 

 


Haptic suit

VR integrated Haptic feedback suit

Integrating localised haptic feedback into VR game experiences (object contact, explosions etc.), the Synesthesia Suit adds to this visceral experience by allowing the user to ‘feel’ the sound and music as if it were running through their body.

Synesthesia suit: the full body immersive experience

Yukari Konishi, Nobuhisa Hanamitsu, Kouta Minamizawa, Ayahiko Sato, Tetsuya Mizuguchi


 

Motion Predictor

 

Motion predictor

Using complex physics algorithms, Laplacian Vision allows the user to better predict object trajectory information, displaying it in the user’s field of view.

Laplacian vision: augmenting motion prediction via optical see-through head-mounted displays and projectors

  • Yuta Itoh

Enabling a sense of presence in Virtual Reality

Virtual Reality has emerged from the ashes of the early 90s as a technology with the potential to revolutionise our everyday lives. However, some major challenges, both technical and aesthetic, still need to be overcome before this can happen. In a series of short blog posts I will examine the major flaws preventing VR from becoming a mainstream technology and propose possible solutions. Let us start with the technical restriction of replicating realistic movement and a sense of presence.

Locomotion

Currently, the majority of VR experiences and games offer the user a seated experience where their movement is controlled with a gamepad. This can often lead to a disconnect between the user’s body and the movement viewed on the VR headset, which in turn can contribute to a lack of presence in the game space but, perhaps more importantly, can lead to feelings of nausea. The HTC Vive looks to overcome this problem by using room scale experiences that at least provide the user with the ability to walk around small areas. Multi-directional treadmills can also provide realistic walking simulation, although they work by cancelling the user’s motion to mimic acceleration (equivalent to walking on slippery ice), so there is a mismatch as the body attempts to translate this new movement through muscle memory into one it has become accustomed to for years (walking).

But what happens when you are in a VR experience that has a bigger footprint? For truly realistic VR on foot, such as exploring large buildings or even cities, this creates a major issue. Teleporting or returning to a controller can provide a lightweight solution, but it totally shatters any immersion and your mind becomes immediately disengaged. Redirected walking is a method used by developers to trick the mind into thinking the user is walking in a straight line when in fact they are moving on a slightly curved or different path in the real world. This enables up to a 26% gain or 14% loss in distance travelled, or a 49% gain or 20% loss in rotation (yaw), without user detection. The body perceives the movement in a certain direction but, without visual cues, is less accurate when it comes to measuring the distance or rotation travelled. So we are essentially hacking our brain and exploiting loopholes in our circuitry.

The paper Estimation of Detection Thresholds for Redirected Walking Techniques posits that a technique called curvature gain can allow the user to walk in a 22 metre circle in a real room while travelling in a straight line in the virtual world. This opens up huge opportunities for expanding the virtual environment in ways that were previously impossible, while retaining the sense of locomotion felt by the user.
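
In practice, a redirected walking system is applying a gain to the user’s real movement and keeping that gain inside the range people fail to notice. Here is a toy Python sketch of the idea; the threshold numbers come straight from the percentages quoted above, while the function names and example figures are purely illustrative.

```python
# Detection thresholds from the figures above: rotation can be scaled between
# -20% (gain 0.80) and +49% (gain 1.49) without the user noticing;
# translation between -14% (0.86) and +26% (1.26).
ROTATION_GAIN_RANGE = (0.80, 1.49)
TRANSLATION_GAIN_RANGE = (0.86, 1.26)


def clamp(value, low, high):
    return max(low, min(high, value))


def redirected_rotation(real_delta_yaw_deg, desired_gain):
    """Scale the user's real head rotation by a gain kept below detection thresholds."""
    gain = clamp(desired_gain, *ROTATION_GAIN_RANGE)
    return real_delta_yaw_deg * gain


# Example: the user turns 30 degrees in the real room, but we want them to turn
# further in the virtual world to steer them away from a physical wall.
print(redirected_rotation(30.0, desired_gain=1.6))  # gain clamped to 1.49 -> 44.7 virtual degrees
```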

 

Haptic response

Kevin Kelly, founder of Wired magazine commented that “we are moving from an internet of information to an internet of experiences”. VR can also include other physical stimuli through passive haptic feedback where physical objects are linked to visual cues in the virtual world such as a torch or sword. They can be further enhanced by adding other sensory properties such as temperature or texture. However, this is not scalable as for every virtual prop, a physical equivalent needs to be created.

Remarkably, a method called Redirected touching can provide a solution where a real object is placed in the physical environment and remapped in the virtual space to assume several different shapes. This is achieved by using a discrepancy in hand interaction between the real and virtual, evident in the example below. Again, the visual reference tricks the user into believing that the physical object has many more sides than it actually does.

 

 

Taking this one stage further, The Void has set up situated installations (think Laser Quest with VR) ranging from ancient temples to futuristic alien worlds, where the physical environments are augmented in VR with digital equivalents. At one point you noticeably feel the change moving from a research laboratory corridor indoors to a suspended walkway outdoors, feeling the breeze as you move along hundreds of feet above the ground, creating a sense of presence that is hard to imagine. The lack of photorealistic graphics does not detract from the user having a heightened sense of agency, facilitated by peripheral stimuli and the ability to interact with and touch all of the environments.

 

 

Of course, this particular setup is not scalable at the moment but you start to understand how by melding additional sensory elements to VR, immersion can be enhanced leading to a more convincing proposition. VR is by no means the finished article yet but the myriad of speed bumps that block its path to mainstream adoption are slowly being eroded by the ability to employ mind hacks, utilising innovative techniques to trick the user into a greater sense of presence.

How wearable EEG trackers can impact education

The 1982 film Firefox revolves around a plot to steal a Russian fighter jet that can be controlled by thought. Who would have imagined that just over 30 years later, the technology for this sort of science fiction to become science fact would be so readily available?

Jisc Futures Innovation Developer Suhad Al-Jundi testing the Emotiv Insight

Even though wearables have been in the news for the past few years, the majority of attention has so far been focused on AR and VR implementations; increasingly, though, the quantified self, or the ability to make sense of personal data, has gained traction. A year ago I came across a couple of examples (Emotiv and Neurosky) where companies had developed brain reading headsets that could analyse neural activity to determine the emotional state of users. Immediately my mind was full of ideas and possibilities for how this could benefit the educational space. Recently, I spent some time researching the Emotiv Insight, looking at possible use cases and examining the quality and types of analytics that could be extracted.

Some of my key thoughts are below:

Tracking analytics into other apps and providing intelligence on student engagement

Jisc has for some time been developing its Learner Analytics service for the sector, providing analysis of student data to help inform institutions on student progress and allow students to maximise their learning potential. Wearables such as the Emotiv can classify EEG data into identifiable neural patterns, suggesting whether users are engaged or excited or, by contrast, whether they are dissatisfied and stressed.
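
Headsets like these ultimately expose band powers (alpha, beta, theta and so on) rather than a ready-made “engagement” score, and one long-standing heuristic in the EEG literature is to treat beta divided by alpha plus theta as an engagement index. The sketch below is purely illustrative: the numbers are fabricated, and a real pipeline would work on properly filtered readings from the headset’s own SDK.

```python
def engagement_index(beta_power, alpha_power, theta_power):
    """Classic beta / (alpha + theta) engagement heuristic from the EEG literature."""
    return beta_power / (alpha_power + theta_power)


# Fabricated band-power samples, one reading per minute of a study session
session = [
    {"beta": 12.0, "alpha": 18.0, "theta": 10.0},   # settling in
    {"beta": 20.0, "alpha": 14.0, "theta": 9.0},    # focused
    {"beta": 9.0,  "alpha": 22.0, "theta": 14.0},   # drifting off
]

for minute, reading in enumerate(session, start=1):
    idx = engagement_index(reading["beta"], reading["alpha"], reading["theta"])
    print(f"minute {minute}: engagement index {idx:.2f}")
```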

Performance metrics based on EEG brain activity

Although this is a provocative area in terms of the ethical considerations around personal data, it could potentially be used to measure the student experience or the tools that are used to support future learning. For instance, a student’s engagement level could be measured while wearing a VR headset to help assess the effectiveness of the resource as a digital aid.

Learning 

Among the extravagant claims made for brain reading wearables is that they can help focus attention and improve memory retention in the field of education. Through learning games, similar to brain training apps like Lumosity, Neurosky’s suite of educational packages focuses on using EEG brain activity to measure progress through maths, memory and pattern recognition exercises. The main problem with non-clinical wearables at the moment is that incidental “noise” can distort the data, through the electrical signals that accompany physical muscle actions like frowning. Having said this, rapid innovation in hardware (such as providing more electrodes) has produced improved data, comparing favourably to earlier research studies using commercial technology (https://www.researchgate.net/publication/241693098_Validation_of_a_low-cost_EEG_device_for_mood_induction_studies).
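
One standard way of tackling that “noise” in software is to band-pass filter the raw signal so that slow drift and high-frequency muscle artefacts are attenuated before any analysis. A hedged sketch with SciPy, assuming a 128 Hz sampling rate and made-up data:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # assumed sampling rate of the headset, in Hz


def bandpass(signal, low_hz=1.0, high_hz=40.0, order=4):
    """Keep roughly the 1-40 Hz band where most EEG rhythms live."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=FS)
    return filtfilt(b, a, signal)


raw = np.random.randn(FS * 10)   # ten seconds of fabricated raw samples
clean = bandpass(raw)
print(clean.shape)
```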

However you interpret the accuracy and impact of these wearables on learning, contemplating other learning methods can often open up new perspectives, and as the famed management thinker Peter Drucker is credited with saying, “What gets measured, gets managed”. If you have a baseline score, simply paying attention to it and analysing it can force you to think of ways in which to improve it.

Controlling Internet of Things (IOT) environments

The connected world is hugely popular at the moment. I have a Nest smart thermostat that constantly adjusts its heating regime, not only monitoring internal temperature but also analysing the weather, historical settings and whether I’m physically home (via motion sensors) to efficiently manage energy costs. My wi-fi driven LIFX lights can be turned on remotely from my Apple Watch, or, using a simple app called IFTTT, I can create functions to control their behaviour based on the sunset or on when I move within a mile of my house.

Apple Watch controlling connected lights in home

As we enter a world where even water consumption becomes smart, the next logical step seems to be using your thoughts instead of an intermediary app or voice recognition as a control mechanism. Amazingly this is already happening with some high profile examples such as controlling a Tesla with an EEG headset.

In another use case Neuroscience students from the University of Florida have connected the Emotiv to operate drones.

Using EEG readers to diagnose and recognise patterns in behaviour for medical students.

In discussion with medical academics, one of the main difficulties students encounter is the ability to diagnose and interpret patterns correctly, especially in the field of neurology. As commercial EEG readings can sometimes involve millions of data points, it can be hard to add meaning and inference to the sheer scale of evidence. Often students move into hospital environments as qualified medics without having had the hands-on opportunity to interact with expensive MRI equipment, so they have to make do with simulated 2D patient examples.

Visualisation of brain activity using wearable Emotiv Insight EEG reader

While consumer wearables are not comparable to their commercial counterparts, they provide enough accuracy to allow students a viable window into the use and visualisation of so much complex data in 3D space. Being able to practice and reinforce their skills using realistic equipment can enable them to build confidence to take into their professional lives.

Track wellbeing and mindfulness and synchronously change environments

Understanding how you are feeling and monitoring emotional triggers could be hugely important in the medium to long term. Being able to inform students  as to which learning style works best for them and changing learning spaces to adapt to this new paradigm of personalised learning could be integral to a more effective and relevant institution.

Imagine this scenario in an institution thirty years from now – a student enters a library study space, a wearable device recognises their stressed state due to their final essay deadline and begins to play classical music into their headphones to focus their mind; as they sit down, the flexible memory chair aligns into a position conducive to positive thought (decreasing the stress-inducing cortisol) and the lights change colour and dim to reflect a less harsh study environment, encouraging creative thought. Objects and furniture become shape-shifting entities based on the individual’s preferred interface and responsive to tactile interactions. Incredibly, this technology is already starting to emerge from research labs such as Stanford University’s Mechanical Engineering Design Group, where the desk environment morphs depending on the device and materials students are using.

 

To support special needs students with communication and accessibility requirements.

Wearables have a tremendous opportunity to offer increased support for students that struggle to communicate effectively and have specific user requirements. :prose is an app to help nonverbal people communicate by tapping or swiping on a mobile device. It works in a similar way to sign language, attributing specific meaning or phrases to user gestures on a touchscreen. However, for some users affected by conditions like Parkinson’s or ALS, which inhibit a person’s motor skills, the movements required to use the app can be problematic.

To overcome this, using the Emotiv Insight the user thinks of the physical action assigned to the phrase (e.g. swiping up could mean “I want”) and the words are spoken aloud. Students can also build up a personal silo of custom phrases that work better for their memory retention. This has proved very effective, achieving results that would take years using other mental training solutions. In an education context this could help to build confidence and engagement, facilitate increased independence, convey emotions and aid learning timelines in a more fluid manner with little intervention.

In the words of Gil Trevino, Lead Direct Support Professional at PathPoint, “This advancement has allowed someone who once was a non-verbal communicator, the ability to communicate thoughts, feelings and answers in a way she never has before.”

 

The power of technology and innovation to support education in today’s connected world should not be underestimated, and if used in the right way it can be truly transformational. But its true success can be judged by how transparently it delivers value as a silent partner, facilitating radical change while allowing pedagogy and traditional frameworks to remain central.

Having said this, we should not underestimate the awed reaction of students using technology that provides glimpses into the future, with the potential to give them a renewed appetite for learning. Arthur C. Clarke observed that “any sufficiently advanced technology is indistinguishable from magic” – and ultimately, isn’t this the aim of all education: to be inspired and imbued with that wow factor, to enable us to think critically for ourselves and to instil a passionate desire to become lifelong learners?

 

IOT Smart London 2016: a reflection and applications in education

Following the Wearable Show that I attended last month in London, it was recommended that I attend the Internet of Things Smart London show. This offered a comprehensive set of forums around IoT and smart cities, ranging from future technologies in IoT, business and strategy, smart connectivity, location services and smart applications, to big data and analytics and the security of things. It was an invaluable opportunity for me to be inspired about the future of technologies and connected things.

I left the conference with new knowledge and fresh perspectives on how the latest technologies could be applied to real life situations, many of which I thought would also be applicable in education.

In my previous blog post, I wrote about how wearable technologies are bringing about a big shift in our behaviours, making us more bonded with our devices and machines. Today we carry mobile phones and iPads everywhere, and increasingly we are adding connected watches and glasses to them.

So what is the internet of things?

As its name suggests, it is a near limitless list of physical objects connected through a network or over wi-fi, able to send and receive sensory data through embedded sensors and computing power.

Since 2008, the number of sensors connected to the internet has exceeded the number of people on Earth. Throughout the show I realised that the internet of things has now reached a very exciting stage, where it is able to save lives, detect fraud and make customers more satisfied. Here are some examples of what I see as pioneering applications of IoT that could help, or are already helping, to make a difference to people’s quality of life, particularly in education and healthcare.

Assistive IoT technology for visually impaired people: 

 I was very impressed by the tools developed by Microsoft to empower visually impaired people to face their mobility challenges and difficulties.

Visually impaired people can use the Seeing AI (artificial intelligence) app on their smartphones, working together with Pivothead glasses, while they are discovering the world around them. The glasses have a camera that turns visual information into audio cues. In this video, you can see how a visually impaired person is now able to read the expressions of people around him and read what is on a restaurant’s menu using the Seeing AI app.
[youtube https://www.youtube.com/watch?v=R2mC-NUAmMk]
After this session by Microsoft, I realised even more that the opportunities of IoT could be enormous in bringing together people, processes, data and objects to make networked connections more valuable than before. It is incredible that we are now able to use artificial intelligence to interpret emotions, passing them on to those who cannot see them and helping them understand.

Microsoft is using Cognitive Services, a collection of APIs that “allows the system to see, hear, speak, understand and interpret our needs using natural methods of communication”. For visually impaired students, being able to read their colleagues’ facial expressions and feedback throughout a discussion using smart AI apps like these would certainly help them overcome many communication difficulties, feel equal in the conversation, and be more motivated to engage in collaborative discussions and activities.

Wearables, devices and apps are good, but not enough!

Machine learning and big data are the giant leap forward in the IoT world. We have all seen that everyday things can become smart when more sensors and actuators are connected to networks, sending and receiving data that can be monitored remotely. However, in my view IoT applications can also facilitate more personalised learning, gleaning deeper insight into us as individuals and using this knowledge to cater for our unique demands and needs.

In all of the talks I attended, machine learning and artificial intelligence were central and key. As the number of different types of IoT devices on the market continuously grows, their variety leads to an ever higher level of complexity in the IoT ecosystem, and I believe AI and machine learning will be an essential component in achieving mainstream ubiquity through increased efficiency and productivity at enormous scale.

John Bates, the CEO of PLAT.ONE, used the example of Uber in his talk Thingalytics and Thinganomics: Disruptive IoT Business as a model that could be applied to other services to achieve a real-time, dynamic and cost-effective market. This is what John calls “Uberization”. Let’s say we could have smart parking using smart traffic monitoring gates, where the charge changes based on demand and traffic flow in the city!
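
As a toy illustration of that “Uberization” idea, a smart parking service might scale its base tariff by the occupancy its gates report. Everything in this sketch – the function, thresholds and prices – is invented purely for illustration.

```python
def parking_price(base_price_per_hour, occupied_spaces, total_spaces):
    """Scale the hourly tariff with demand reported by smart gates (illustrative only)."""
    occupancy = occupied_spaces / total_spaces
    if occupancy < 0.5:
        multiplier = 0.8      # quiet: discount to attract drivers
    elif occupancy < 0.9:
        multiplier = 1.0      # normal demand
    else:
        multiplier = 1.5      # nearly full: surge pricing
    return round(base_price_per_hour * multiplier, 2)


print(parking_price(2.00, occupied_spaces=45, total_spaces=50))   # nearly full -> 3.0 per hour
```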

Interestingly, the healthcare system could also use the same model to drastically improve its myriad systems and save lives. Hospitals could be enabled to provide a level of care previously unimagined while reducing healthcare costs by “continuously analyzing locations, vital signs, drugs administrated, room sensors and many other knots and personalizing them to the medical situation”, as John said.

Applications of IoT in healthcare could be truly revolutionary: they could allow older people to stay in their homes and keep them from making unnecessary trips to the hospital. It would be amazing to have subscription based, IoT-enabled monitoring at home that could give insights into an older person’s daily activities. As the cost of sensors continues to drop and the technologies become more and more viable, monitoring devices will increasingly be an integral part of the patient’s daily life.

If data about our bodies and how they work could be streamed and analysed continuously, doctors could get alerts when we are likely to suffer a massive heart attack, based on our heart rate, blood pressure and temperature. Surely this would lead to a better quality of life?

Big Data and Streaming Analytics:

With the increasing use of sensors and the massive amount of data generated every second about everyone’s life and activity, Mike Gualtieri considered machine learning to be the brain of any IoT system; but what he also sees as very powerful for achieving personalisation is big data and streaming analytics, which have become mature enough to be used in any enterprise.

One of the projects Jisc is pioneering is learning analytics, which already consumes big data sets and streaming statistical inputs from students and institutions. If we had an infrastructure that enabled real time analytics, there would be so many big data analytics opportunities to help support each student’s individualised, personalised learning. Connected classrooms could be the future with smart IoT, where students and teachers are able to communicate across countries and get access to materials tailored to their needs that might be used in other schools or even districts.

Mike suggested that “enterprises must act on a range of perishable insights to get value from IoT data”. Using that idea, we can envision how educational institutions could gain timely insights and make interventions to overcome the challenges they face with the student experience.

IoT and gamification:

This is a truly personalised learning experience achieved at scale, made possible by AI integrated with a gamified tool to change people’s behaviour. Kolibree’s smart, multi-faceted toothbrush software is used to educate children, through a game, about how to brush their teeth, and has been able to transform the way they do it.

The toothbrush has 3D motion sensors to track brushing movements, which parents can check to see how well their children have brushed.

The more data the system has on children’s brushing habits, the more reliable and accurate the software becomes. The game’s reward system encourages children to improve their habits based on gamification principles.

What is powerful about this is that the brushing data obtained is made available via an open API to third party game designers, who are able to develop new apps and add more fun components to further enhance brushing time.

In schools, the potential of gamified IoT lies in maximising engagement while enabling real-time interaction and feedback that help shape users’ behaviour and deliver emotional rewards that encourage ongoing engagement.

Let’s say we could use neurosensors in schools during particular activities to provide insight into students’ cognitive activity, using EEG technology that measures brain activity. This would allow teachers to dedicate more attention to the students who need it, not just those who ask for support.

The Foc.us focus headband, aimed at gamers, uses a trace amount of electrical current to stimulate the prefrontal cortex, producing a positive short-term effect on playing ability.

Furthermore, with AI, gamification elements could be personalised depending on how each student is motivated, enhancing deep understanding of difficult concepts while letting them have fun learning at the same time.

Considerations to bear in mind: IoT security, privacy and data protection:

Discussion of security and privacy issues was prevalent at the show, and necessarily so. With more devices becoming connected to different objects in our houses, on our bodies and in our workplaces, IoT security becomes more complex.

I totally agree that when using and adopting IoT in different areas such as healthcare and education, there are a lot of issues to do with privacy, data protection and information security which, if not considered properly, could raise questions about the confidentiality, integrity and availability of information.

However, to some extent this is also a fear of new technologies, and that is not new. Fear of using new tech always exists. We were all once frightened to submit our credit card information online, while now it is something we do not really think about as much.

I think it is important that we understand that it is not the technology itself that increases the risks to privacy, security and data protection; it is the way it is used and applied.

To deal with these challenges, I believe in building trust in our digital world, which could make us more confident when adopting and using new technologies.

  • Information about data protection principles and privacy policies needs to be stated clearly and translated for the end user. For example, the purpose of processing personal data, and how it is processed, needs to be made clear to the owner of the data.
  • Designers of IoT systems can do a lot if they keep security issues in mind early in the development and design stages of any IoT project.
  • To achieve that, we also need to empower people by involving them in making decisions about who owns the data and has control over it, how the data is shared and with which parties, what elements to connect, and how to interact with other digital users and technology providers. In my view, this is what could build the transparency and confidence needed to ensure trust and safety. As was suggested at the event, “eventually nobody is going to own personal data – there’s just going to be permissions and relevant questions”.