Blockchain – Beyond the hype

 

In the last few years blockchain has been a buzzword in technology, and companies have rushed to launch their own blockchain solutions in much the same way that apps exploded with the advent of smartphones in the mid-2000s. For many people outside the technology space, blockchain is synonymous only with Bitcoin and wild price volatility (as we saw before Christmas 2017, when it reached almost $20k, an unprecedented rise from just under $1k in 12 months). In my opinion, financial value is the least interesting part of the technology: it has the potential to disrupt traditional paradigms such as banking and medicine, in which the general public has lost faith in recent times.

In short, blockchain is a ledger of records organised in 'blocks' that are linked together by cryptographic validation. It is decentralised and distributed, which means that this validation occurs across multiple nodes (participants), in most cases through proof of work (PoW). This requires the service requester to complete a computational problem, which deters cyber attacks such as DDoS.
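
To make the "blocks linked by cryptographic validation" idea concrete, here is a minimal Python sketch (a toy, not any production blockchain) showing how each block commits to the previous block's hash, and how a simple proof-of-work search makes blocks expensive to forge:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents (including the previous block's hash) with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(records, previous_hash, difficulty=4):
    """Toy proof of work: find a nonce so the block hash starts with `difficulty` zeros."""
    block = {
        "timestamp": time.time(),
        "records": records,
        "previous_hash": previous_hash,
        "nonce": 0,
    }
    while True:
        digest = block_hash(block)
        if digest.startswith("0" * difficulty):
            return block, digest
        block["nonce"] += 1

# Build a tiny chain: each block stores the hash of the one before it,
# so tampering with an old block invalidates every later block.
genesis, genesis_hash = mine_block(["genesis record"], previous_hash="0" * 64)
block1, block1_hash = mine_block(["Alice pays Bob 5"], previous_hash=genesis_hash)
print(block1_hash)
```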

The real issue at present is the lack of viable use cases being implemented, coupled with the absence of any meaningful regulation; as a result, many ICOs (Initial Coin Offerings) have been used to raise quick capital with a limited roadmap or long-term vision. However, there are a few projects that offer solutions to existing problems that are not being addressed by conventional means.

WaBi

https://www.wacoin.io 

WaBi is a platform focused on tracking consumer products, using the blockchain to store and communicate information about their safety and authenticity. Using RFID and anti-tamper labels to track data, its aim is to reduce counterfeit products, such as medications and food, in the supply chain. WaBi tokens are used as loyalty points to incentivise scanning of anti-counterfeit labels on Walimai-protected products. It is primarily focused on the Chinese market, where there have been a number of scandals around the authenticity of baby formula, most notably the 2008 baby formula scandal (http://news.bbc.co.uk/1/hi/7720404.stm) that was reported to have affected over 300,000 infants. As part of Walimai, the project already has a market and an immediate use case.

Aion

https://aion.network 

Due to the plethora of blockchain solutions out there, it is only natural that there would need to be a way of connecting them and transferring data and logic (or value and smart contracts) across multiple networks that might otherwise become isolated silos. Aion's mission is to provide a scalable and interconnected solution, rather as Cisco connected disparate networks in the 1990s. This could lead to mass adoption, accelerating the development of DApps (decentralised applications) that create bridges from one blockchain protocol to another. Put simply, it will develop a blockchain protocol that acts as a "router" connecting these networks. Validation of blocks and transactions will eventually use Proof of Intelligence (PoI), which, unlike PoW, requires a unique puzzle to be solved using the PoI algorithm. The intent is to motivate the creation of AI-specific or specialised hardware that could be used for machine learning and neural network training in the future.

 

 

MedicalChain

https://medicalchain.com

Even within the NHS there are so many disparate hospital trusts and groups that sharing patient data through a generic Electronic Health Record (EHR) is very problematic. This can lead to unnecessary delays in referrals and diagnoses, reducing the effectiveness of medical practitioners who have limited and sometimes inaccurate data at their disposal. Medicalchain uses blockchain technology to securely store health records and maintain a single version of the truth. Different organisations such as doctors, hospitals, laboratories, pharmacists and health insurers can request permission to access a patient's record for their purposes and record transactions on the distributed ledger. Ultimately, it is the patient who has control over their own health record.

Medicalchain provides solutions to today's health record problems. The platform is built to securely store and share electronic health records, reducing the time needed to access potentially lifesaving patient data. In the first instance it is providing a telemedicine solution, so that patients will be able to have online medical consultations secure in the knowledge that their records will not be compromised. This is in response to multiple high-profile patient data breaches in the recent past – https://www.telegraph.co.uk/news/2017/03/17/security-breach-fears-26-million-nhs-patients/

Medicalchain has recently partnered with The Groves NHS practice, consisting of four GP practices and supporting over 30,000 registered patients and 1,000 private patient families, with a system launch in early July.

 

New technology is only worthwhile if it improves or iterates on what we already have, and this is especially true of blockchain. Simply duplicating existing systems on a different platform will not encourage innovation or lead to ubiquity. There has to be a unique reason, or a problem identified that traditional methods cannot solve, for it to be a worthwhile investment of money and time. The examples above show that there is certainly potential for blockchain technology to be a game changer, but in my opinion there are still too many projects that can be classed as vapourware: all style and hype but no discernible product. While these still exist, they will distract newcomers from the benefits and slow down mainstream adoption. Even so, I still believe the winners emerging from this space can revolutionise and transform existing systems, like banking, to which we have for so long afforded a permanence in our modern society.

Further Reading

Jisc Futurist Martin Hamilton has written a horizon-scanning report on blockchain, covering its potential impact on research and education, which is available at https://www.jisc.ac.uk/reports/blockchain-in-research-and-education

360-degree cameras – bringing VR to the mainstream and educators

Last year was a great year for everything AR, VR and 360 filming – all the big names, from Apple and Facebook to Samsung, contributed to a massive breakthrough that pushed immersive experiences into people's hands. From Apple's augmented reality technology ARKit and its rival, Google's ARCore, to the latest Samsung Gear 360 (2017) camera, which is no longer limited to Android users, we can not only enjoy immersive experiences but also quickly create our own with little more than a mobile phone.

We have recently received the Samsung Gear 360 (2017) camera to join the Jisc Digi Lab kit. So, what is a 360 camera and what can we create with it?

Samsung360

I am starting a series of blog posts on my recent experimentation with the Samsung Gear 360 to give some insight into the technology, how to use it, and its applications in the educational space.

360 cameras have two or more lenses attached to each other, and they take 360-degree spherical images and videos that can be monoscopic or stereoscopic. Because Facebook, YouTube and Vimeo now support 360-degree content, you can view and share 360 images or videos in a web browser – just pan around with your mouse – or on your mobile phone – just rotate the phone to look around your 360 videos and images.

This is a new viewing paradigm: for well over 100 years we have only looked at screens with a limited periphery. "With 360-degree cameras, the field of vision is opened up to a dimension we've never had to deal with before," as John Bucher puts it in his book Storytelling for Virtual Reality.

You may have seen one of your Facebook friends streaming their birthday party live as a 360 video, or live streaming a concert you wished you could attend, bringing an extra level of immersion through your mobile device.

Technologies like this blur the boundaries between us and other people, spaces and cultures. Using a Virtual Reality headset such as Google Cardboard can not only transport you into these places and cultures but also allow you to live vicariously through another person's eyes.

Try this immersive 360 video of the Edinburgh Fringe Festival.


An immersive 360 video Edinburgh Festivals experience.

This new form factor has brought massive opportunities to content creators – with the Samsung Gear 360 you can shoot 360 video that could become content for a Virtual Reality experience.

For some experiences you don't need to create a whole CGI environment to build a 3D world, and 360 experiences do not necessarily require multi-headed professional cameras with a significant amount of stitching work afterwards. The mobile app accompanying the camera stitches images and videos automatically, so you can immediately load and share them – unlike other cameras that require significant editing before the content can be shared.

This is great news for teachers and those who have limited time and money to spend on creating and learning how to develop 360-degree content.

360 cameras, and in particular the Samsung Gear 360, have been used by journalists at the BBC and The New York Times to capture and tell immersive stories. If you are a teacher and you'd like to explore this area, you could put some content together within a very short time. In addition, there are tools that can add interactivity to the experience, such as ThingLink. The University of Southampton is experimenting with this tool to create interactive field trips – adding annotations to any 360 video using ThingLink could enable you to build a more guided, engaging and meaningful learning experience.

Furthermore, teachers and students wanting to explore 360-degree experiences for teaching and learning can find plenty of videos online produced by the likes of NASA and the BBC. Say you want to make a particular science class more interesting and engaging: go to the NASA 360 Facebook page and you can take your students to the red planet through a classroom activity that is a lot more fun.

While 360 videos are very good at giving users an experience of a place that is otherwise inaccessible, they can also be used to prepare students for future employment. Medical schools are using 360 videos to prepare their students for emergency rooms and emergency situations. This emotional and cognitive readiness can play a significant role in lowering stress and anxiety levels among students and even junior doctors.

Filmmakers and journalists have found 360 video a great medium for bringing immersive stories to audiences in new ways that could affect people's behaviour. Journalists have realised that 360 video can be a game changer for storytellers, as a new format of story emerges with this technology. Journalists and storytellers are now able to give roles to viewers – what is it like to be an autistic teenager? The Party, the Guardian's latest virtual reality 360 film, offers a glimpse into how autistic people feel and cope with a distressing situation such as a social event. Through this experience you enter the world of an autistic person, and then the emotional aspect kicks in. I believe this is a powerful medium that enables shared experiences, enhancing empathy and encouraging people to act and change their behaviour or reactions towards life situations and issues. The emotional journey of the audience through the experience is crucial in communicating ideas about mental health, bullying, racism, and even wider international issues such as climate change.

Virtual Bodyworks and the Digital Catapult are using their research findings in real-life applications to show that

"powerful changes can be induced in people who have these experiences, even to the extent of decreasing feelings like racism or understanding the consequences of your own actions towards others."

360 filming and immersive storytelling are being taught as part of university journalism programmes, for example at Stanford University in the US, as they are believed to impact the audience in ways not possible with other journalistic media.

While we talk about "immersive" technology, which is definitely something I am very passionate about, I still think that it is not the technology which is innovative, but the stories and the ideas that these technologies enable us to realise.

The next blog post will follow on shooting and filming with the Samsung Gear 360 camera.

For now, join me at our Digifest conference next week for a show-and-tell session on 360 cameras and filming. I will be in the Digilab area, so if you are around, please stop by.

“Alexa, ask Jysk” – Hacking new Alexa skills for Amazon Echo

An estimated 30 million smart speakers have been sold in the United States alone, with devices like Google Home and Amazon Echo (“Alexa”) nestling in the corner of many living rooms, kitchens and yes – even bedrooms. Amazon are betting big that you’ll want one in every room, even selling the Echo Dot in a six pack and twelve pack.

The digital assistants in these smart speakers seem like something from a science fiction movie – you can ask Alexa everything from general knowledge questions to very specific things like getting a weather forecast or checking to see if there are likely to be any delays on your commute into work. But how do they really work, and how might you go about teaching a digital assistant like Alexa some new tricks?


Amazon Echo – photo CC BY-NC-ND Flickr user michaeljzealot

In this post I’ll look at how to create a new “skill” for Alexa, and hopefully demystify the technology a little bit.

At Jisc we operate dozens of services, for around 18 million people – from teachers and learners to researchers and administrators. Why log a call about a product or service when you could just ask Alexa? Maybe you could get your question answered there and then, from the information that we already have in our systems?

I thought it might be fun to work up a few practical examples, for instance:

  • I’m a network manager or IT director. What’s the status of the Janet network – are there any connections down? Our Janet status page and Netsight service have this info
  • I’m a researcher writing a new paper. What’s this journal’s open access policy? Our SHERPA services let you conveniently find out about publisher and funder OA policies
  • I’m an administrator reviewing a grant application before submission. Do we really need to build a hyperbaric chamber, or is there another institution nearby that has one? Our equipment.data service lets you find kit that institutions are sharing with each other and industry

What would that look (and sound) like? Here’s a short video that my daughter and I made to demo our prototype Alexa skill for Jisc:

The Alexa digital assistant is the Amazon product that underlies all of the Echo products, and we are also starting to see Alexa appear in third party products like the recently announced Sonos One speakers. If you don’t like saying Alexa, or perhaps someone in your house has a similar name, you can call it computer instead – very Star Trek!

Amazon provide developers with a set of tools for building new Alexa applications. I’ll give you a quick overview here, but as you can imagine there is quite a lot of detail for those who want to take a deep dive into all things Alexa and Echo related.

First off, you have to tell Amazon what your new Alexa skill will be called – it needs to have a distinctive name because there are already over 15,000 skills out there. It turned out that Jisc doesn’t work as our skill’s name, and I had to resort to a phonetic spelling of our company’s name instead – Jysk. Once you find the right word or words to invoke the skill, people will be able to say Alexa, ask Jisc – or rather, Jysk!


Defining the Jisc (“Jysk”) Alexa skill

Secondly, you need to tell Amazon what kinds of questions people will be asking of your skill. This is where the magic and mystery of Alexa starts to unravel a little. It turns out that you have to be pretty precise about the wording of the phrases that you want Alexa to respond to, although there is a little wiggle room. For instance, if we tell Alexa to respond to questions about janet network status, it will also recognise that status of the janet network is the same question worded slightly differently.


Sample utterances for equipment.data search

Thirdly, we need to tell Alexa how to find out the answer to the question. If it’s a simple question like janet network status, then this isn’t actually too hard either – we just need a place that we can go to for the requested info. And if it’s already available on the web somewhere, then we can even copy the info off the webpage without having to set up some kind of complicated database connection or Application Programming Interface (API).


Slot values for equipment.data search

If our question has parameters, things do get a bit trickier – and this is where the final bit of mystique evaporates. Alexa doesn’t and can’t know all of the possible journal names or pieces of equipment that we might want to ask it about. Instead, when the skill is created, we tell it what the possible parameters are for the question. Amazon call these “slots”. When we’re asking about journal names we might include Nature, Computer Networks and so on in our slots. When we’re asking about equipment, our slots might contain mass spectrometer, spectroscope, hyperbaric chamber, and so on.
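
As a rough illustration of how intents, utterances and slots fit together (the names below are hypothetical, not the actual Jysk skill definition, and the structure is simplified compared with the JSON the Alexa console generates), the interaction model boils down to something like this:

```python
# Hypothetical, simplified sketch of an Alexa interaction model for the "Jysk" skill.
# The real skill is defined in the Alexa developer console; all names here are illustrative.
interaction_model = {
    "invocationName": "jysk",
    "intents": [
        {
            # Simple question with no parameters
            "name": "JanetStatusIntent",
            "samples": ["janet network status", "status of the janet network"],
            "slots": [],
        },
        {
            # Question with a parameter ("slot") for the piece of equipment
            "name": "EquipmentSearchIntent",
            "samples": ["find a {Equipment} near me", "who has a {Equipment}"],
            "slots": [{"name": "Equipment", "type": "EQUIPMENT_TYPE"}],
        },
    ],
    # Custom slot type listing the values Alexa should expect to hear
    "types": [
        {
            "name": "EQUIPMENT_TYPE",
            "values": ["mass spectrometer", "spectroscope", "hyperbaric chamber"],
        }
    ],
}
```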

So where does Alexa go to get the answers to these questions? The answer is that it makes a simple HTTP request to a web server somewhere. This can be any server, but Amazon are quite keen for you to use their Lambda system, which lets you run code on demand without the overheads of running (securing, patching etc.) a regular server. Lambda is a whole story in itself, and for demo purposes I’ve simply pointed Alexa at an existing Jisc test server.

What does the code look like to process a request from Alexa? Pretty simple, actually. Here’s the actual code that I use to make the Jisc (Jysk!) Alexa skill work…


Alexa PHP sample code for the Jysk intent

Let’s spend a moment unpacking this – we’re using the Amazon Alexa PHP Library to process the incoming request. This creates an Alexa request object that contains the question and (if appropriate) the slot that Amazon thinks we were asking about. We can then decide what to do with the request. For the sample Jisc skill we fetch the Janet network status or journal policy information from a file that has already been populated separately, and for the equipment database lookup we go off and run an external program. Any external dependencies have to be pretty seamless, otherwise the user will be left waiting and wondering what is going on.
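
The handler shown above uses the Amazon Alexa PHP Library. Purely to illustrate the same request/response flow in another way, here is a minimal Python/Flask sketch; the intent and slot names, the status file and the equipment lookup are all hypothetical stand-ins, not the real Jisc back end:

```python
# Minimal sketch of an Alexa skill endpoint in Python/Flask (illustrative only).
# A production endpoint must also verify Amazon's request signature and handle
# LaunchRequest / SessionEndedRequest messages, which are omitted here.
from flask import Flask, request, jsonify

app = Flask(__name__)

def speak(text):
    """Wrap plain text in the JSON envelope Alexa expects back."""
    return jsonify({
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    })

@app.route("/alexa", methods=["POST"])
def alexa_endpoint():
    body = request.get_json()
    intent = body["request"]["intent"]

    if intent["name"] == "JanetStatusIntent":
        # e.g. read a status summary that a separate job has scraped from the status page
        with open("janet_status.txt") as f:
            return speak(f.read().strip())

    if intent["name"] == "EquipmentSearchIntent":
        equipment = intent["slots"]["Equipment"]["value"]
        # here we would go off and run the external equipment lookup
        return speak(f"Looking up institutions that share a {equipment}.")

    return speak("Sorry, I didn't understand that.")

if __name__ == "__main__":
    app.run(port=5000)
```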

It’s important to note here that you don’t have to release your work-in-progress Alexa skill to the world until you are ready – in the Alexa console you can specify that the skill only works on Alexa devices linked to your own Amazon account, which is probably best for testing. You can also simulate interaction with an end user to test your back end code independently of Alexa’s speech recognition.

Sometimes it’s easier to see things rather than read about them, so I’ve made a short video that walks you through the Alexa developer console and shows you how this all fits together:

So now you know how to make your own Alexa skills. What will you make? Why not leave a comment and let me know!

 

AI can be DIY – a Raspberry Pi powered “seeing eye”

Scarcely a day goes by right now without a breathless newspaper headline about how artificial intelligence (AI) is going to turn us all into superhumans, if it doesn’t end up replacing us first. But what do we really mean by AI, and what could we do with it? In this post I’ll take a look at the state of the art, and how you could build your own do-it-yourself “seeing eye” AI using a cheap Raspberry Pi computer and some free software from Google called TensorFlow.

If you want to have a go at doing this yourself, I’m following a brilliant step by step guide produced by Libby Miller from BBC R&D. I should also note that this is possible because of Sam Abrahams, who got TensorFlow working on the Raspberry Pi, and also literally wrote the book on TensorFlow.

Raspberry Pi logo screen printed onto motherboard

AI right now is mainly focused on pattern recognition – in still and moving images, but also in sounds (recognising words and sentences) and text (this text is written in English). A good example would be the Raspberry Pi logo in the picture above. Even though it’s a little blurred and we can’t see the whole thing, most people would recognise that the picture includes some kind of berry. People who are familiar with the diminutive low-cost computer would identify the Raspberry Pi logo almost instantly – and the circuit board background might help to jog their memory.

While we talk about “intelligence”, the truth is that this pattern recognition is pretty dumb. The intelligence, if there is any, is supplied by a human being adding some rules that tell the computer what patterns to look out for and what to do when it matches a pattern. So let’s try a little experiment – we’ll attach a camera and a speaker to our Raspberry Pi, teach it to recognise the objects that the camera sees, and tell us what it’s looking at. This is a very slow and clunky low tech version of the OrCam, a new invention helping blind and partially sighted people to live independently.


Our Raspberry Pi powered “seeing eye” AI

The Raspberry Pi uses very little electricity, so you can actually run it off a battery, although it’s not as portable or as sleek as the OrCam! And rather than a speaker, you could simply plug a pair of headphones in – but the speaker makes my demo video work better. I used a Raspberry Pi model 3 (£25) and an official Raspberry Pi camera (£29). If you’re wondering what the wires are for, this is my cheap and cheerful substitute for a shutter release button for the camera, using the Raspberry Pi’s General Purpose Input Output (GPIO) connector. GPIO makes it easy to connect all kinds of hardware and expansion boards to your Raspberry Pi.

So that’s the hardware – what about the software? That’s the real brains of our AI…

Google’s TensorFlow is an open source machine learning system. Machine learning is the technology that underpins most modern AI systems, and it’s responsible for the pattern recognition I was talking about just now. Google took the bold step of not just making TensorFlow freely available, but also giving everyone access to the source code of the software by making it “open source”. This means that developers all over the world can (and do) enhance it and share their changes.

The catch with machine learning is that you need to feed your AI lots of example data before it’s able to successfully carry out that pattern recognition I was talking about. Imagine that you are working with a self-driving car – before it can be reasonably sure what a cat running out in front of the car looks like, the AI will need training. You would typically do this by showing it lots of pictures of cats running out in front of cars, maybe gathered during your human driver assisted test runs. You’d also show it lots of pictures of other things that it might encounter which aren’t cats, and tell it which pictures are the ones with cats in. Under the hood, TensorFlow builds a “neural network” which is a crude simulation of the way that our own brains work.

So let’s give our Raspberry Pi and TensorFlow powered AI a spin – watch my video below:

Now for a confession – I didn’t actually sit down for hours teaching it myself. Instead I used a ready-made TensorFlow model that Google have trained using ImageNet, a free database of over 14 million images. It would take a long time to build this model on the Raspberry Pi itself, because it isn’t very powerful. If you wanted to create a complex model of your own and don’t have access to a supercomputer, you can rent computers from the likes of Google, Microsoft and Amazon to do the work for you.
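
The step-by-step guide I followed uses TensorFlow's ImageNet example scripts; purely as a sketch of the overall flow (wait for the button, capture a photo, classify it, speak the answer), here is roughly what the loop could look like using a pretrained Keras/TensorFlow model and the espeak speech synthesiser. The GPIO pin number, file paths and choice of model are illustrative assumptions, not the guide's exact setup:

```python
# Rough sketch of the "seeing eye" loop: wait for the GPIO shutter button, grab a photo,
# classify it with a pretrained ImageNet model, and speak the top guess.
# Libby Miller's BBC R&D guide (which I followed) uses TensorFlow's ImageNet example
# scripts instead; this Keras-based version just illustrates the same idea.
import subprocess
import numpy as np
from gpiozero import Button
from picamera import PiCamera
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

button = Button(17)                        # shutter button wired to GPIO pin 17 (illustrative)
camera = PiCamera()
model = MobileNetV2(weights="imagenet")    # downloads the pretrained ImageNet model on first run

def classify(path):
    """Return the top ImageNet label and confidence for the photo at `path`."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    _, label, confidence = decode_predictions(model.predict(x), top=1)[0][0]
    return label.replace("_", " "), confidence

while True:
    button.wait_for_press()
    camera.capture("/tmp/snapshot.jpg")
    label, confidence = classify("/tmp/snapshot.jpg")
    # Speak the answer through the attached speaker using the espeak command-line tool
    subprocess.call(["espeak", f"I think I can see a {label}"])
```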

So now you’ve seen my “seeing eye” AI, what would you use TensorFlow for? Why not leave a comment and let me know…

 

Apple ARKit – mainstream Augmented Reality

Ever wandered through a portal into another dimension, or wondered what it would look like if you could get inside a CAD model or an anatomy simulation? This is the promise of Apple’s new ARKit technology for Augmented Reality, part of iOS11, the latest version of the operating system software that drives hundreds of millions of iPads and iPhones.


Turning the IET into a Mario level using Apple ARKit

Augmented Reality has been around for years, but in quite a limited way – point your phone/tablet camera at a picture that has special markers on it, and the AR app will typically do something like activate a video or show you a 3D model.

But anyone wanting to develop an AR app in the past has had to contend with a couple of big problems – firstly, the hardware in phones and tablets hasn’t quite been up to the job of real-time image processing and position tracking, and secondly, there hasn’t been a standard way of adding AR capability to an app.

With recent improvements in processor technology and more powerful graphics and AI co-processors shipping in our devices, the technology is now at a level where real-time position tracking is feasible. Apple are rumoured to be including a sensor similar to Google’s Project Tango device in the upcoming iPhone 8, which will support real-time depth sensing and occlusion. This means that your device will be able to tell where objects in the virtual world are in relation to objects in the real world – e.g. is there a person standing in front of a virtual object?

Apple and Google are also addressing the standardisation issue by adding AR capabilities to their standard development frameworks – through ARKit on Apple devices and the upcoming ARCore on Android devices. Apple have something of a lead here, having given developers access to ARKit as part of a preview of iOS11. This means that there are literally hundreds of developers who already know how to create ARKit apps. We can expect that there will be lots of exciting new AR apps appearing in the App Store shortly after iOS11 formally launches – most likely as part of the iPhone 8 launch announcement. If you’re a developer, you can find lots of demo / prototype ARKit apps on GitHub. [[ edit: this was written before the iPhone 8 / X launch! ]]

As part of the Jisc Digi Lab at this year’s Times Higher Education World Academic Summit I made a video that shows a couple of the demo apps that people have made, and gives you a little bit of an idea of how it will be used:

How might we see people using ARKit in research and education? Well, just imagine holding your phone up to find that the equipment around you in the STEM lab is all tagged with names, documentation, “reserve me” buttons and the like – maybe with a graphical status indicating whether you have had the health and safety induction to use the kit. Or imagine a prospective student visit where the would-be students can hold their phones up to see what happens in each building, with giant arrows appearing to direct them to the next activity, induction session, students’ union social and so on.

It’s easy to picture AR becoming widely used in navigation apps like Apple Maps and Google Maps – and for the technology to leap from screens we hold up in front of us to screens that we wear (glasses!). Here’s a video from Keiichi Matsuda that imagines just what the future might look like when Augmented Reality glasses have become the norm:

How will you use ARKit in research and education? Perhaps you already have plans? Leave a comment below to share your ideas.

Unboxing the Mycroft AI open source digital assistant


Mycroft open source AI

Mycroft AI is the product of a Kickstarter campaign by Joshua Montgomery, who back in 2015 conceived of a voice-activated digital assistant (like Apple’s Siri or Amazon Alexa) that was completely open source, built on top of an open hardware platform. Fast forward two years and $857,000 from crowdfunders and investors, and the first 1,000 units have just gone out to supporters around the world.

Watch me unbox the Mycroft AI Mark 1 “Advance Prototype” and put it through its paces:

Mycroft is really interesting for a variety of reasons:

  • Being open source software, you can see how the code works and tinker with it to make Mycroft do things that its creators never envisaged. This is a great way of learning to code, and understanding how to do speech recognition.
  • Mycroft capabilities, or ‘skills’, are typically written in the very accessible Python scripting language, and can easily be downloaded onto the device (a minimal sketch of what a skill looks like follows this list).
  • The Mark 1 itself is a clever combination of off-the-shelf hardware like the Raspberry Pi and Arduino, but you can also run the Mycroft software on an existing Raspberry Pi, or on a conventional desktop/laptop. If you do have the Mark 1, there is a wide range of hardware ports and interfacing options exposed on the back panel, including the full Raspberry Pi and Arduino GPIO pins, HDMI, USB and audio out.
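
To give a flavour of what a skill looks like, here is a generic sketch of a minimal Python skill: the skill name, dialogue text and intent file are made up for illustration, not one of the bundled Mycroft skills.

```python
# Minimal sketch of a Mycroft skill in Python (names are illustrative).
# It would live in its own skill directory alongside an intent file
# (e.g. vocab/en-us/hello.jisc.intent) containing phrases like "say hello to jisc".
from mycroft import MycroftSkill, intent_file_handler

class HelloJiscSkill(MycroftSkill):

    @intent_file_handler("hello.jisc.intent")
    def handle_hello_jisc(self, message):
        # Speak a response through the device's speaker
        self.speak("Hello from the Jisc Digi Lab")

def create_skill():
    # Mycroft's skill loader calls this factory function to instantiate the skill
    return HelloJiscSkill()
```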

So from an edtech perspective it’s easy to see Mycroft being used as a hook for teaching advanced hardware and software concepts and project work. And perhaps we’ll see DIY Mycroft kits turning up in maker families’ Christmas stockings before long too!

It’s also important to keep in mind that Mycroft’s developers see it as a white label digital assistant that (for example) organisations could customise for their own needs, retaining full control over the hardware and software – unlike the black box solutions from the tech giants. There could be quite a few use cases where this total control turns out to be a key requirement, e.g. from financial services to the defence sector.

I’ll have more to say about Mycroft soon, but in the meantime do leave a comment and let me know what you think about it, and how you might use it in research and education…

3D printing: Lessons and tips

Recently I started experimenting with the Ultimaker+ Extended 3D printer that we’ve brought into our office as part of the Digilab. I was very excited and enjoyed every little detail of it, from unpacking to assembly, to then making something with it. As it happened, a week after getting the printer I was participating in the NFSUN (Nordic Research Symposium on Science Education) conference to disseminate our work on the AR-Sci project. I thought it would be a great idea to show some of the 3D models we produced for the project in a new and different way. Throughout the project we produced a collection of interactive Augmented Reality experiences for science education, and 3D printing can also be a great way to visualise things we don’t normally see with the naked eye. 3D printing enables you to bring a digital 3D representation into the real world as a tangible object.

In this blog post I will talk about my first print using the Ultimaker+ Extended, how I prepared the 3D model to be printable, and some tips to help you fix yours easily.

I chose, somewhat randomly, to print a 3D model of a chloroplast, which you can see in the image below. This is a screenshot of the chloroplast model as part of the Augmented Reality experience.


Chloroplast in AR

Getting started with a new 3D printer means using new software: Cura. Cura is slicing software maintained by Ultimaker. It prepares your 3D model by slicing it into layers, creating a file known as G-code that speaks the language of the 3D printer.

However, some 3D models need to be prepared before you bring them into the printer software, as not all 3D models are ready to be printed straight away. There is a set of criteria that needs to be met to make your 3D model printable.

There are two approaches to fixing a 3D mesh: manually or automatically.

For a quick fix, I recommend software that enables you to validate your 3D model, fix it and convert it to the right format before sending it to the printer. MeshLab and Netfabb are the ones I have used in several situations to prepare my 3D models, and I found both of them very easy and quick to use.

MeshLab:

  1. The 3D model needs to be in .stl format: you can use MeshLab to convert your design files from most formats to .stl.
  2. Polygon reduction: it is advised that the files you print should be under 64 MB in size and have fewer than 1,000,000 polygon (triangle) faces. If your 3D model is larger than that, you can use MeshLab for polygon reduction.

Netfabb:

  • Netfabb can be very useful for checking your model accurately and analysing all of its features before you send it to a 3D printer.
  • I also found Netfabb very useful for scaling 3D models to real-world measurements.
  • The mesh needs to be closed, with no gaps or holes between faces and edges, and Netfabb can help you fix that using its automatic repair tool.

To fix the chloroplast 3D model, I decided to prepare it manually using 3D modelling software called Blender:

  • Single seamless mesh: each piece to be printed needs to be a single, seamless mesh. Looking at the model, it can be printed as two separate pieces that are eventually glued together, which also allows me to print the whole model in two different colours.
  • Wall thickness: the wall thickness of the model needs to be above the printer’s minimum. This was an issue with the outer piece of the model; I had to increase its wall thickness using the Solidify modifier, making sure that this didn’t affect or change the shape of the model.
  • The last steps were to look for holes in the model and then convert it to .stl. (A rough Blender Python sketch of these steps follows below.)
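
For anyone who prefers to script these fixes, here is a rough sketch of the same steps using Blender's Python API (bpy). The thickness value and file path are illustrative, and the exact operator names can vary between Blender versions, so treat this as a sketch rather than a recipe:

```python
# Rough Blender (bpy) sketch of the manual preparation steps described above:
# add a Solidify modifier for wall thickness, apply it, then export the mesh as STL.
# Run from Blender's scripting workspace with the model selected; values are illustrative.
import bpy

obj = bpy.context.active_object

# Increase wall thickness without changing the overall shape of the model
solidify = obj.modifiers.new(name="Solidify", type='SOLIDIFY')
solidify.thickness = 0.02          # assumed minimum printable wall thickness for this printer
bpy.ops.object.modifier_apply(modifier=solidify.name)

# Export the selected object as an STL file ready for the slicer
bpy.ops.export_mesh.stl(
    filepath="/tmp/chloroplast_outer.stl",
    use_selection=True,
)
```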

It is worth noting that Blender already has an add-on called 3D Print Toolbox, which you can enable from Blender’s preferences menu. This tool checks your model for issues such as intersecting faces, distorted faces, thin walls and overhangs before printing.

A final check of the model using the Mesh Analysis panel, which generates a heatmap of the problematic areas, is always a good idea.

Finally, below is the end result, which I am very proud of. What I like about 3D printing is that you can rapidly prototype a design, visualise it, and share it with other people who can test it or use it to stimulate a conversation around ideas or in facilitated lessons.


Chloroplast 3D print final result

VR & AR World 2016: takeaways and potential for the next generation learning environment

At the recent VR & AR World 2016, which took place at the ExCeL centre in London, I was really excited to explore the vibe and the experiences around cutting-edge technologies shared by exhibitors from 50 countries around the world. With established companies like Meta, HTC and Epson, and many more, showcasing their work in Augmented, Virtual and Mixed Reality, it was exciting to see their latest developments. And while there was still gimmick-led content, the majority aimed to show what added value these technologies can offer. Below I concentrate on their potential in the next generation learning environment, informed by my experience of visiting VR & AR World 2016.

  1. Sense of presence and the impact on the learner’s identity:

The latest Toyota C-HR VR experience was showcased at the exhibition: an experience in which you can digitally discover and personalise this specific vehicle before its release. It was the fact that I hadn’t yet tried getting into a car in VR that sparked my curiosity to try the Toyota experience. Throughout the experience I was able to walk around the car, open the doors, sit inside, and change colours and some of the settings inside the car. What astonished me, however, was the feedback I got from one of the event’s visitors watching the demo of the experience. He said: “Yes, it is definitely important to open the door to get out of the car.” I stopped for a second to think about that comment.

Sure enough, I felt I was actually in the car: I opened the door when I had finished experimenting with the car and wanted to get out. If you look at the chair in the picture below, it is obvious that I could have got off the chair without needing to open any door.

 


Toyota C-HR VR experience

It is funny, yet it is the same thing that makes VR really powerful, and it is one of the main reasons I am so passionate about VR and think it has great potential. The sense of presence that VR enables can make you psychologically feel that you are occupying a physical space within the virtual environment while you are physically located somewhere else. It is not only bringing us experiences we never thought we could have again, or enabling us to be in places that may otherwise be dangerous or costly to visit; it is actually enriching our own life experiences. If we think about how many opportunities VR can bring us, then, neurally speaking, it is contributing to shaping our identity and who we really are: “who we are depends on where we’ve been”.

This is great because, with VR, the experiences we can try are limitless, and they become more distinctive as they are personalised to how each of us interacts with them.

I believe this is going to make a tremendous impact on learners’ experiences and their understanding of the world, as they will be able to reach things and explore places far beyond their physical location. Couldn’t this be transformative in the next generation learning environment, overcoming the limitations of travelling to learn about specific places or particular periods of time, and furthermore enabling learners to gain different perspectives?

Toyota uses the HTC Vive in this demo. It is a favoured piece of VR tech and dominated the event floor thanks to its accuracy, speed and low latency compared to other VR headsets.

In an earlier blog post, Matthew Ramirez talked about the techniques and tricks that the Vive deploys to enhance immersion and the sense of presence in VR, and to overcome some of the problems seen with other headsets, such as nausea.

 

  2. Virtual Reality and Mixed Reality in Maintenance and Engineering:

If we take this example even further, Epson, Vuzix and Meta have been developing their own smart/mixed reality glasses for the automotive and construction industries. Colleges and vocational courses could benefit from these technologies the most, particularly in smoothing the transition between college, apprenticeships and real-life jobs. With VR, AR and MR, students could be better prepared for solving real-life problems, as they can develop many of the required skills while working with cutting-edge tech in safe and constructive learning spaces.

This in itself could empower learners, and particularly girls, to move into engineering and technology careers, where a lack of confidence has contributed to the shortage of skills required for STEM-related jobs, as reported by the Guardian and BBC News. Stimulating experiences that engage students in problem solving and challenging activities using VR and AR could help unlock learners’ creativity to come up with solutions in a flexible and safe environment. They are able to make mistakes and get feedback, which could increase girls’ confidence in their abilities and influence their career choices.

This is a video on using Microsoft HoloLens Mixed Reality in architecture, which I believe is revolutionary, as it will play a vital role in rethinking design, construction and engineering for the next generation of much-needed engineers and designers.

 

  3. Virtual Reality and Storytelling:

Another attraction for me on the event floor was more about the content, and somewhat simpler technology. The London Stereoscopic Company (LSC), which has a long history of publishing 3D stereoscopic images from Victorian times to the modern day, is now adopting VR to present its digital 3D stereoscopic content and films in a more exciting way, making them available to anyone with a smartphone. It brings historical periods, books, collections of original images and VR films to us using a small, low-cost VR kit, as you can see in the image.

I immediately thought that this is a great medium for storytelling, one that could make telling a story in a classroom more fun and flexible.


OWL VR KIT

You might argue that with this sort of simple kit you do not get the same high-end experience as with the more expensive VR headsets. However, I believe that for entry-level users, or even kids, it could be a great way to bring information that is otherwise only available as text in books or on cards to life through VR and digital 3D content: “When this content is viewed in stereo the scene leaps into 3-dimensional life”, as Brian May describes it. I enjoyed having access to the fascinating world of stereoscopy through the cards, as it made me feel like a child again.

This is the OWL VR Kit, a viewer for any 3D content and VR films; it comes as a box designed with high-quality focusable optics.

Kits like this, and like Google Cardboard, are allowing more people to at least have a first experience of VR, increasing the number of people who are savvy and educated about what VR could potentially offer them.

My takeaway from this example is that the technology itself cannot guarantee a fun and interesting experience on its own. What we need are more compelling stories, stories that trigger emotions and that sense of wonder we all had as children; these are the sorts of things that make VR valuable as a storytelling medium.

 

4. Haptic feedback in VR and what is next?  

At the event I tried, for the first time, a form of haptic feedback using a hand-held controller that allows you to feel a force while interacting with virtual objects on the screen. This SensAble Phantom haptics system is one of the devices that VR and AR companies are experimenting with to explore what kind of lifelike feedback a doctor could have when puncturing a patient’s skin. The system acts as a handle to virtual objects, providing stable force feedback for a single point in space.


Hand-held controller

I imagine this is going to be essential to any VR and AR experience. We all use our hands intuitively to reach out for things, to interact with things and people, and to feel things. In this example of using the controller in remote surgery training, haptic feedback contributed to creating an illusion that made trainees feel more present in the remote surgery room. This makes the experience of surgeons or trainee doctors more believable, enabling them to understand and perform tasks far more effectively and safely. For me, as haptic and tactile technologies advance and become integrated into more of these experiences, particularly when led by cognitive neuroscience research on how we interact with the world, the quality of education with VR becomes increasingly high. Medical students will then have great opportunities to experience and operate in more realistic surgery situations, safely and constructively. Haptic feedback could also be a way of better informing students in the decision-making process.

 


Surgery in VR with Haptic

The opportunities for using this in education, to perform tasks more efficiently and effectively, are great. Chris Chin of HTC Vive expects, in his interview, that education and medical experiences with VR are coming next in 2017 – will that come with advances in haptic technology? Shouldn’t we prepare learners for these new delivery methods, or at least equip the current learning environment to deliver and support these sorts of experiences?

Here are a few examples of our work at Jisc in Augmented Reality and Virtual Reality to immerse, engage and educate learners. If you are interested in embarking on your own project or need help, we’d love to share experiences with you.

 

Research and Development at SIGGRAPH 2016

An exciting part of the SIGGRAPH 2016 conference was the experiential area allowing attendees the opportunity to interact with a range of emerging technologies – from Virtual/ Augmented Reality to haptics and immersive realities. Below are my top five picks, although I could easily have doubled the list.

Automated Chair mover


Furniture that learns to move through vibration

An innovative way of changing a room layout, using small vibration bursts to reposition the pose and location of furniture. Controlled over WiFi – imagine the potential time savings in education and in classroom and lecture room setups!

Ratchair: furniture learns to move itself with vibration

Tatiana Parshakova, Minjoo Cho, Alvaro Cassinelli, Daniel Saakes

 


 

Redirected Walking for AR/VR


Redirected walking

By manipulating the user’s camera view on HMDs (Head-Mounted Displays), this application can redirect pedestrians to points of interest without them competing for space. No more fighting for the best view of the T. rex in museums!

Graphical manipulation of human’s walking direction with visual illusion

  • Akira Ishii, Ippei Suzuki, Shinji Sakamoto, Keita Kanai, Kazuki Takazawa, Hiraku Doi, Yoichi Ochiai

 


Mask User Interface for manipulation of Puppets


VR based manipulation of animatronics

Using depth-perception sensors to control body movements and a separate lip sensor (think Darth Vader’s mouthpiece) to manage mouth action, this VR setup brings a new dimension to animated puppetry.

Yadori: mask-type user interface for manipulation of puppets

  • Mose Sakashita, Keisuke Kawahara, Amy Koike, Kenta Suzuki, Ippei Suzuki, Yoichi Ochiai

 

 


Haptic suit


VR integrated Haptic feedback suit

Integrating localised haptic feedback into VR game experiences (object contact, explosions etc.), the Synesthesia Suit adds to the visceral experience by allowing the user to ‘feel’ the sound and music as if they run through the body.

Synesthesia suit: the full body immersive experience

Yukari Konishi, Nobuhisa Hanamitsu, Kouta Minamizawa, Ayahiko Sato, Tetsuya Mizuguchi


 

Motion Predictor

 


Motion predictor

Using complex physics algorithms, Laplacian Vision allows the user to better predict object trajectory information by displaying it in the user’s field of view.

Laplacian vision: augmenting motion prediction via optical see-through head-mounted displays and projectors

  • Yuta Itoh

Enabling a sense of presence in Virtual Reality

Virtual Reality has emerged from the ashes of the early 90s as a technology with the potential to revolutionise our everyday lives. However, some major challenges, both technical and aesthetic, still need to be overcome before this can happen. In a series of short blog posts I will examine the major flaws preventing VR from becoming a mainstream technology and propose possible solutions. Let us start with the technical restriction of replicating realistic movement and a sense of presence.

Locomotion

Currently, the majority of VR experiences and games offer the user a seated experience where movement is controlled with a gamepad. This can often lead to a disconnect between the user’s body and the movement viewed on the VR headset, which in turn can contribute to a lack of presence in the game space and, perhaps more importantly, to feelings of nausea. The HTC Vive looks to overcome this problem by using room-scale experiences that at least provide the user with the ability to walk around small areas. Multi-directional treadmills can also provide a realistic walking simulation, although they work by cancelling the user’s motion to mimic acceleration (equivalent to walking on slippery ice), so there is a mismatch as the body attempts to translate this new movement, through muscle memory, into the one it has been accustomed to for years (walking).

But what happens when you are in a VR experience that has a bigger footprint? This creates a major issue when exploring truly realistic VR on foot, such as large buildings or even cities. Teleporting or returning to a controller can provide a lightweight solution but totally shatters any immersion, and your mind becomes immediately disengaged. Redirected walking is a method used by developers to trick the mind into thinking the user is walking in a straight line when in fact they are moving along a slightly curved or different path in the real world. This allows up to a 26% gain or 14% loss in distance travelled, and up to a 49% gain or 20% loss in rotation (yaw), without the user detecting it. The body perceives movement in a certain direction but, without visual cues, is less accurate when it comes to measuring the distance or rotation travelled. So we are essentially hacking our brain and exploiting loopholes in our circuitry.

The paper Estimation of Detection Thresholds for Redirected Walking Techniques posits that a technique called curvature gain can allow the user to walk in a circle with a radius of roughly 22 metres in the real world while believing they are travelling in a straight line in the virtual world. This opens up huge opportunities for expanding virtual environments in ways that were previously impossible, while retaining the user’s sense of locomotion.
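
As a back-of-the-envelope illustration of those detection thresholds (this is my own toy sketch, not code from the paper), a redirected-walking controller essentially scales the user's real translation and rotation by gains kept inside the range that users tend not to notice:

```python
# Toy sketch of redirected walking gains, using the detection thresholds quoted above:
# distances can be scaled between roughly 0.86x and 1.26x, and rotations between
# roughly 0.80x and 1.49x, before users tend to notice. Illustrative only.
TRANSLATION_GAIN_RANGE = (0.86, 1.26)   # 14% loss .. 26% gain in distance
ROTATION_GAIN_RANGE = (0.80, 1.49)      # 20% loss .. 49% gain in yaw rotation

def clamp(value, low, high):
    return max(low, min(high, value))

def redirected_step(real_distance_m, real_rotation_deg,
                    desired_translation_gain, desired_rotation_gain):
    """Map one step of real movement into virtual movement, keeping the
    applied gains inside the (approximate) undetectable range."""
    t_gain = clamp(desired_translation_gain, *TRANSLATION_GAIN_RANGE)
    r_gain = clamp(desired_rotation_gain, *ROTATION_GAIN_RANGE)
    return real_distance_m * t_gain, real_rotation_deg * r_gain

# Example: steer the user by amplifying their head turn by 30%
# while leaving walked distance untouched.
virtual_distance, virtual_rotation = redirected_step(0.7, 15.0, 1.0, 1.3)
print(virtual_distance, virtual_rotation)
```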

 

Haptic response

Kevin Kelly, founding executive editor of Wired magazine, commented that “we are moving from an internet of information to an internet of experiences”. VR can also include other physical stimuli through passive haptic feedback, where physical objects such as a torch or a sword are linked to visual cues in the virtual world. These can be further enhanced by adding other sensory properties such as temperature or texture. However, this is not scalable, as a physical equivalent needs to be created for every virtual prop.

Remarkably, a method called redirected touching can provide a solution: a real object is placed in the physical environment and remapped in the virtual space to assume several different shapes. This is achieved by introducing a discrepancy in hand interaction between the real and the virtual, evident in the example below. Again, the visual reference tricks the user into believing that the physical object has many more sides than it actually does.

 

 

Taking this one stage further, The Void has set up situated installations (think Laser Quest with VR) ranging from ancient temples to futuristic alien worlds, where the physical environments are augmented in VR with digital equivalents. At one point you noticeably feel the change as you move from a research laboratory corridor indoors to a suspended walkway outdoors, feeling the breeze as you move along hundreds of feet above the ground – creating a sense of presence that is hard to imagine. The lack of photorealistic graphics does not detract from the user’s heightened sense of agency, facilitated by peripheral stimuli and the ability to interact with and touch all of the environments.

 

 

Of course, this particular setup is not scalable at the moment, but you start to understand how, by melding additional sensory elements with VR, immersion can be enhanced, leading to a more convincing proposition. VR is by no means the finished article yet, but the myriad speed bumps that block its path to mainstream adoption are slowly being eroded by the ability to employ mind hacks – innovative techniques that trick the user into a greater sense of presence.