Immersive tech in education survey

We’ve recently kicked off a Jisc project looking at how we can support our members at the UK’s universities and colleges to exploit the potential of immersive technologies like virtual and augmented reality (VR and AR) – find out more on our project page.

As part of this work, we’re doing a survey to find out how people are using these technologies right now, and what they’d like to do with them in the future. It should only take about ten minutes to complete the survey, and this will be a big help in understanding where to prioritise Jisc effort in this area.

Here’s a link to the survey form:

https://ji.sc/ar-vr-in-education

The deadline for responses is 5pm on Friday 12th July 2019.

We’d love to get as many responses as possible, so please do share the survey link with friends and colleagues. We’re looking for a personal view, so don’t feel that you need to represent your institution as a whole when responding.

We’ll write up the results and share them in a report later this year – look out for it on this site.

Tender opportunity – Jisc R&D consultancy framework

Jisc is a not-for-profit company that provides digital solutions for UK education and research.

We are currently tendering for consultancy services in a wide range of areas in support of our research and development activities:

  • Developer services
  • User-centred design
  • Augmented and virtual reality
  • Scholarly Communication, Information Management and Library consultancy
  • Data and analytics
  • Research and research data management
  • Teaching, Learning and the Student Experience
  • Data, Text Mining and data visualisation
  • Testing
  • Edtech start-up services

We are inviting expressions of interest in each of these areas through our online tender portal.

By exploring and developing new digital ideas, scaling them up and transitioning the best into full service, Jisc R&D supports our vision that the UK can be the most digitally advanced higher education, further education and research nation in the world.

Jisc’s research and development approach is described on our website; current projects are listed on our projects pages.

Jisc’s core staff do not possess all the expertise and capacity to deliver this wide portfolio, and our members and customers require our R&D work to happen at pace. We therefore wish to adopt a ‘core and flexi’ approach, with the ‘core’ Jisc staff complemented by a ‘flexi’ pool of defined expertise that can be brought in quickly to focus and scale up effort, and get our R&D projects over the line as quickly as possible. This procurement process will select that ‘flexi’ pool of expertise, and define associated contractual and commercial matters.

We expect to establish a framework agreement with multiple suppliers for an initial period of two years, with the potential to extend up to four years.

Please see below for further information, and note the deadline for expressions of interest is 23:59 on 9th June 2019.

Video wall at the Francis Crick Institute
Procurement Lots

Lot 1: Developer services

Jisc wishes to establish a framework agreement with bidders to deliver technical services that range from mobile applications to large web services and include scalable middleware.

Lot 2: User-centred design services

To provide compelling, valuable, and effective solutions to our members, we are seeking bidders to deliver UX, content and design expertise. We need great contractors with skills and experience in user-centred design to ensure our services meet our members’ and customers’ needs in efficient, engaging and usable ways.

Lot 3: Augmented reality (AR) and virtual reality (VR) services

Jisc wishes to establish a framework agreement with bidders to deliver services and support in relation to immersive technologies such as augmented and virtual reality (AR and VR), also sometimes referred to as mixed or extended reality (MR or XR).

As part of its R&D work, Jisc is prototyping and piloting a number of potential future services in this area with which we will require technical assistance, for example:

  • 3D scanning services
  • Creation and licensing of immersive content
  • Provision of interactive experiences
  • Provision of virtual classrooms
  • Training in the use of immersive technology
  • Provision of expert guidance and resources 

Lot 4: Scholarly Communication, Information Management and Library consultancy services

Jisc wishes to establish a framework agreement with bidders to commission consultancy support for the development and review of new forms of scholarly communication (including teaching and learning), information management, and related library technical solutions.

Lot 5: Data and analytics services

Jisc wishes to establish a framework agreement with bidders who are experts with a broad knowledge of analytics in both the HE and FE sectors.

Lot 6: Research and research data management services

Jisc wishes to establish a framework agreement with bidders to commission expert support in the use of technology in all aspects of the research lifecycle. Experts will need to understand the research process and the technical systems that support it. Experts should be familiar with current pain points for researchers, librarians, IT staff and research managers, the legal and ethical issues involved and future horizons.

Lot 7: Teaching, Learning and the Student Experience consultancy services

Jisc wishes to establish a framework agreement with bidders to commission expert support for research and development activities in the use of technology for learning, teaching, assessment, educational and staff development and the support of the overall student experience.

Lot 8: Data, Text Mining and data visualisation services

Jisc wishes to establish a framework agreement with bidders to commission consultancy services to work with experts in the use of text & data mining (TDM) and data visualisation to support research in higher and further education.

Lot 9: Testing Services

Jisc wishes to establish a framework agreement with bidders to deliver consultancy services relating to software testing. Experts will need a broad knowledge and experience of software testing across a range of software products.

Lot 10: EdTech start-up services

Jisc wishes to establish a framework agreement with bidders to commission EdTech start-up experts to support our EdTech activity, accelerating the growth of EdTech start-up companies. The suppliers will need to be able to provide mentoring and coaching to move EdTech start-up companies forward within an accelerator-type model.

Video wall at the Francis Crick Institute
Next steps

Each interested bidder is required to register its intention to submit a response to this ITT via the Jisc e-tendering opportunities portal at: https://tenders.jisc.ac.uk

Note: if your company is not already registered as a user of the e-tendering portal, you will first need to register. Once this registration is accepted by Jisc, a username and password will be issued, and these should be used for all future access to the portal.

Note also that Jisc will reject a bidder’s registration if there is already a registration on the portal for that bidder. Please remember that this initial registration is not a registration for a particular contract; it is only the registration of the bidder on the e-tendering portal. When the username and password are received, you will then need to log in and register interest in a specific contract.

When registering, it is recommended that a generic mailbox (such as sales@supplier) is set up, so that information from the system can be sent to more than one representative at the company.

A host of detailed user guides are available within the portal under the Help tab.

Provided that there is a sufficient number of competent candidates, Jisc will shortlist between 6 and 12 bidders per Lot to be invited to submit an ITT response. Subsequently, Jisc intends to award a maximum of 5 places to the highest-scoring bidders on each Lot of the final framework.

Please note once again that the deadline for expressions of interest is 23:59 on the 9th of June 2019. Also note that all queries relating to this opportunity must be raised via the portal.

How open world games can help inform future learning

Screenshot from Red Dead Redemption 2 by Rockstar Games

In the 1990s the internet was seen as an oracle for quickly gaining knowledge, a pathway to finding information. In his book The Inevitable – Understanding the 12 Technological Forces That Will Shape Our Future, Kevin Kelly suggests that in the near future it will evolve into a library of experiences that users can connect with on an emotive and personal level. VR and AR are immersive technologies that can potentially harness this experience-led approach and take users into a historical or hypothetical event on a whole new level.

Rockstar Games are often considered the benchmark for open world games, and the release of Red Dead Redemption 2 does nothing to change that widely accepted accolade. If we put aside the gameplay and the diverse story which binds this turn-of-the-century adventure together, it is the environments which truly captivate and breathe life into the game, creating an intensely visceral experience. To me, the characters merely play supporting roles to the main protagonist: the world that they inhabit. This is the hook that pulls you into the old west, and it achieves this by referring back to the art movements that frame the era’s visual backstory.

On a recent visit to Manchester Art Gallery I spent a lot of time fascinated by the way that 19th-century landscape painters used shadows, colour and lighting to evoke scale, emotion and story. In constructing the world that the protagonist, Arthur Morgan, lives in, Rockstar seem to have employed similar artistic ploys to immerse the player.

When the West with Evening Glows by Joseph Farquharson, 1901

Screenshot from Red Dead Redemption 2 by Rockstar Games

Although the photorealistic aspect of the game is constantly referred to, in my opinion it is the reflection of contemporary art of that era that frames the experience far more.

A Spate in the Highlands by Peter Graham, 1866

Screenshot from Red Dead Redemption 2 by Rockstar Games

Further in-depth detail around the visual inspiration for Red Dead Redemption 2 can be found in this interesting article https://www.polygon.com/red-dead-redemption/2018/10/26/18024982/red-dead-redemption-2-art-inspiration-landscape-paintings

Returning to the idea of experiences becoming increasingly prominent through new immersive technology such as VR, I feel that environments can go a long way towards creating a level of user belief if they are constructed using familiar cultural mirrors such as art. Without narrative and storytelling, photorealism, so often the measure of immersion, cannot single-handedly hook the user in and can appear sterile and two-dimensional.

In education, when developing immersive content, the human condition is often neglected in favour of a need to transfer knowledge. In reality, we as humans are affected far more by personal interactions and small nuances. Imagine sharing the intimate experience of Howard Carter discovering King Tutankhamun’s tomb, or the poignant moment in the Antarctic when Captain Oates sacrifices himself so that others in Captain Scott’s expedition can live.

My hope, as it becomes easier to recreate these experiences for education, is that we carefully consider how narratives can bind the personal and the world together to heighten learning, rather than merely striving for hyper-realism.

Blockchain – Beyond the hype

In the last few years blockchain has been a buzzword in technology, and companies have rushed to launch their own blockchain solutions in much the same way that apps exploded with the advent of smartphones in the mid-2000s. For many people outside the technology space, blockchain is synonymous only with Bitcoin and wild price volatility (as we saw before Christmas 2017, when it reached almost $20k, an unprecedented rise from just under $1k in 12 months). In my opinion, financial value is the least interesting part of the technology: it has the potential to disrupt many traditional institutions, such as banking and medicine, in which the general public have lost faith in recent times.

In short, a blockchain is a ledger of records organised in ‘blocks’ that are linked together by cryptographic validation. It is decentralised and distributed, which means that this validation occurs across multiple nodes (participants), in most cases through proof of work (PoW): participants must complete a computationally expensive problem before new blocks are accepted, which deters cyber attacks such as denial of service (DDoS).
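
To make the chaining and proof-of-work ideas concrete, here is a toy sketch in Python. It is purely illustrative, not any production blockchain: each block stores the hash of the previous block, and a nonce has to be found before the block’s own hash meets a difficulty target.

```python
# Toy illustration only: a hash-linked ledger with a simple proof-of-work.
# Real blockchains add peer-to-peer networking, consensus across many nodes
# and transaction validation; this sketch just shows the chaining idea.
import hashlib
import json
import time

DIFFICULTY = 4  # number of leading zeros required in each block hash


def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def mine_block(data, previous_hash):
    block = {"timestamp": time.time(), "data": data,
             "previous_hash": previous_hash, "nonce": 0}
    # Proof of work: keep changing the nonce until the hash meets the target.
    while not block_hash(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    return block


chain = [mine_block("genesis", "0" * 64)]
chain.append(mine_block("Alice pays Bob 5", block_hash(chain[-1])))
chain.append(mine_block("Bob pays Carol 2", block_hash(chain[-1])))

# Each block commits to the one before it, so tampering with an earlier
# block invalidates every later link in the chain.
for i, block in enumerate(chain):
    print(i, block_hash(block)[:16], "<-", block["previous_hash"][:16])
```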

The real issue at present is the lack of viable use cases being implemented, coupled with the absence of any real regulation; as a result, a lot of ICOs (Initial Coin Offerings) have been used to raise quick capital with a limited roadmap or long-term vision. However, there are a few projects that offer solutions to existing problems that are not being addressed by conventional means.

WaBi

https://www.wacoin.io 

WaBi is a platform focused on tracking consumer products, using the blockchain to store and communicate information about their safety and authenticity. Using RFID and anti-tamper labels to track data, its aim is to reduce counterfeit products, such as medications and food, in the supply chain. WaBi tokens are used as loyalty points to incentivise the scanning of anti-counterfeit labels on Walimai-protected products. It is primarily focused on the Chinese market, where there have been a number of recent scandals around the authenticity of baby formula, most notably the 2008 baby formula scandal (http://news.bbc.co.uk/1/hi/7720404.stm) that was reported to have affected over 300,000 infants. As part of the Walimai company, WaBi already has a market and an immediate use case.

Aion

https://aion.network 

Given the plethora of blockchain solutions out there, it is only natural that there would need to be a way of connecting them and transferring data and logic (or value and smart contracts) across multiple networks, which may otherwise become isolated silos. Aion’s mission is to provide a scalable and interconnected solution, rather as Cisco connected disparate networks in the 1990s. This could potentially lead to mass adoption, accelerating the development of DApps (decentralised applications) that create bridges from one blockchain protocol to another. Put simply, it will develop a blockchain protocol that acts as a “router” connecting these networks. Validation of blocks and transactions will eventually use Proof of Intelligence (PoI), which, unlike PoW, requires a unique puzzle to be solved using the PoI algorithm. The intent is to motivate the creation of AI-specific or specialised hardware that could be used for machine learning and neural network training in the future.

Medicalchain

https://medicalchain.com

Even within the NHS, there are so many disparate hospital trusts and groups that sharing patient data using a generic Electronic Health Record (EHR) is very problematic. This can lead to unnecessary delays in referrals and diagnoses, reducing the effectiveness of medical practitioners who have limited and sometimes inaccurate data at their disposal. Medicalchain uses blockchain technology to securely store health records and maintain a single version of the truth. Different organisations, such as doctors, hospitals, laboratories, pharmacists and health insurers, can request permission to access a patient’s record to serve their purpose and record transactions on the distributed ledger. Ultimately it is the patient who has control over their own health record.

Medicalchain provides solutions to today’s health record problems. The platform is built to securely store and share electronic health records, reducing the time taken to access potentially lifesaving patient data. In the first instance it is providing a telemedicine solution, so that patients will be able to have online medical consultations secure in the knowledge that their records will not be compromised. This is in response to multiple high-profile patient data breaches in the recent past – https://www.telegraph.co.uk/news/2017/03/17/security-breach-fears-26-million-nhs-patients/

Medicalchain has recently partnered with The Groves, an NHS practice consisting of four GP surgeries supporting over 30,000 registered patients and 1,000 private patient families, with a system launch planned for early July.

New technology is only worthwhile if it improves or iterates on what we already have, and this is especially true of blockchain. Simply duplicating existing systems on a different platform will not encourage innovation or lead to ubiquity. There has to be a unique reason, or a problem that traditional methods cannot solve, for it to be a worthwhile investment of money and time. The examples above show that there is certainly potential for blockchain technology to be a game changer, but in my opinion there are still too many projects that can be classed as vapourware: all style and hype but no discernible product. While these exist, they will distract newcomers from the benefits and slow down mainstream adoption. Even so, I still believe the winners emerging from this space can revolutionise and transform existing systems, like banking, that we have for so long treated as permanent fixtures of modern society.

Further Reading

Jisc Futurist Martin Hamilton has written a horizon-scanning report on blockchain, covering its potential impact on research and education, which is available at https://www.jisc.ac.uk/reports/blockchain-in-research-and-education

360-degree cameras – bringing VR to the mainstream and educators

Last year was a great year for everything AR, VR and 360 filming – all the big names, from Apple and Facebook to Samsung, contributed to a breakthrough that pushed immersive experiences into people’s hands. From Apple’s augmented reality technology ARKit and its rival, Google’s ARCore, to the latest Samsung Gear 360 (2017) camera, no longer limited to Android users, we can not only enjoy immersive experiences but also quickly create our own with little more than a mobile phone.

We have recently received the Samsung Gear 360 (2017) camera to join the Jisc Digi Lab kit. So, what is a 360 camera and what can we create with it?


I am starting a series of blog posts on my recent experimentation with the Samsung Gear 360 to give some insight into the technology, how to use it, and its applications in education.

360 cameras have two or more lenses attached to each other, and they take 360-degree spherical images and videos that can be monoscopic or stereoscopic. Because Facebook, YouTube and Vimeo now support 360-degree content, you can view and share any 360 image or video in a web browser (just pan around with your mouse cursor) or on your mobile phone (just rotate the phone around to look about you).

This is a new viewing paradigm: for well over 100 years, we have only looked into screens limited in their periphery.

“With 360-degree cameras, the field of vision is opened up to a dimension we’ve never had to deal with before,” as John Bucher says in his book Storytelling for Virtual Reality.

You may have seen one of your Facebook friends streaming their birthday party live as a 360 video, or live streaming a concert you wished you could attend, bringing an extra level of immersion through your mobile device.

Technologies like this blur the boundaries between us and other people, spaces and cultures. Using a virtual reality headset such as Google Cardboard can not only transport you into these places and cultures but also allow you to live vicariously through another person’s eyes.

Try this immersive 360 video of the Edinburgh Fringe Festival.


An immersive 360 video of the Edinburgh Festivals experience.

This new form factor has brought massive opportunities to content creators – with the Samsung Gear 360 you can shoot a 360 video that could become content for a virtual reality experience.

For some experiences you don’t need to create a whole CGI environment to build a 3D world, and 360 experiences do not necessarily require multi-headed professional cameras with a significant amount of stitching work afterwards. The mobile app accompanying the camera stitches images and videos automatically, so you can load and share them immediately – unlike other cameras that require significant editing before the content can be shared.

This is great news for teachers and those who have limited time and money to spend on creating and learning how to develop 360-degree content.

360 cameras, and in particular the Samsung Gear 360, have been used by journalists at the BBC and The New York Times to capture and tell immersive stories. If you are a teacher and you’d like to explore this area further, you could put some content together in a very short time. In addition, there are tools that can add interactivity to the experience, such as Thinglink. The University of Southampton is experimenting with this tool to create interactive field trips – adding annotations to any 360 video using Thinglink could enable you to build a more guided, engaging and meaningful learning experience.

Furthermore, teachers or students wanting to explore 360-degree experiences for teaching and learning can find plenty of videos online produced by the likes of NASA and the BBC. Say you want to make a particular science class more interesting and engaging for students: go to the NASA360 Facebook page and you will be able to take your students to the red planet through a classroom activity that is much more fun.

While 360 videos are very good at giving users an experience of a place that is otherwise inaccessible, they can also be used to prepare your students for future employment. At medical schools, 360 videos are being used to prepare students for emergency rooms and emergency situations. This emotional and cognitive readiness can play a significant role in lowering stress and anxiety levels among students and even junior doctors.

Filmmakers and journalists have found 360 video a great medium for telling immersive stories in new ways that could change people’s behaviours. Journalists have realised that 360 video can be a game changer for storytellers, as a new format of story emerges with this technology. Journalists and storytellers are now able to give roles to viewers – what is it like to be an autistic teenager? The Party, the Guardian’s latest virtual reality 360 film, offers a glimpse into how autistic people feel and cope with a distressing situation such as a social event. Through this experience you enter the world of an autistic person, and then the emotional aspect kicks in. I believe this is a powerful medium that enables shared experiences, enhancing empathy and encouraging people to act and change their behaviours or reactions towards life situations and issues. The emotional journey of the audience through the experience is crucial in communicating ideas about mental health, bullying, racism, and even global issues such as climate change.

Virtual Bodyworks and the Digital Catapult are using their research findings in real-life applications to show that

“powerful changes can be induced in people who have these experiences, even to the extent of decreasing feelings like racism or understanding the consequences of your own actions towards others.”

360 filming and immersive storytelling are being taught as part of university journalism programmes, for example at Stanford University in the US, as they are believed to impact the audience in ways not possible with other journalism mediums.

While we talk about “immersive” technology, which is definitely something I am very passionate about, I still think that it is not the technology itself which is innovative, but the stories and ideas that these technologies enable us to realise.

Next up: a blog post on shooting and filming with the Samsung Gear 360 camera.

For now, join me at our Digifest conference next week for a show and tell session on 360 cameras and filming. I will be in the Digilab area, so if you are around, please stop by.

“Alexa, ask Jysk” – Hacking new Alexa skills for Amazon Echo

An estimated 30 million smart speakers have been sold in the United States alone, with devices like Google Home and Amazon Echo (“Alexa”) nestling in the corner of many living rooms, kitchens and yes – even bedrooms. Amazon are betting big that you’ll want one in every room, even selling the Echo Dot in a six pack and twelve pack.

The digital assistants in these smart speakers seem like something from a science fiction movie – you can ask Alexa everything from general knowledge questions to very specific things like getting a weather forecast or checking to see if there are likely to be any delays on your commute into work. But how do they really work, and how might you go about teaching a digital assistant like Alexa some new tricks?


Amazon Echo – photo CC BY-NC-ND Flickr user michaeljzealot

In this post I’ll look at how to create a new “skill” for Alexa, and hopefully demystify the technology a little bit.

At Jisc we operate dozens of services, for around 18 million people – from teachers and learners to researchers and administrators. Why log a call about a product or service when you could just ask Alexa? Maybe you could get your question answered there and then, from the information that we already have in our systems?

I thought it might be fun to work up a few practical examples, for instance:

  • I’m a network manager or IT director. What’s the status of the Janet network – are there any connections down? Our Janet status page and Netsight service have this info
  • I’m a researcher writing a new paper. What’s this journal’s open access policy? Our SHERPA services let you conveniently find out about publisher and funder OA policies
  • I’m an administrator reviewing a grant application before submission. Do we really need to build a hyperbaric chamber, or is there another institution nearby that has one? Our equipment.data service lets you find kit that institutions are sharing with each other and industry

What would that look (and sound) like? Here’s a short video that my daughter and I made to demo our prototype Alexa skill for Jisc:

The Alexa digital assistant is the Amazon product that underlies all of the Echo products, and we are also starting to see Alexa appear in third party products like the recently announced Sonos One speakers. If you don’t like saying Alexa, or perhaps someone in your house has a similar name, you can call it computer instead – very Star Trek!

Amazon provide developers with a set of tools for building new Alexa applications. I’ll give you a quick overview here, but as you can imagine there is quite a lot of detail for those who want to take a deep dive into all things Alexa and Echo related.

First off, you have to tell Amazon what your new Alexa skill will be called – it needs to have a distinctive name because there are already over 15,000 skills out there. It turned out that Jisc doesn’t work as our skill’s name, and I had to resort to a phonetic spelling of our company’s name instead – Jysk. Once you find the right word or words to invoke the skill, people will be able to say Alexa, ask Jisc – or rather, Jysk!


Defining the Jisc (“Jysk”) Alexa skill

Secondly, you need to tell Amazon what kinds of questions people will be asking of your skill. This is where the magic and mystery of Alexa starts to unravel a little. It turns out that you have to be pretty precise about the wording of the phrases that you want Alexa to respond to, although there is a little wiggle room. For instance, if we tell Alexa to respond to questions about janet network status, it will also recognise that status of the janet network is the same question worded slightly differently.


Sample utterances for equipment.data search

Thirdly, we need to tell Alexa how to find out the answer to the question. If it’s a simple question like janet network status, then this isn’t actually too hard either – we just need a place that we can go to for the requested info. And if it’s already available on the web somewhere, then we can even copy the info off the webpage without having to set up some kind of complicated database connection or Application Programming Interface (API).
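
As a rough illustration of that idea (this is not code from the post, and the URL and CSS selector below are hypothetical placeholders rather than the real Janet status page markup), a few lines of Python can pull the status text straight off a web page:

```python
# Illustrative sketch only: scraping status text from an existing web page,
# so no dedicated API or database connection is needed.
import requests
from bs4 import BeautifulSoup


def janet_status_summary():
    page = requests.get("https://example.org/janet-status")  # placeholder URL
    soup = BeautifulSoup(page.text, "html.parser")
    # Assume current issues are listed in elements with a "status-item" class
    issues = [li.get_text(strip=True) for li in soup.select(".status-item")]
    if not issues:
        return "The Janet network is running normally."
    return "Current Janet network issues: " + "; ".join(issues)


print(janet_status_summary())
```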


Slot values for equipment.data search

If our question has parameters, things do get a bit trickier – and this is where the final bit of mystique evaporates. Alexa doesn’t and can’t know all of the possible journal names or pieces of equipment that we might want to ask it about. Instead, when the skill is created, we tell it what the possible parameters are for the question. Amazon call these “slots”. When we’re asking about journal names we might include Nature, Computer Networks and so on in our slots. When we’re asking about equipment, our slots might contain mass spectrometer, spectroscope, hyperbaric chamber, and so on.

So where does Alexa go to get the answers to these questions? The answer is that it makes a simple HTTP request to a web server somewhere. This can be any server, but Amazon are quite keen for you to use their new Lambda system, which lets you run code on demand without the overheads of running (securing, patching etc…) a regular server. Lambda is a whole story in itself, and for demo purposes I’ve simply pointed Alexa at an existing Jisc test server.

What does the code look like to process a request from Alexa? Pretty simple, actually. Here’s the actual code that I use to make the Jisc (Jysk!) Alexa skill work…


Alexa PHP sample code for the Jysk intent

Let’s spend a moment unpacking this – we’re using the Amazon Alexa PHP Library to process the incoming request. This makes an Alexa request object that contains the question and (if appropriate) the slot that Amazon think we were asking about. We can then decide what to do with the request. For the sample Jisc skill we fetch the Janet network status or journal policy information from a file that has already been populated separately, and for the equipment database lookup we go off to run an external program. Any external dependencies have to be pretty seamless, otherwise the user will be left waiting for ages wondering what is going on.
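
The screenshot above shows the real PHP handler; purely as a sketch of the same request/response flow, here is an equivalent outline in Python using Flask. The intent and slot names are illustrative stand-ins, not the actual Jysk skill definitions.

```python
# Sketch of an Alexa skill back end in Python (the post's real handler is PHP).
# Intent and slot names below are illustrative, not the actual Jysk skill.
from flask import Flask, jsonify, request

app = Flask(__name__)


def speak(text):
    # Wrap plain text in the standard Alexa response envelope.
    return jsonify({
        "version": "1.0",
        "response": {"outputSpeech": {"type": "PlainText", "text": text}},
    })


@app.route("/alexa", methods=["POST"])
def alexa_webhook():
    body = request.get_json()
    req = body["request"]
    if req["type"] != "IntentRequest":            # e.g. the initial LaunchRequest
        return speak("Hello! Ask me about Jisc services.")

    intent = req["intent"]["name"]
    if intent == "JanetStatusIntent":             # hypothetical intent name
        return speak("All Janet network connections are up.")
    if intent == "EquipmentIntent":               # hypothetical intent with a slot
        kit = req["intent"]["slots"]["Equipment"]["value"]
        return speak(f"Searching equipment dot data for {kit}.")
    return speak("Sorry, I don't know how to help with that yet.")


if __name__ == "__main__":
    app.run(port=5000)
```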

It’s important to note here that you don’t have to release your work-in-progress Alexa skill to the world until you are ready – in the Alexa console you can specify that the skill only works on Alexa devices linked to your own Amazon account, which is probably best for testing. You can also simulate interaction with an end user to test your back end code independently of Alexa’s speech recognition.

Sometimes it’s easier to see things rather than read about them, so I’ve made a short video that walks you through the Alexa developer console and shows you how this all fits together:

So now you know how to make your own Alexa skills. What will you make? Why not leave a comment and let me know!

AI can be DIY – a Raspberry Pi powered “seeing eye”

Scarcely a day goes by right now without a breathless newspaper headline about how artificial intelligence (AI) is going to turn us all into superhumans, if it doesn’t end up replacing us first. But what do we really mean by AI, and what could we do with it? In this post I’ll take a look at the state of the art, and how you could build your own Do-It-Yourself “seeing eye” AI using a cheap Raspberry Pi computer and some free software from Google called TensorFlow.

If you want to have a go at doing this yourself, I’m following a brilliant step by step guide produced by Libby Miller from BBC R&D. I should also note that this is possible because of Sam Abrahams, who got TensorFlow working on the Raspberry Pi, and also literally wrote the book on TensorFlow.

Raspberry Pi logo screen printed onto motherboard

AI right now is mainly focussed on pattern recognition – in still and moving images, but also in sounds (recognising words and sentences) and text (this text is written in English). A good example would be the Raspberry Pi logo in the picture above. Even though it’s a little blurred and we can’t see the whole thing, most people would recognise that the picture includes some kind of berry. People who are familiar with the diminutive low-cost computer would be able to identify the Raspberry Pi logo almost instantly – and the circuit board background might help to jog their memory.

While we talk about “intelligence”, the truth is that this pattern recognition is pretty dumb. The intelligence, if there is any, is supplied by a human being adding some rules that tell the computer what patterns to look out for and what to do when it matches a pattern. So let’s try a little experiment – we’ll attach a camera and a speaker to our Raspberry Pi, teach it to recognise the objects that the camera sees, and have it tell us what it’s looking at. This is a very slow and clunky, low-tech version of the OrCam, a new invention helping blind and partially sighted people to live independently.


Our Raspberry Pi powered “seeing eye” AI

The Raspberry Pi uses very little electricity, so you can actually run it off a battery, although it’s not as portable or as sleek as the OrCam! And rather than a speaker, you could simply plug a pair of headphones in – but the speaker makes my demo video work better. I used a Raspberry Pi model 3 (£25) and an official Raspberry Pi camera (£29). If you’re wondering what the wires are for, this is my cheap and cheerful substitute for a shutter release button for the camera, using the Raspberry Pi’s General Purpose Input Output (GPIO) connector. GPIO makes it easy to connect all kinds of hardware and expansion boards to your Raspberry Pi.

So that’s the hardware – what about the software? That’s the real brains of our AI…

Google’s TensorFlow is an open source machine learning system. Machine learning is the technology that underpins most modern AI systems, and it’s responsible for the pattern recognition I was talking about just now. Google took the bold step of not just making TensorFlow freely available, but also giving everyone access to the source code of the software by making it “open source”. This means that developers all over the world can (and do) enhance it and share their changes.

The catch with machine learning is that you need to feed your AI lots of example data before it’s able to successfully carry out that pattern recognition I was talking about. Imagine that you are working with a self-driving car – before it can be reasonably sure what a cat running out in front of the car looks like, the AI will need training. You would typically do this by showing it lots of pictures of cats running out in front of cars, maybe gathered during your human driver assisted test runs. You’d also show it lots of pictures of other things that it might encounter which aren’t cats, and tell it which pictures are the ones with cats in. Under the hood, TensorFlow builds a “neural network” which is a crude simulation of the way that our own brains work.

So let’s give our Raspberry Pi and TensorFlow powered AI a spin – watch my video below:

Now for a confession – I didn’t actually sit down for hours teaching it myself. Instead I used a ready-made TensorFlow model that Google have trained using ImageNet, a free database of over 14 million images. It would take a long time to build this model on the Raspberry Pi itself, because it isn’t very powerful. If you wanted to create a complex model of your own and don’t have access to a supercomputer, you can rent computers from the likes of Google, Microsoft and Amazon to do the work for you.
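
For a feel of what the software side involves, here is a rough sketch of the capture-classify-speak loop in Python. It is not the code from Libby Miller’s guide (which used an older TensorFlow release); it assumes a Pi with the picamera library, a recent TensorFlow with Keras bundled, and the espeak speech synthesiser installed.

```python
# Rough sketch of the capture-classify-speak loop, not the original guide's code.
# Assumes a Raspberry Pi with the picamera library, a recent TensorFlow with
# Keras bundled, and the espeak command-line speech synthesiser installed.
import subprocess

import numpy as np
import tensorflow as tf
from picamera import PiCamera

model = tf.keras.applications.MobileNetV2(weights="imagenet")


def classify(path):
    img = tf.keras.preprocessing.image.load_img(path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x)
    # decode_predictions returns (class_id, label, score) tuples for the top hits
    _, label, score = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0]
    return label, score


camera = PiCamera()
camera.capture("snapshot.jpg")                     # take a still photo
label, score = classify("snapshot.jpg")
print(f"{label}: {score:.2f}")
subprocess.run(["espeak", f"I think I can see a {label.replace('_', ' ')}"])
```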

So now you’ve seen my “seeing eye” AI, what would you use TensorFlow for? Why not leave a comment and let me know…

Apple ARKit – mainstream Augmented Reality

Ever wandered through a portal into another dimension, or wondered what it would look like if you could get inside a CAD model or an anatomy simulation? This is the promise of Apple’s new ARKit technology for Augmented Reality, part of iOS11, the latest version of the operating system software that drives hundreds of millions of iPads and iPhones.


Turning the IET into a Mario level using Apple ARKit

Augmented Reality has been around for years, but in quite a limited way – point your phone/tablet camera at a picture that has special markers on it, and the AR app will typically do something like activate a video or show you a 3D model.

But anyone wanting to develop an AR app in the past has had to contend with a couple of big problems – firstly, the hardware in phones and tablets hasn’t quite been up to the job of real-time image processing and position tracking, and secondly, there hasn’t been a standard way of adding AR capability to an app.

With recent improvements in processor technology and more powerful graphics and AI co-processors being shipped on our devices, the technology is now at a level where real time position tracking is feasible. Apple are rumoured to be including a sensor similar to Google’s Project Tango device on the upcoming iPhone 8, which will support real time depth sensing and occlusion. This means that your device will be able to tell where objects in the virtual world are in relation to objects in the real world – e.g. is there a person stood in front of a virtual object?

Apple and Google are also addressing the standardisation issue by adding AR capabilities to their standard development frameworks – through ARKit on Apple devices and the upcoming ARCore on Android devices. Apple have something of a lead here, having given developers access to ARKit as part of a preview of iOS11. This means that there are literally hundreds of developers who already know how to create ARKit apps. We can expect that there will be lots of exciting new AR apps appearing in the App Store shortly after iOS11 formally launches – most likely as part of the iPhone 8 launch announcement. If you’re a developer, you can find lots of demo / prototype ARKit apps on GitHub. [[ edit: this was written before the iPhone 8 / X launch! ]]

As part of the Jisc Digi Lab at this year’s Times Higher Education World Academic Summit I made a video that shows a couple of the demo apps that people have made, and gives you a little bit of an idea of how it will be used:

How can we see people using ARKit in research and education? Well, just imagine holding your phone up to find that the equipment around you in the STEM lab is all tagged with names, documentation, “reserve me” buttons and the like – maybe with a graphical status indicating whether you have had the health and safety induction needed to use the kit. Or imagine a prospective student visit where the would-be students can hold their phones up to see what happens in each building, with giant arrows appearing to direct them to the next activity, induction session, students’ union social and so on.

It’s easy to picture AR becoming widely used in navigation apps like Apple Maps and Google Maps – and for the technology to leap from screens we hold up in front of us to screens that we wear (glasses!). Here’s a video from Keiichi Matsuda that imagines just what the future might look like when Augmented Reality glasses have become the norm:

How will you use ARKit in research and education? Perhaps you already have plans? Leave a comment below to share your ideas.

Unboxing the Mycroft AI open source digital assistant


Mycroft open source AI

Mycroft AI is the product of a Kickstarter campaign from Joshua Montgomery, who back in 2015 conceived of a voice-activated digital assistant (like Apple’s Siri or Amazon Alexa) that was completely open source, built on top of an open hardware platform. Fast forward two years and $857,000 from crowdfunders and investors, and the first 1,000 units have just gone out to supporters around the world.

Watch me unbox the Mycroft AI Mark 1 “Advance Prototype” and put it through its paces:

Mycroft is really interesting for a variety of reasons:

  • Being open source software, you can see how the code works and tinker with it to make Mycroft do things that its creators never envisaged. This is a great way of learning to code, and understanding how to do speech recognition.
  • Mycroft capabilities, or ‘skills’, are typically written in the very accessible Python scripting language, and can easily be downloaded onto the device (see the sketch after this list).
  • The Mark 1 itself is a clever combination of off-the-shelf hardware like the Raspberry Pi and Arduino, but you can also run the Mycroft software on your existing Raspberry Pi, or on a conventional desktop or laptop. If you do have the Mark 1, there is a wide range of hardware ports and interfacing options exposed on the back panel, including the full Raspberry Pi and Arduino GPIO pins, HDMI, USB and audio out.
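
As a flavour of what a Python skill looks like, here is a minimal sketch based on the public mycroft-core skill API; the skill class and intent file names are made up for illustration.

```python
# Minimal sketch of a Mycroft skill, based on the public mycroft-core skill API.
# The skill name and intent file name are made up for illustration.
from mycroft import MycroftSkill, intent_file_handler


class HelloJiscSkill(MycroftSkill):

    @intent_file_handler("hello.jisc.intent")   # matching phrases live in this intent file
    def handle_hello_jisc(self, message):
        self.speak("Hello from your open source digital assistant!")


def create_skill():
    # mycroft-core calls this factory function when it loads the skill
    return HelloJiscSkill()
```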

So from an edtech perspective it’s easy to see Mycroft being used as a hook for teaching advanced hardware and software concepts and project work. And perhaps we’ll see DIY Mycroft kits turning up in maker families’ Christmas stockings before long too!

It’s also important to keep in mind that Mycroft’s developers see it as a white label digital assistant that (for example) organisations could customise for their own needs, retaining full control over the hardware and software – unlike the black box solutions from the tech giants. There could be quite a few use cases where this total control turns out to be a key requirement, e.g. from financial services to the defence sector.

I’ll have more to say about Mycroft soon, but in the meantime do leave a comment and let me know what you think about it, and how you might use it in research and education…

3D printing: Lessons and tips

Recently I started experimenting with the Ultimaker+ extended 3D printer that we’ve got in our office as part of the Digilab. I was very excited and enjoyed every little detail of it, from unpacking to assembly to making something with it. As it happened, a week after getting the printer I was participating in the NFSUN (Nordic Research Symposium on Science Education) conference to disseminate our work on the AR-Sci project, and I thought it would be a great idea to show some of the 3D models we produced for the project in a new and different way. Throughout the project we produced a collection of interactive augmented reality experiences for science education, and 3D printing can also be a great way to visualise things we don’t normally see with the naked eye. 3D printing enables you to bring a digital 3D representation into the real world as a tangible object.

In this blog post I will talk about my first print using the Ultimaker+ extended, how I prepared the 3D model to be printable, and some tips to help you fix yours easily.

I chose, somewhat randomly, to print a 3D model of a chloroplast, which you can see in the image below. This is a screenshot of the chloroplast model as part of the augmented reality experience.


Chloroplast in AR

Getting started with a new 3D printer means using new software: Cura. Cura is slicing software maintained by Ultimaker. It prepares your 3D model by slicing it into layers to create a file known as G-code, which speaks the language of the 3D printer.

However, some 3D models need to be prepared before you bring them into the printer software, as not all 3D models are ready to be printed straight away. There is a set of criteria that needs to be met to make your 3D model printable.

There are two approaches to fixing a 3D mesh: manually or automatically.

For a quick fix, I recommend using software that enables you to validate your 3D model, fix it and convert it to the right format before sending it to the printer. MeshLab and Netfabb are the ones I have used in several situations to prepare my 3D models, and I found both of them very easy and quick to use.

MeshLab:

  1. The 3D model needs to be in .stl format: you can use MeshLab to convert your design files from any format to .stl.
  2. Polygon reduction: it is advised that the files you print should be under 64 MB in size and have fewer than 1,000,000 polygon (triangle) faces; if your 3D model is larger than that, you can use MeshLab for polygon reduction.

Netfabb:

  • Netfabb is very useful for checking your model accurately and analysing all its features before you send it to a 3D printer.
  • I also found Netfabb very useful for scaling 3D models to real-world measurements.
  • The mesh needs to be closed, with no gaps or holes between faces and edges, and Netfabb can help you fix that using its automatic repair tool.

To fix the chloroplast 3D model, I decided to prepare it manually using the 3D modelling software Blender:

  • Each 3D model needs to be a single seamless mesh: looking at this model, it can be printed as two separate pieces that are eventually glued together, which also allows me to print the whole model in two different colours.
  • Wall thickness: the wall thickness of the model needs to be above the printer’s minimum. This was an issue with the outer piece of the model; I had to increase its wall thickness using the solidify modifier, making sure this didn’t affect or change the shape of the model.
  • The last step was to look for holes in the model and then convert it to .stl (these steps can also be scripted – see the sketch below).
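
For reference, here is a rough Blender Python (bpy) sketch of those manual steps; the object name and thickness value are placeholders for your own model, and the API calls are for Blender 2.8 and later.

```python
# Rough Blender Python (bpy) sketch of the manual steps above, for Blender 2.8+.
# The object name and thickness value are placeholders for your own model.
import bpy

obj = bpy.data.objects["Chloroplast_outer"]        # placeholder object name
bpy.context.view_layer.objects.active = obj

# Give the thin outer shell a printable wall thickness with a Solidify modifier.
mod = obj.modifiers.new(name="Solidify", type='SOLIDIFY')
mod.thickness = 2.0                                # in scene units; adjust to suit
bpy.ops.object.modifier_apply(modifier=mod.name)

# Export the result as an STL file ready for the slicer.
bpy.ops.export_mesh.stl(filepath="chloroplast_outer.stl")
```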

It is worth noting that Blender has an add-on called 3D Print Toolbox, which you can enable from the Blender preferences menu. This tool checks for issues in your model, such as intersecting faces, distorted faces, insufficient thickness and overhangs, before printing.

A final check of the model using the Mesh Analysis panel, which generates a heatmap of the problematic areas, is always a good idea.

Finally, below is the end result, which I am very proud of. What I like about 3D printing is that you can rapidly prototype your design, visualise it, and share it with other people who can test it or use it to stimulate a conversation around ideas or in facilitated lessons.


Chloroplast 3D print final result