
How can we tap into AI for learning experiences?

Podcasts and Audio | 30.05.2018

This month, Paul and the Kineo team are joined by Filtered CEO Marc Zao-Sanders to get under the skin of AI.

Paul Westlake  0:00  

Welcome to Kineo Stream of Thought a monthly podcast that features informal chat from the Kineo team about all things learning. I'm Paul Westlake, Solutions Consultant at Kineo. And today we're going to speak about all things AI.

Pleased to say this week I'm joined by...

Paul Welch  0:22  

 Hi, I'm Paul Welch, I'm a Solutions Consultant at Kineo.

Pete Smith  0:25  

I'm Pete Smith, Technical Team Lead at Kineo.

Marc Zao-Sanders  0:29  

I'm Marc Zao-Sanders, I'm the CEO of Filtered. Nice to be here.

Paul Westlake  0:33  

Thanks, Marc. Thanks for joining us. So I guess it's best to kick off with: what do we mean by AI, for people who are a bit confused by it? There's obviously a lot of talk in the press about robots killing everybody's jobs and everything being artificial intelligence this, that and the other. So who'd like to have a stab at what we mean by AI?

Marc Zao-Sanders  0:53  

I can have a go, and I'd love to hear your thoughts on that definition too. First of all, it's not very well defined: if you ask ten AI experts what AI is, you'll almost certainly get ten different responses. One way to think about it is simply as software that emulates human intelligence, and human intelligence encompasses, you know, computer vision, language processing and cognitive abilities. An AI system is artificial, synthetic: we've created it. And a lot of the progress has been in terms of what we do as human beings, so thinking of it in terms of human intelligence is, I think, intuitive and helpful for most people. But at the same time, like I said at the start, it's not a very well defined term. What do you think of that, guys?

Paul Westlake  1:55  

So I'll go with that. Does that mean machine learning is part of AI, or are those two terms interchangeable?

Marc Zao-Sanders  2:04  

Well, okay, so I'm glad you brought up machine learning. Machine learning, in my view, is a better defined term. Here you've got a system which gets better with experience. There's a task being performed over time, and some sort of feedback loop, so that the system you've developed (again synthetic, artificial, a man-made, computerised system) gets better and better with training data, or experience of performing the task. So for example AlphaGo and AlphaZero, the recent output from Google and DeepMind: those programs use machine learning, with deep neural networks, to play chess, Go or Shogi better and better with time, without explicit programming or teaching by a human. So machine learning is better defined: something getting better with time. And artificial intelligence is a broader term that doesn't necessarily include machine learning.
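To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch (nothing like AlphaZero's scale; the payoff numbers are invented for the example): a system that starts knowing nothing, tries actions, observes rewards, and measurably improves with experience.

```python
import random

# A toy "system that gets better with experience": an epsilon-greedy bandit.
true_payoffs = {"a": 0.2, "b": 0.5, "c": 0.8}   # hidden from the learner
estimates = {action: 0.0 for action in true_payoffs}
counts = {action: 0 for action in true_payoffs}

for step in range(10_000):
    # The feedback loop: mostly exploit the current best estimate,
    # occasionally explore something else.
    if random.random() < 0.1:
        action = random.choice(list(estimates))
    else:
        action = max(estimates, key=estimates.get)

    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    # Incremental running average: the estimates improve with experience.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the learner's estimate for "c" ends up highest
```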

Paul Westlake  3:13  

So in that example of Go though, Marc, did that have to have human input in the first place, to say here's how we would play the game, and show it numerous ways of doing that, so the machine then picks up on patterns and learns from them? So really, it still needs the human input in the first place.

Marc Zao-Sanders  3:32  

Well, you've got to lay down some ground rules, yes. In the example of AlphaGo, most of the progress was made by the program playing itself, you know, millions and millions of times. So after that initial input from the humans, which is a minimal, explicit programming input, you then set the thing to play itself. This is called hyper-learning, where you set up all sorts of adversarial interactions: basically, it's the machine playing itself many, many times, far more times than you could even construct human games for. And then it gets so good that, in the case of AlphaZero and chess, it basically solved chess in four hours. I mean, I used to play chess as a kid and I still play occasionally. Okay, we should have a game some time! John Yates plays too, so maybe we'll do that. But for people who play chess and know a little bit about some of the openings, you can basically view the output of those four hours of work as being the answer. So the objective truth is that, you know, the Queen's Gambit Declined is one of the stronger openings, and the King's Gambit is just terrible. And to see, after hundreds and hundreds of years of so many human beings playing chess, and computer chess over the last 20 years, which has become pretty strong, that we've just been floundering now you've got the answer... it's kind of poignant, and also a little bit painful, to see that machines can just do that so quickly. That's what happened in the case of chess, Go and Shogi when DeepMind got their neural networks onto it.

Pete Smith  5:32  

And the big difference, it's probably worth pointing out, is that this isn't actually particularly new: AI has been around since the 1950s, and it became unbeatable at tic-tac-toe pretty much immediately, because it's a simple game with simple rules. The difference, and the reason it's really taken off since the mid 90s, is just that growth of processing power, right, and the ready availability of all of that extra data. Something complicated like chess takes a lot more data, processing power and statistical analysis to crack from a machine perspective, but it is crackable. Go is several orders of magnitude more complicated, so it took a lot more data to get to a point where computers could actually start to beat Go players. But that's now happened.

Paul Welch  6:22  

It's interesting, isn't it? I read an article about the computer that beat the Go champion, and the team behind the AI said that if they changed the parameters of the board, for example, it wouldn't have stood a chance, where the human player would have been able to adapt a lot quicker. So I think that's something interesting about AI: it's very, very good at doing a particular thing, but where the parameters change outside of the conditions it was set up to achieve, maybe it falls down. So it's not true learning as such; it's more learning through trial and error. Well, I think so, and Marc is probably better placed than me to explain this. But as I understand it, algorithms are very, very good at doing a thing, and a lot of the general scaremongering about AI assumes there's this kind of super broad, deep intelligence that can just decide, I'm going to learn about that now. That's where the fears of the Elon Musks and so on come from, and I think with that, sometimes people get a bit confused.

Marc Zao-Sanders  7:19  

Let me jump in there, because there's some interesting stuff you've raised. So tic-tac-toe is really easy, right? No wonder that was one of the first tasks to be set. And quite soon after that, I think, Alan Turing wrote himself an algorithm for playing chess, to a basic level. Interestingly, his first algorithm was never computerised: it was just pen and paper, his musings. An algorithm doesn't necessarily need to reside in a computer; we just often think of it in those terms. But yeah, as you move down the list there's tic-tac-toe, then elementary chess, then Connect Four, and then full-blown chess, which got to human level about 20 years ago, when Deep Blue beat Kasparov. In all of those cases, the main difference versus what's going on now is that you had explicit programming input. What I mean by that is, it's a rules-based engine. In any given situation, the computer knows what to do: it looks at the legal moves and goes down some branches. So if I do this, and then the opponent does that, then my evaluation for that position is, say, plus nought point seven; in a different scenario, it's minus nought point three. You look at all of those scenarios and pick the number that's greatest. The point is that it will do the same thing in any given chess situation and apply the same rules. Whereas with deep neural networks and deep learning, which have come into their own in the last five to ten years in particular, you're not explicitly programming it; the computer is doing it itself. And that's the scary thing, as well as the exciting thing, about what's up next. So there's a big difference between rules-based, brute-force programming and computers doing it themselves, which is the case with AlphaZero and AlphaGo.
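Marc's description of a rules-based engine maps onto the classic minimax algorithm. As an illustration, here is a runnable toy version for tic-tac-toe (the simple game Pete mentioned), not real chess: it enumerates the legal moves, walks down every branch, scores the finished positions, and picks the best number. Given the same position it always plays the same move; no learning is involved.

```python
# Rules-based play: explicit programming, deterministic, no training data.
# Brute force: it explores every possible game, so it takes a few seconds.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    w = winner(board)
    if w == "X":
        return 1            # like Marc's "plus nought point seven": good for X
    if w == "O":
        return -1           # like his "minus nought point three": good for O
    if " " not in board:
        return 0            # draw
    nxt = "O" if player == "X" else "X"
    scores = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i, c in enumerate(board) if c == " "]
    return max(scores) if player == "X" else min(scores)

def best_move(board, player="X"):
    nxt = "O" if player == "X" else "X"
    moves = [i for i, c in enumerate(board) if c == " "]
    score = lambda i: minimax(board[:i] + player + board[i + 1:], nxt)
    return max(moves, key=score) if player == "X" else min(moves, key=score)

print(best_move(" " * 9))  # always the same answer for the same position
```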

Paul Westlake  9:30  

So where are we with something like, I don't know, Siri or Alexa? I'm assuming they're still almost working through a fairly rudimentary script; these things clearly aren't going out and thinking for themselves, are they, Marc?

Marc Zao-Sanders  9:46  

No. I mean, you've got some intelligent tech going on there: there's natural language processing, voice recognition and speech synthesis in the cases of some of those personal assistants. But I'm glad you brought that up, because, you know, for most people, how much does Siri or Google Assistant or Alexa actually help your life? Probably not that much; it probably irritates you more. And I think we're at the cusp: those guys, Apple and Google, are throwing loads of money into this, and they're going to get better and better, and in a few years' time it's going to stop being irritating and actually start being useful. And that's cool, and that's exciting, and that is maybe part of the five-year future of AI. On that, did you see the Google Duplex demo?

Paul Westlake  10:49  

I found it both amazingly impressive and then, very quickly, I thought: I've no idea how frustrating this would be if I was on the other end. And it was really interesting: I don't know if you saw, but a few days later Google came out and said, oh no, it wouldn't be exactly like that; we would say, for example, "this is a Google Assistant making this call". That was one thing. It almost felt like duping people, which I thought raised a slightly awkward question. The other example they actually used on the stage, and I was amazed they did, was: have you ever tried to book a doctor's appointment for your child? And I thought, that's a nice example, but it's never going to be able to do that. Because if you do actually try to book an appointment for your child, they'll ask things like: what was their temperature? When did they last vomit? And so on. That's huge, and I'd be really impressed if it could handle that.

Pete Smith  11:38  

But you can see that is absolutely the direction it's heading in. Because you're carrying this device around, it could potentially be listening in while you actually make all of those checks, so it could have all that information to hand. And I think you're right that our immediate response is "that's weird, that's not going to work yet". But we're going to find it incredibly convenient, and so people are actually going to start adopting that technology. And I think we're at a point where maybe our social responses just aren't quite keeping pace with the technological changes. We've changed a lot. But if you look at, say, voice control on your phone, it's pretty good, all the voice recognition stuff, and yet we're still stuck hammering away at a keyboard with our thumbs to try and send WhatsApp messages. We don't need to.

Paul Westlake  12:27  

You say that, Pete, but... I think, again, those voice trees when you phone up, and it uses your voice: say yes for this, no for that. I think quite quickly people find that almost frustrating and try to find a way to work around it. I know when I can say yes and get it to move on; I don't need it to talk through all this stuff. So possibly impressive the first time, but then I think it becomes a bit annoying.

Marc Zao-Sanders  12:52  

Well, I think there are two things there. What you're talking about, Paul, I think is voice recognition. So you've been on the phone to your bank, and rather than speaking to a human agent you're being asked to say yes or no, or your bank account number or whatever, and there's AI going on there to do the voice recognition. But what the Google Duplex demo showed us was a future where it's not just voice recognition, although it is of course that too. It's also natural language processing, so it's understanding what you're saying and can respond to really anything you say within that domain of, you know, getting a haircut or booking a restaurant; and then natural language generation, to say it back to you. And that was possibly the most impressive part of the demo. Google are typically very, very good at the PR, and I think they overplayed their hand a little bit with that demonstration, because that bit was so impressive. But the point there, and what Pete was getting at, is that if you have a bit more data, and we get a bit more comfortable with giving that data to the system, then we might be able to relax into having more of a natural conversation with a machine in order to achieve the ends we need to achieve. Or even that personal assistant having that natural conversation with another machine, which would be the next step: it takes humans out of the loop entirely.

Paul Westlake  14:30  

...being used internally, for logging technical tickets, for example, and answering those obvious questions first...

Paul Welch  14:38  

... I was just going to say, it's interesting because sometimes it works just as well when there isn't a person assigned to it. You know, Duolingo works; we know there's AI there, but there's no artifice, no pretending to be a human involved. The other one that struck me as really impressive was Jill Watson at Georgia Tech. Have you heard of that one? It was a chat assistant answering the student queries that came in on online lectures with 95 to 97% accuracy; obviously it was using Watson behind the scenes.

Marc Zao-Sanders  15:07  

... It tricked the students sitting there; they didn't know, and that was the potential controversy there. I mean, on your point, Pete, about people getting used to it psychologically and emotionally: when you have a conversation about what might happen in four or five years, it always feels like a big leap to get over. But the reality is that things don't suddenly appear in five years' time; it's gradual, it's incremental. If you look at, say, the mobile phone, and the smartphone in particular, we've had it for ten years. If someone had explained ten years ago the degree to which a couple of billion people on the planet would be addicted to and compelled to use this thing that sits in their pocket, that they'd ignore each other at meals, that they'd watch films on these devices more than they watch films with other people in the room, that would have been shocking, and I think not accepted. And yet it is the case. So in reality it's incremental, and we adapt; we are an adaptive species. And so I think we will just gradually get more used to it. That demo by Google was part of that process: it opened that door in millions of people's heads, and it's a little bit more acceptable than it was before they did it.

Paul Westlake  16:37  

And also, I would say that in one fell swoop they made everything that exists look really dated. My Alexa at home suddenly became massively frustrating: why can't you do that? Because I want it now. There's a lot of that sort of stuff. I was on a flight last week, and there was WiFi on the flight, and the guy next to me was moaning about the speed of the WiFi on the plane. It's just classic: we're sitting in this metal tube in the sky, you're on your laptop, and you're moaning about how fast this is. Nobody would even have thought of having WiFi on a plane years ago...

Paul Welch  17:15  

We're spoilt, aren't we? We really are. I'm going to stand up very quickly for Alexa...

every time you say Alexa, everyone swears at their phones!

Paul Westlake  17:26  

What I was going to say is, it's brilliant for my two young kids. They use it all the time, because it's pitched in a way, at their level, that they can ask it questions it's quite comfortable answering. And I found my eldest the other day answering the maths questions in her book by asking Alexa for the answers. Cheating, in a way, I guess. Yeah, it's just interesting.

Marc Zao-Sanders  17:45  

It's very good at maths, Alexa, or arithmetic anyway. And again, if you give an AI a narrow scope like that, arithmetic, with some voice recognition in there, it can return amazing results and be really useful for that simple use case. But it's when you start broadening it out that we're still a long way from being able to do too much. Coming back to the Google example, the Duplex thing: by their own admission this is a very, very limited task, just trying to book a restaurant or a hair appointment. And also, it wasn't a live demo. How many times did they have to call up before they got that workable demo? I don't know. Maybe it was just one; I literally don't know whether it was one or 20 or 100. But I'd love to see it working live now.

Paul Westlake  18:42  

Agreed. Bit of a segue there, Paul, because you sort of brought it back to education more than L&D, but maybe it makes sense for us to talk about that... So maybe we should look at what AI actually means for our industry, and for L&D in particular. Is this stuff already in place? What are people already using?

Paul Welch  19:08  

Well, I guess the big one in our industry that's come up is adaptive learning. Classification of content for curation, I guess, is another. People Googling for answers to questions involves AI, doesn't it? There's lots of it already out there.

Marc Zao-Sanders  19:31  

First of all, I think learning and development professionals really should be interested in this, partly because it's front page news, it's the headlines, it's a big issue for the human species, but also because it's about learning. We have developed a system that gets better at stuff over time; that's why it's called machine learning. It effectively emulates what we are trying to do as learning and development professionals all the time. I think it's especially interesting, and I'm biased, because, you know, we do AI to make recommendations, but I think anyone in learning should have an interest in this, and many do, which is why all these conversations are happening.

It's worth saying that, in terms of that education piece, we are helped by the fact that as consumers we're using AI all the time. Some of those examples you guys just gave: using Google to search, and Google are pretty good with algorithms, an "AI-first company" as they are branding themselves, which is smart. Google is maybe the most popular, but there's also YouTube, of course a Google company, with AI going on there, and Spotify. So in terms of entertainment, there's a lot of recommendation, algorithms and AI going on in our consumption, and then also in our news and information feeds. Think about how long you are on Facebook, Instagram, LinkedIn: it's a lot of time for most people, two or three hours a day. And normally it's time well spent, because the right information is getting in front of people, out of all the information that's out there, because of algorithms and AI. So that's happening just as consumers, and it bleeds into learning anyway, because we're all gathering information as part of our jobs, every single one of us.

And then there are some other areas of L&D where AI has started to be adopted more explicitly, like chatbots and, of course, recommendations. Someone mentioned the example of curation, and tagging and classification of learning assets is a big part of what we do. Again, it's a really good example of a relatively narrow task: take a learning asset with a title, a description, whatever metadata you've got, that needs a little more information attached, like "is this advanced project management, or is this to do with communication skills?", make the appropriate tag and move on. It's the kind of task AI has been built for. You probably don't want humans doing it, because it's expensive and, frankly, dull. That's one of the things we do in our algorithm stack, and I'm sure others will get to it in due course: a nice narrow scope, dull, so give it to the machines. That's what they're there for.
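As a rough sketch of that narrow tagging task (not Filtered's actual algorithm stack; the tags and training examples below are invented for illustration), a few lines of Python with scikit-learn can already assign a topic tag from a title or description:

```python
# Illustrative only: a tiny text classifier that tags learning assets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

labelled_assets = [
    ("Planning sprints and managing project risk", "project management"),
    ("Gantt charts and critical path analysis", "project management"),
    ("Giving constructive feedback to your team", "communication skills"),
    ("Active listening in difficult conversations", "communication skills"),
]
texts, tags = zip(*labelled_assets)

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, tags)

# Expected output: ['project management']
print(model.predict(["An advanced course on project scheduling"]))
```

In practice you would need far more labelled examples than this, but the shape of the task is exactly as Marc describes: narrow, repetitive, and well suited to a machine.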

Paul Westlake  23:00  

Yeah, as an example: as you were talking, I was thinking I'm not sure I've got that much stuff that uses AI. But, for example, as a photographer, in Lightroom now you bring photos in and they're automatically tagged, and you can then do a search over your whole catalogue, which in my case is literally hundreds of thousands of images, and say, show me something that's yellow with a dog in it, for example. And it's amazingly accurate, and amazingly quick too, almost like tagging on the fly.

Paul Welch  23:28  

I think it could also help with the silver bullet, really, of being able to show the business impact of investment in L&D. It's all part of data science and big data, isn't it, with the ability to track more in learning now, things like xAPI. You could maybe use machine learning to do that kind of big crunching of numbers to say: this behaviour, which was changed by this intervention, has made this impact on the business. I think we should be doing more of that...
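For context, an xAPI statement is a small "actor, verb, object" record that a learning record store can aggregate for exactly the kind of number-crunching Paul describes. A sketch, with illustrative values (the learner, course ID and score are made up):

```python
# One illustrative xAPI statement, expressed as a Python dict.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-GB": "completed"}},
    "object": {"id": "https://example.com/courses/negotiation-101",
               "definition": {"name": {"en-GB": "Negotiation 101"}}},
    "result": {"score": {"scaled": 0.85}, "completion": True},
}
```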

Marc Zao-Sanders  23:53  

...I'd build on that. Today, AI is mostly not the solution that you need, even though it gets a lot of our airtime. Often the best solution for a business problem will be a manual job, or a semi-automated job, or basic automation. And that could just be rules-based; it might even be a spreadsheet where some IF functions will deliver what you need. If that's the case, then do that, because there's no point engaging AI. Where AI comes into its own is when there really is a lot of data, and you're trying to do something with a narrow focus which can't be carried out by a simple rule. Coming back to the example of tagging assets: in theory, if you had a relatively small pool of types of asset, you could get away with pure automation, just rules you set up at the start that give you the data you need for every single one of your assets. But as soon as you enter the real world, people describe things in different ways, you get all sorts of words in there, and the data basically becomes messy. That's where you may need AI to come in, because it can be a little more flexible than a strict rules-based, if-this-then-that system. So, in summary, two points: one, AI really is not always the solution; and two, when it is, it's normally when stuff has got a bit messy and there's a lot of data.
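A toy contrast of Marc's point: a strict if-this-then-that rule (the spreadsheet-IF-function level of automation) works while the vocabulary stays clean, but falls over as soon as real-world descriptions get messy, which is where something statistical like the tagging sketch earlier earns its keep. The rules below are invented for illustration:

```python
# Invented rules: exact keyword matching is fine for a controlled vocabulary.
def rule_based_tag(title):
    t = title.lower()
    if "project" in t:
        return "project management"
    if "feedback" in t or "listening" in t:
        return "communication skills"
    return "untagged"

print(rule_based_tag("Planning sprints and managing project risk"))  # tagged
print(rule_based_tag("Keeping stakeholders on side"))  # untagged: messy reality
```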

Paul Welch  25:33  

Yeah, I think that's really interesting. Related to that, I guess, is where AI could be used in learning for assessment. Normally, because of the constraints we operate under, it's "choose an option from A, B, C, D or E". With AI you could say: right, this is an open input format; you write what you think is the correct answer, we'll interpret that, and off the back of it give you feedback based on what you wrote...

Marc Zao-Sanders  26:02  

...A much richer, broader pool of data.

Paul Westlake  26:06  

So is that now? Or would you say that's what's coming? Let's challenge that a bit. We are using some basic AI now, or maybe basic is an unfair word. But where does this go? If we're having this conversation in five years' time, where would you like it to be?

Pete Smith  26:24  

I think where we are at the moment is actually really well shown by the WildFire tool, because that's intelligent enough to search through a chunk of text and identify keywords. And then when you actually come to answer the question, it's not a multiple choice question; it's a chunk of text with key words missing from it...

Paul Westlake  26:42  

...Sorry, just taking it back a step: so WildFire is a tool that creates content?

Pete Smith  26:49  

Yes, it's an automatic content authoring tool which you can feed content to... Exactly. It will pull that content in, turn it into an assessment, and mark the assessment for you. So that's exactly where we are at the moment. What Paul's describing is a bit further forward, and I think one of the big challenges is actually getting enough information to generate meaningful questions and assess how well people are doing. To really do that you need a lot of data at your fingertips, which is why the likes of IBM, with Watson, which you can buy access to, have really cornered the market at the moment, and why that sort of AI is very expensive as a result.
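As a crude sketch of the general idea Pete describes (not WildFire's actual algorithm): pick out candidate keywords from a passage and blank them out to make a cloze question. The stopword list and keyword heuristic below are invented for illustration:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "that", "it",
             "with", "using", "their", "over"}

def make_cloze(text, n_blanks=2):
    words = re.findall(r"[A-Za-z]+", text)
    # Naive keyword choice: longer, non-stopword, most frequent words first.
    candidates = [w for w in words if w.lower() not in STOPWORDS and len(w) > 4]
    keywords = [w for w, _ in Counter(candidates).most_common(n_blanks)]
    question = text
    for kw in keywords:
        question = re.sub(rf"\b{kw}\b", "_" * len(kw), question)
    return question, keywords

q, answers = make_cloze("Machine learning systems improve with experience, "
                        "using feedback to update their parameters over time.")
print(q)        # the passage with the chosen keywords blanked out
print(answers)  # ['Machine', 'learning']
```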

Marc Zao-Sanders  27:38  

Yeah, well, they've certainly cornered the market in terms of PR; they've just owned that space with Watson, which is really impressive. And I don't mean to diminish it: the stuff they've done is amazing, especially some of their bot work. But coming back to the example of WildFire, and I've heard Donald Clark talk about it, it's a great example of using intelligent software, AI, to go through a lot of data, because that data is just the corpus of all that humanity has written, be that on Wikipedia or somewhere else, and then generate whatever you want from it. And it's combined with some pedagogy. Those of you who follow Donald Clark's blog posts (and if you don't, you should, because they rock) will know that one of his many rants is about multiple choice questions: replacing them with filling in the blanks demands more of you cognitively, and therefore it's more engaging and will stick with you better. So that's an example of AI combining with a concept very close to home for learning and development people, pedagogy, to give you a product which could be really useful for lots of people...

Paul Welch  29:04  

Just a couple of others that sprung to mind, maybe... 

Paul Westlake  29:06  

Five years' time, don't get too far ahead!

Paul Welch  29:08  

... Well, it's kind of already here a little bit, but there's using facial recognition to mark attendance at events, or to confirm that the person sitting certain exams is who they say they are. And the other one I thought would be quite interesting to ponder is: will the subjects we're asked to produce training on change because of AI, because the tasks that people need to learn about are being done by AI? So some of the more mundane line management type training that, you know, needs to be provided could in future be provided by AI.

Paul Westlake  29:39  

Can you hold that just a second? Because I've got more of a question for Marc, really, on the back of that, and this is me with my concern about where it's all going. So, Marc, do we have any sort of evidence that AI that's had human input, for want of a better word, in the first place learns better or quicker, or does a better job, than when we just let the machine get on with it itself? For example, I'm going to compare Spotify, who, you know, do a great job of serving up content based on other things you've listened to, and I'm sure there's lots going on there. I'm also very aware that when Apple launched Apple Music, they made a real big thing about: yeah, sure, we've got something crunching data in the background, but the really important bit of this is human created and humanly tested, and that's why it does a far better job. Is that just marketing speak? Or is there anything that sort of backs that up?

Marc Zao-Sanders  30:36  

I think any sort of intelligence needs to learn from experience. So with machine learning algorithms, you need to feed them lots of data, but the right kind of data, and that data may be provided by human activity. For example, one of the early Google experiments was to identify photos on the web that had cats in them. Now, how are you going to test how good your machine is unless a human, or many humans, have actually done that? That's actually why that data set was used: there were pre-existing tags, on Instagram or wherever it was, that indicated whether or not there was a cat in the photo. So coming back to whether having a human in the loop, as they say, is important: it's probably not in every case. There are some cases, and we touched on AlphaGo and AlphaZero in terms of playing chess, where you just didn't need to have a human involved at all. But with the slightly broader, more general tasks, like identifying pictures, the more you can do them with humans, the better the results are going to be. And probably even more so with music. Over time that may change, because the computers are going to have access to more data and obviously become smarter in processing power and all those things. But for now, humans are pretty key to high-performance machines.

The other thing, just going back to facial recognition: I saw an article about the royal wedding, which was just last weekend, and how one of the papers, probably the Daily Mail, used AI to find all of the celebrities that were there. So there are some pretty dumb uses of AI as well. But you can see where that's potentially scary: you could be anywhere, and some system is going to know. So there's the gossip side of it, and the Daily Mail want to run that story, but there are issues, no doubt, around privacy and what it is to be a free-roaming human being on this planet. And of course data and privacy are very much on people's minds with what's happening with Facebook and GDPR and what have you. So I think those sorts of important personal, social, philosophical considerations are coming up in our minds, and we need to think more explicitly about what we personally feel comfortable with.

Pete Smith  33:35  

Yeah, absolutely. And AI is great at unlocking those kinds of philosophical debates and asking the kinds of questions you would never normally have thought to run into, which are normally just a product of science fiction. But we are actually starting to enter the world of science fiction a bit...

Paul Westlake  33:50  

...a great example of what you're saying, or what I think you're saying, is this: I've been working with the guys at Rolls-Royce, talking to them about automating cars and how that works. And they were explaining that the technology is already there, but it's the philosophical questions that need answering before they can put it in place. So they were saying: what do we tell this system? Do we tell it "you're the driver; we're here to protect you as the driver", with safety features built in? If someone runs into the road, do we drive you into a wall, or do you hit the child? How do you program that? How does it learn that? Those are the sorts of issues they're looking at...

Pete Smith  34:27  

And, absolutely, and also: do we sell people the "moral override package", so it's really an oligarch that rates their safety above everyone else's, for a small fee!...

Paul Westlake  34:36  

I mean, completely off the topic... I love their sum-up line, which is: but think about it, if we get this right, no one will ever need car insurance again, because the onus is on the car manufacturers to make sure their cars are safe and therefore don't have accidents.

Paul Welch  34:51  

Maybe a more mundane point, but I think there are some things you can solve by just sheer brute force computational power, like maybe the Go and the chess things. With self-driving cars, I think there are other problems to solve than just AI. Like in chess (I might be mistaken, Marc may know) I don't think the computers are very good at strategy; it's move by move. There's just such a large bank of data to draw on and learn from, but long-term strategy I don't think they're so good at, though they can get round it by that sheer grunt of computation. Driving, self-driving cars, I think might be slightly different.

Paul Westlake  35:31  

I mean, we're running a little bit long, so maybe we need to tie up. My final question was going to be the usual bit of a roundtable: what are your hopes beyond the five years? Where do you think AI is going to change things, not necessarily L&D, but if we could tie it back to L&D that'd be great? Where do you see it being in maybe the next 10 or 15 years' time? And I'll start with a bit of a concern. My concern is that I want all of the good things that AI brings, but I don't want any of the things that I'm worried about. So, for example, I don't want to have to say "Alexa" before I say everything; I want it to be listening, but then I don't want it to be listening to some of the conversations I'm having. I have no idea how that works...

Marc Zao-Sanders  36:12  

You want to have your cake and eat it!

Paul Welch  36:13  

...That's a good one! For a really smart use of it, I think you can see it being used really well in classrooms, and it could be everywhere. So it's analysing what you're doing and then providing learning at the point of need, based on the context of what you're doing. That, I think, could be really, really useful.

Paul Westlake  36:39  

...offering help because something can see you're struggling...

Paul Welch  36:42  

You remember the paperclip? 

Paul Westlake  36:46  

Changing very quickly to the dog!?

Paul Welch  36:49  

I think that could be really, really interesting. For me, going back to the big data point: analysing how learning can really be used to make a difference for businesses, or for the individual...

Paul Westlake  37:02  

Pete, and then we'll leave the last word to Marc...

Pete Smith  37:04  

Um, I'm seeing nothing but fear and terror in the future as a result of all this! Actually, picking up on Paul's point about optimising people's performance in business, there's a whole question about what sort of jobs people have, and there's also the intrusion of technology into business. Already, Amazon employees are tracked every single second of the day, and you can see a future where we are all tracked. Every single phone conversation you have is monitored and assessed for how effective you were, and then you could have training interventions based on how you did in that particular instance: you'd instantly find out how to do customer service a bit better. All of these things, I think, are quite likely to come in with AI. And I think we need to have a proper debate about what it is to be human, what we are still better at than machines, and where we actually fit into that future world of employability.

Paul Welch  38:02  

I think that's all true, and it's part of a bigger challenge that as a society we've got to overcome. The high street disappearing, for example: that's all been driven by technology.

Paul Westlake  38:15  

But we're still riding the crest of the cool wave at the moment, aren't we? It's new stuff, isn't it great, and it's things talking back to me.

Paul Welch  38:22  

That's not necessarily got much to do with AI, but it's a technology-driven problem that is probably going to have a big impact on the way we interact and live our lives as a society...

Pete Smith  38:33  

I think autonomous cars are a good illustration of which way we're going. At the moment, people like IBM are very keen to talk about augmented intelligence instead of artificial intelligence, the idea being that the technology is less scary: it's giving you superpowers. At the moment, if you've got a modern car, it can do some extra braking for you, warn you about blind spots, navigate for you, do a bit of driving in traffic, and park, which is all great stuff. But you can see that you've already got autonomous cars on the roads which can drive themselves. The one makes you better; the other sheds millions of jobs worldwide. And that's what we're facing.

Paul Westlake  39:16  

So, Marc, we need to wrap up, and we were going to hand the last word to you. Based on what you know now, where do you see AI taking us in the next 15 years or so?

Marc Zao-Sanders  39:30  

15 years? ... If you think about AI as a powerful technology, and technology is just there to make our lives better and easier... once upon a time technology was, you know, a knife; then you go through the phone and computers, and now you've got AI to make our lives better. And you can see it happening already: it's taking away some of the more mundane tasks, like the tagging in curation we were talking about earlier. So our lives should become more interesting, in theory, because we're doing more of the human tasks while more of the dull tasks are taken up by AI, and as AI gets more advanced it will be able to take on even more of that sort of responsibility. So if you think about AI as being an intelligence that sort of sits with you, it's kind of like your mum, your wife, your best friend, your shaman, whoever you go to for advice: constantly there as a mentor to make recommendations, to steer you away from mistakes, and to enhance you so that you can be the best you that you can be. I think that's the direction we're headed in, and you can see that Google, Apple, those guys are headed in that direction. They'll be achieving that sort of thing for humanity, usefully, in the next ten years, as you say, and that will make our lives better, as technology has always been designed to do. We've got to keep some ethical issues in mind, but the likes of Google are doing a good job of that. I'm not quite as worried as some of you guys are!

Paul Westlake  41:35  

Thanks again for your time. I don't know if you want to have a quick plug for the Slack group where people can pick up with you about AI?

Marc Zao-Sanders  41:43  

Yeah, okay, thanks for that. We started a Slack group a few months ago. It's just for the L&D community to share ideas. At four o'clock GMT on Tuesdays we have a live session for an hour, but it's really not our session; it's just whoever wants to go on and talk with each other, share ideas, share links, discuss. About 1,000 people have signed up so far. There's an AI channel, there's a curation channel, and you can create channels yourself. Slack is just a really slick, robust, good way of sharing information, better than LinkedIn, I think, for a closed community like that. So please, anyone, feel free to join, Tuesdays at four, or outside of that.


Lovely, thanks for your time again, Marc.


Cheers Paul, cheers guys.



Your speakers are


Marc Zao-Sanders is CEO and co-founder of Filtered, a learning technology which uses AI to make intelligent learning recommendations.
Paul has over 15 years’ experience in the elearning industry. He was responsible for the development and implementation of the Adapt Learning responsive design framework. Paul's contributions to technology innovation have been integral to the business and in 2015 he was recognised as Elearning Designer of the Year.
Paul was previously a Solutions Consultant at Kineo.
As a Technical Team Lead, Pete manages our team of Senior Technical Consultants and Front End Developers as well as taking accountability for the technical robustness and suitability of Kineo’s elearning and learning content. Pete also helps drive forward technical innovation working with our Technical Director and Head of Innovation to identify new opportunities for Kineo to branch into. Pete has a key role in the development of our Adapt framework and technical roadmaps for our proprietary tools and development.