Safeguarding podcast – The Truth Engine with Peter Cochrane OBE

Former CTO for BT, Peter Cochrane OBE proposes “Truth Engines” as an antidote to the algorithm-driven social media world which defines the digital context in which children are immersed and in which they face a range of harms, pervading all of which is an absence or a distortion of Truth.

Can we accurately and reliably identify truths, half-truths, distortions, falsities and lies? Can we make judgment calls fast enough? Can we continuously cleanse the net? These questions and more in a fascinating discussion with one of Britain’s foremost technologists:

For those that can’t hear podcasts, there’s a lightly edited transcript below:

Neil Fairbrother

Welcome to another edition of the SafeToNet Foundation's safeguarding podcast where we discuss all things to do with safeguarding children in the digital context. Some people claim that we are living in a post-truth world and point to Trump's presidency in the US and to Brexit in the UK as proof points of this.

The algorithm-driven social media world defines the digital context in which children are immersed and in which they face a range of harms which have been classified into four categories by UK Kids Online, and they are: Aggressive risks, Sexual risks, Values risks, and Commercial risks. Pervading all of this is an absence or a distortion of truth. Yet naive children are supposed to thrive in this toxic environment that we've constructed.

Technology is clearly part of the problem and technology can be used as part of the solution, and I can’t think of a better technologist to discuss this with than today’s guest who is proposing the possibility of what he calls a Truth Engine, and he is Peter Cochrane, OBE. Peter, welcome to the podcast.

So Peter, can you provide a brief resume of your outstanding CV so that our audience has a context about your background and your experience?

Peter Cochrane OBE

Oh, sure. I was very ill at the age of 11 and I was confined to bed for about six weeks and at that point I got into technology and by the age of 15, I had a huge collection of World War Two surplus radios that I hacked about and I built transmitters and things. I became a radio Ham, radio control, hi-fi. I built everything. I worked in a radio and TV shop. I dug holes in the road for the GPO, became a lineman and I did all sorts of jobs to do with maintenance of electromechanical switches and sort of stumbled into university at the age of 22. Did a five year first degree, came out with a first class honours degree in Electrical Engineering, came down to BT’s research labs, and I was writing machine code and building test equipment and I sort of slid upwards in the organization taking on more and more responsibility.

The OBE was a result of me getting hold of a project and making it work, and it was to install the first fibre optic cable across the Atlantic. And slightly after that I got fibre to the home. And I then started to build a new research lab and we got into artificial life, artificial intelligence, gamification, all kinds of stuff that you perhaps wouldn't expect a telco to do, including remote surgery using telepresence, when we had to have 32 dial-up telephone lines on ISDN to get the bandwidth!

And so my career has just been like that. It’s just been full of eventful things. So right now, I retired from BT 20 years ago. I’ve had my own company. I’m now moving into academia. I advise Facebook. I’ve also been an advisor to QCRC, that’s the Qatari Computing and Research Centre, where my boss was both female and 83!

So, I have this spectacular grouping of people that I get to work with, all ages, all colours, creeds, disciplines, all kinds of interesting backgrounds. It makes for a very rich life. So I’ve sort of stumbled into this topic of the Truth Engine through various routes and people have contacted me and said, will you help us? Will you participate?

Neil Fairbrother

As a technologist, what do you see as the problem with the online context? What do you see as being the problem? I see things, for example, from the point of view of the child. What do you see?

Peter Cochrane OBE

Any technology that we create is absolutely and utterly benign. What happens is people get it and they use it. So my favourite example for students is a hammer. What a wonderful invention. It transformed the human race's ability to build things. Unfortunately, three years after it had been invented, somebody killed his wife with it, and you've invented a murder weapon. And so, today's very reactive society would say, stop the production of all hammers. And what worries me a lot is that the technological ignorance, if you like, or lack of thinking, might see us throw out the baby with the bath water. Social networks, the technologies of computing and communication, are phenomenally powerful, incredibly successful in saving human lives and making human lives better. But then up pop a few bad people doing bad things, and the danger is we overreact. My thinking is we've got to find a solution to this.

So this is what worries me in a single sentence. I meet children who do not believe that we landed on the moon! And that the earth is flat, and a lot more. So, I keep saying this in conferences: there are several wars, well, there's only one war, but there's the terrestrial war, the ground war, there's sea warfare, there's air warfare, there's space warfare, there's cyber-warfare, and the worst one of all, the one that's going to wipe out the whole of the human race if we're not careful, is the information war, the distortion of truth.

Neil Fairbrother

Yes. Interesting that you should use the war analogy because in your presentation “How to Build a Truth Engine”, you do refer to the enemy within and the enemy without. What do you mean by that?

Peter Cochrane OBE

Well, the enemy within is us. The enemy within is powered by the fact that truth is hard to determine and a lie so easy to believe and propagate. So I always quote an American author, Mark Twain, and I'm not sure this belongs to him or whether he originated it, but he is very often cited as saying, "A lie will run around the world twice while the truth puts its shoes on". It's so very true. So, people don't fact check, they're lazy; the enemy within is our negligence, our laziness, and our willingness to believe bad things.

The enemy without ranges from organizations, political organizations, to rogue states who are using the enemy within, us, seeding ideas out there and just watching them propagate. And the first phase of this is to absolutely bury people in information, some right, some wrong. People don't know what is true anymore. They don't know what's right. They don't know what's wrong. Nobody's doing any fact checking.

And so you're seeing quite an interesting breakdown in the United States and in the UK, which are probably the two leading cyber-nations in many respects. This is extremely worrying. The MMR jab is a good example of one person totally obscuring the truth, because people panic and are willing to believe something that's not true. They don't check it out.

Neil Fairbrother

Another, coincidentally American, author that you quote is the science fiction author Isaac Asimov, who I certainly grew up with and thoroughly enjoyed reading, and he says: “There is a cult of ignorance in the United States and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that my ignorance is just as good as your knowledge”. Do you see this as a global contagion?

Peter Cochrane OBE

We depend, as a nation, on our brainpower. We have virtually no natural resources. If we can't produce things with our brains and hands, we're dead in the water. And here we are, at a university. If you go around any university in the United States or the UK, you will find it very difficult to find a PhD student who is from your own country.

I always refer to immigrants as "Schrodinger's immigrant" – they are at once stealing all our jobs whilst lying around doing nothing and living off the state. And the reality is, if you turn away these people, we will have no farms, no NHS. We will have no hospitality industry whatsoever, and industry will really suffer. And academia will more or less just collapse under its own weight, because we're just like the USA: we're not producing people with the intellectual drive to create new wealth.

Neil Fairbrother

So, what can be done? You say in your presentation that we should fight fire with fire, but also that, for example, filtering content just to revert to the online context, filtering content isn’t without risks. So what are those risks and how could they be averted?

Peter Cochrane OBE

The big problem with any form of censorship is that it becomes driven by forces that are not themselves as pure as they would like to think. So human bias comes in, and who watches the watchers? Who is going to be the censor? Who has the right to tell you and me or anyone else what we can see, what we can watch, what we can read? I don't believe anybody does. However, I do believe as a society we have to step up to the plate and try and filter out the sedition, if you like, the sexually outrageous, all the really bad stuff that's out there. And so that is a moral judgment for the society. Now the reality is, those people who think you can put fines on companies like Facebook and Snapchat and all these others and thereby actually control it have got to be kidding themselves.

Since the government, our government, announced that it was going to take action, the last time I looked, five new social networks have been set up. They have no central organization. They are pretty much dark networks. The government won’t even be able to see them, let alone regulate them. And unfortunately we have people in power who are clueless about technology. And so this again is really quite damaging.

Years ago I was involved with an effort to cut back on paedophiles, and the immediate reaction of ministers was, "Let's go get him!". My reaction was, no, no, no, don't do that; in fact, let's put up some honeypots. Let's put up some sites, let's encourage them, let's see who they are, let's establish where their networks are, let's find not just the little guys but the big guys. Where are the big operators in this? And once we've identified all of them, we go out one night and get the lot. But the Government didn't take note of that. Nobody thought it was a good idea. They attacked, they got all the little guys and gals, and the big boys got away. They went underground. They're all on the dark net and it's thriving, and we've now got a massive problem of trying to find this stuff.

Neil Fairbrother

Yes. In fact, I was at the Internet Watch Foundation’s annual report launch yesterday at the House of Lords, and they are doing a fantastic job certainly in the UK, but they are struggling with getting other countries to focus on this. And most of the images that they detect are offshore and therefore their powers are somewhat restricted.

Now you do ask some big questions about how we can tackle this problem:

  • Can we accurately and reliably identify truths, half-truths, distortions, falsities and lies? And you give that a tick.
  • Can we establish a new framework and metrics? and you give that a tick as well.
  • Can we make judgment calls fast enough? Now here you start to wobble a little bit because there’s a question mark and a tick.
  • Can we block or attenuate the pernicious? Again, you’re not sure.
  • Can we continuously cleanse the net? Now that’s an interesting concept that is very much open to debate and [finally]..
  • Can we educate society to the risks?

So what are your answers to those, particularly the last three?

Peter Cochrane OBE

Well, I look at this pretty much the way I look at security, and I feel that if my car out there kept breaking down, I'd be pretty rattled. It's for industry to give me a product that works. And I don't think people should be responsible for the security on their laptops; it should be done for them by industry, by us. It needs to be automated, and we know how to do that. It's called an auto-immune system. And so we almost need an auto-meme-protection system out there.

Let me tell you what we can do, and we can do it really well. The most surprising thing I've done is I got involved and helped with a global hackathon. We had 130 nations, 42 teams. The teams were anything from one to 20 people, and they were given a week to put together some natural language software to see if they could detect propaganda and fake news. To my absolute amazement, in one week they were getting a success rate of over 86%. Now this is really extraordinarily good, but this is only one technique.

I have many, many more techniques I want to put in the Truth Engine. So let me tell you about one more; it's a relatively easy one to do but incredibly powerful. You could go back to the 1950s, say, and you could see a report by the BBC that 75 people were killed in some mining incident. And then you follow that report, and you find that later on it turned out that 82 people had died. So there was a big error there. How did they get that wrong? And then you look at the reports and you say, well, you know, the report says "according to local sources, this is the official figure", in which case the BBC gets a good tick.

And you keep doing this for news event after news event after news event. And you say, well, that's interesting, because the BBC are getting about 97% of their reporting quite correct. And then you have a look at the Washington Post and find they're at 98%, and then you look at somebody like Breitbart and you find it's like 32%. And then you look at something like the Daily Express, and it's like 68%. And they will put up numbers like the Russian atomic power station catastrophe, where the headline was "6 million people are going to die in Europe". It was like, come on, how many people have died? 17. That's a rather big error. So if you keep doing this, you can run a projection since 1950: you can see whether the BBC have more or less held steady, or whether they've strayed from the straight and narrow a little bit, the error bars on their stories have increased, and their "truth rating" has gone down. You can do this for every source.

Now you’re in a wonderful position of being able to construct an AI machine, an artificial intelligence, because now you’ve got the weightings of truth for each organization. So all this stuff comes out and you say, well, is it true or isn’t it? So now you can extend this to scientific papers and individuals. We could say, let’s have a look at Neil and we’ll see how accurate is he on his reporting, so when you put something out, we can say, oh, this is great, Neil’s in the 95% bracket, so you know, we can put a lot of credence on what somebody says. That’s one part of the inference that we can apply.
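The source-scoring idea Peter describes – compare the figure an outlet first reported with the figure later established, accumulate the errors, and turn the history into a per-source "truth rating" that can weight an AI's inference – could be sketched as follows. The outlet names and report histories here are invented for illustration; they are not real data, and a real Truth Engine would obviously use far richer error models.

```python
# A minimal sketch of per-source "truth rating" scoring, assuming each
# historical report can be reduced to a (reported, established) pair.

def truth_rating(reports):
    """reports: list of (reported_value, established_value) pairs."""
    scores = []
    for reported, established in reports:
        if established == 0:
            continue
        # Relative error, capped at 100% so one wild claim can't dominate.
        error = min(abs(reported - established) / established, 1.0)
        scores.append(1.0 - error)
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical histories: (figure as first reported, figure later confirmed)
sources = {
    "Outlet A": [(75, 82), (100, 103), (17, 17)],
    "Outlet B": [(6_000_000, 17), (500, 40)],
}

ratings = {name: truth_rating(history) for name, history in sources.items()}
for name, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rating:.2f}")
```

Run over decades of reporting, ratings like these are exactly the weightings Peter says the AI machine can use when judging a fresh claim from each source.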

Neil Fairbrother

Now you refer to artificial intelligence or AI as it’s often called, and there is also something known as machine learning which I think is related but different. What are these and how do they relate to each other?

Peter Cochrane OBE

Well, when you build an AI engine in this context, you have to recognize that once the bad boys discover that you're using techniques to detect their fake news and propaganda, they will change. So there's no good building a machine that you bolt down. You need a machine that continually learns. So the machine has to keep looking and learning and learning, and giving you all the trend lines, all the expectations and all the parameters. And so the AI can actually become a learning machine. It can adjust itself with time. So that has to be built in. But there are several other techniques that we can throw at this, and I'm fairly confident that we can do quite a good job. So here's another thing that I'd like to do. I know we can actually do it; I've got a company that can do this.

You'll see something like 200,000 reports… the Queen's just died, turned out to be wrong, or some other stupid thing. And then what you can do is trace it back. You can go through the net, you can look at all the addresses, and you can see that it goes back and back and back, and it'll turn out to be a guy in a shed in Illinois. And here's one of the failings of the human race: we associate quantity with truth. So a single posting we may disregard, but with 200,000 postings on the same thing, or 20,000, or 2,000, we're more inclined to agree: if all the newspapers are saying the same thing, it's got to be true.
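The trace-back Peter describes – 200,000 postings collapsing to one guy in a shed – amounts to following repost links until you reach posts with no parent and counting the distinct origins. A toy sketch, with a made-up repost graph standing in for the address-level tracing a real crawler would do:

```python
# Toy illustration of "quantity is not truth": collapse a burst of postings
# to their independent origins by walking a hypothetical repost graph.

def find_origin(post_id, reposted_from):
    """Follow repost links until we reach a post with no parent."""
    seen = set()
    while post_id in reposted_from and post_id not in seen:
        seen.add(post_id)  # guard against cycles in the graph
        post_id = reposted_from[post_id]
    return post_id

# Hypothetical repost graph: child -> parent
reposted_from = {"p2": "p1", "p3": "p1", "p4": "p2", "p5": "p4"}
posts = ["p1", "p2", "p3", "p4", "p5"]

origins = {find_origin(p, reposted_from) for p in posts}
print(f"{len(posts)} postings, {len(origins)} independent origin(s)")
```

Five postings, one origin: the apparent weight of numbers evaporates once provenance is traced.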

Neil Fairbrother

This is what you referred to as the “illusory truth effect”?

Peter Cochrane OBE

Yes.

Neil Fairbrother

Okay, so your Truth Engine then, let’s unpick that. What are the constituent parts? What are the components of a Truth Engine? How would it work?

Peter Cochrane OBE

Well, first of all, you've got to be able to take either a printed feed, a voice feed or a video feed, and you've got to be able to take that in, in any language. You've got to be able to abstract the meaning, so you need natural language processing in all the domains. And then you've got to apply some inference. So let me give you an example. We have now got machines that can do this, but I might say to you "I love you". Or, "Hello Neil. I love you". So what am I inferring with those two expressions of the same thing? For a flat, dumb machine interpretation, we've got some kind of romantic thing going on here. It doesn't understand sarcasm. It doesn't understand any other medium. So that is important.

The other things are: who wrote this stuff? Who is saying this stuff? Who is on camera? And you have probably seen those wonderful falsifications of President Obama saying things that he would never say. You have to be able to recognize all these things. And then, when you've abstracted what that is, you have to put it in context. So the first thing you can do is: there are 170 fact-checking organizations out there worldwide. Interestingly, there are none in China and Russia. Surprise!

So you check against all the fact-checking organizations, and a good one for the present climate in the US and in the UK is Politico. They do a good job of checking the statements of Presidents and individual politicians and things like that, but it takes a long time because it's done mostly manually.

But that's another sort of benchmark. You then ask a simple question like: who the heck is that, the author? Who are they paid by? What is their motivation? What is their history? What have they published before? Are they left or right leaning? Have they got a hidden agenda? All that stuff you can find out, if they've got a history. For some organizations it's blindingly obvious that they are corrupt. Others who are supposed to be politically independent might have a leaning in one direction or the other that they ought not to have, but that shows up. So the statistics of that are actually all available. We've got a huge amount of data that quantifies all of that. So this all helps; it all goes into the AI machine as factors in making the decision.
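The signals just listed – fact-checker agreement, the source's historical truth rating, the author's track record – ultimately have to be combined into one judgment. A hedged sketch of the simplest possible combination, a weighted average; the signal names and weights are invented for illustration, and a real Truth Engine would learn such weightings rather than fix them by hand:

```python
# A minimal sketch of combining credibility signals, assuming each signal
# has already been normalized into the range [0, 1]. Weights are illustrative.

WEIGHTS = {
    "fact_check_agreement": 0.5,  # fraction of fact-checkers that corroborate
    "source_truth_rating": 0.3,   # historical accuracy of the outlet
    "author_track_record": 0.2,   # historical accuracy of the individual
}

def credibility(signals):
    """Weighted average of the signals for one claim."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

# A hypothetical claim from a historically reliable outlet and author,
# largely corroborated by fact-checkers.
claim = {
    "fact_check_agreement": 0.9,
    "source_truth_rating": 0.97,
    "author_track_record": 0.95,
}
print(f"credibility: {credibility(claim):.2f}")
```

Even this crude linear blend shows the shape of the decision: no single signal settles the matter, but each pulls the score up or down in proportion to how much it has historically been worth trusting.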

Neil Fairbrother

Is this one super, super, super computer that sits there in the middle of the Internet somehow, and everything has to pass through that, or is it a distributed mechanism? What form would it take?

Peter Cochrane OBE

In my view, it needs to be in some way distributed, and I'll tell you the reason why. We don't need one, we need five or ten of them, and here's the reason. There are many, many examples already with AI… if you live in the US, for example, a lot of the judicial system is being automated. You commit a crime, the details are fed into a computer, and quite literally a sentence is passed – slightly worrying! We're facing that conundrum here because our legal system can't cope with the sheer volume of the stuff. All right? So what happens in the US is, if you're a white man and you commit some kind of infringement or a crime, you get one level of punishment. But if you're a coloured guy, bang! It's about 10 times worse. That is some human being building racial bias into the damn machine.

So my solution to that is we get separate teams in separate countries to build Truth Engines, and they start comparing, because the only way I can think of weeding out human bias is to have separate machines looking at the same data in the same way and asking: why do they get a different result?

Neil Fairbrother

So you might end up with machines checking machines?

Peter Cochrane OBE

Oh, they will. Absolutely no doubt. This is no different to you and me sitting in this room with 20 people and deciding a consensus; it's no better, no worse. The reality is that if you're on a jury, you all listen to the same stuff, but you all come to different conclusions. It's like when I go to a management meeting: if there are 10 people in the meeting, then that's 10 different meetings! It's one of the reasons that you have meeting notes and you agree them as you go, so that there is a consensus: this is what we've discussed, this is what we've decided in that debate. These are soft topics, very difficult topics, not like hard science where you know that black doesn't equal white or you know how fast light travels.

These are very human things where they’re opinionated and they’re fashionable, and so one of the things that is going wrong right now is that people are making moral judgements on actions that were taken in the 1970s. And so we are going to have to start thinking differently, because the technologies are allowing us to do new things. But there are many, many, many, many similarities between the way the AI is doing stuff and the way we’re doing stuff. What excites me is AI is getting the right answer by different mechanisms. I just love that.

Neil Fairbrother

Yes, it's not a carbon-based life form, it's a completely different thing. Not that I'm suggesting that AI equals life, but it is a different way of processing information.

Peter Cochrane OBE

Correct. So years and years ago, I got involved in what was the start of the Internet Watch Foundation, and my team, we were looking at detecting whether it was men or women. You and I do it by masculinity and femininity. We were putting photographs in: this is a man, this is a woman, this is a man, this is a woman, you know, what is this? It says it's a man, right? It's got it right. Brilliant! When we unpicked it, guess what it was doing? It was looking for whiskers, mascara and lipstick!

So it was getting there. It was getting the right answer, but by different mechanisms. And I think that in itself is a very powerful thing to have, because science itself is based on a tremendous amount of scepticism. Just because I've proved something doesn't mean to say it's right.

Neil Fairbrother

Yes. Now is this something that people really understand? Because scientists unfortunately use the word theory. Yes. And I think that that is often misunderstood by non-scientists as being, well, it’s just an idea. Yes. So are scientists somewhat culpable themselves?

Peter Cochrane OBE

Without a doubt. Without a shadow of a doubt. I very often lament the inability of my own profession to communicate clearly and to take the time to communicate clearly what all this stuff means. So what people don’t understand about science, and it’s pertinent to the Truth Engine, is science is not like religion. It doesn’t give you an absolute answer. It gives you the best model of the universe, and when I say universe, I mean everything that we study, the best model possible with the measurement capability, the computing capabilities and the modelling capabilities that we have at this time.

So let's take a very simple example. The earth is flat. No, it's round. No, it's flat actually. It's a sphere. No, actually it's an oblate spheroid. No, actually it's a dynamic oblate spheroid. It's not purely spherical; it's a little bit lumpy. But as the moon and the sun move, it stretches and it compresses and it changes shape. That's the best model we've got right now. And it's a damn good model, but it's taken several thousand years to get here. Right now, for example, people out there think we understand chemistry. No, we've got a first order approximation to chemistry. We will never, ever fully understand chemistry until we get quantum computers.

Neil Fairbrother

So if you are a child and you are online and you see some information, how do you verify it, given that the child by definition is innocent, is naïve, because they just don't have the worldly experience, by definition of being a child? How do they decide? How can they decide whether that piece of information is in fact a fact, or whether it is an untruth, false news, fake news?

Peter Cochrane OBE

I don't think they can, and I think we need to get the machines to do it. But I think there is a deeper sociological problem, and it's a very worrying one. We will be okay soon, but we're not okay now. And when I say that, what I mean is, we're not okay because we've got technophobes and we've got technophiles, and the technophobes unfortunately are the parents. They don't engage in this, by and large. The kids are over there in the bedroom, and these are the technophiles, and ne'er shall the two meet, it would appear.

So this is a step change from the society I was brought up in where child and parent were using the same library, the same information sources, listening to the same things, seeing the same things and discussing them. Now we have a very interesting situation where people do not meet for a meal. There’s no discussion. The social aspects of many families are broken down.

You now see people drinking and eating as they walk, continually. They don't sit down, and there's no discussion time. There's no socializing of things. And when the child sees something, they don't go and tell the parents, they don't say "I don't understand this", because very often the child doesn't understand it and the parents don't even know it exists. So there's a big gulf there, and this is kind of a worry. I think in about 20 years' time this will have sorted itself out, because hopefully the parents and the children will be at the same table in terms of knowledge.

Neil Fairbrother

Yes, because as teenagers age and become parents themselves… teenagers today who are the technophiles as you described them, will presumably still be technophiles as they mature and become adults and have children themselves. And so this cycle will work its way through.

Peter Cochrane OBE

I would like to hope that the 12, 13, 14, 15, 16-year-old boys in particular who are watching pornography at night, when they become men and become fathers, will have the good sense to try and institute some kind of policing: not necessarily to totally prevent their children seeing it, but to help them understand it in the full context. Because the real danger is that all the pornography deals with is the mechanics. It doesn't deal with the more subtle aspects. This can lead to some pretty bad outcomes.

With my own children, I had the wonderful experience of having two girls, followed by a gap, and then two boys, and the girls actually policed the boys. And then my oldest son policed the youngest son. And I just stood back and let it roll to see what would happen, and they've come out as reasonable human beings. But I have no doubt in my mind that it could have gone badly wrong.

Neil Fairbrother

Do people care, though? Have people become so used to the algorithm-based bubbles they live in that they simply don't want to shift their thinking? They're so comfortable in that bubble that to peer out of it, to peer over the edge of it and see what's on the outside, to see a different perspective, actually becomes quite a frightening thing, because it then challenges their own self-perception and their perception of the world. So do they really care?

Peter Cochrane OBE

I call it the social network rabbit hole, and my wife's got a wonderful technique. She makes friends with quite a lot of people she doesn't particularly like or agree with, because what she wants to see is the wider view. What naturally happens is you go onto a social network, you put things up, and you get, metaphorically, beaten up for it. But then people come to your defence, and you get this cohort, and you find yourself sliding down this very agreeable rabbit hole where you all think the same, you all say the same, and it's very, very comfortable; to break out of that takes a bit of energy and it's uncomfortable.

And so we should all be aware of the wider aspects of society. I see the people sleeping rough. I hear about families living in poverty. I'm not happy about living in any society that doesn't care.

Neil Fairbrother

One other pathway, one other option, is to do what Sri Lanka did a week or two ago in the wake of the appalling atrocity there, when they simply turned social media off. Why don't we just do that?

Peter Cochrane OBE

Because that would destroy the good side. With all technology, it's like the scales of justice: put the bad stuff on one pan and put the good stuff on the other pan, and you'll find that the good stuff is hugely beneficial to society. And in actual fact, when you give people a freedom, it's very difficult to take it away.

Now, it might be warranted in the very short term to do something like that, but the real failing in Sri Lanka, as I understand it, was an administrative one: the intelligence, which was gathered off social media, was there. Every single terrorist attack or major event has precursors. Every time you look at a major incident retrospectively, you see the precursors to the event, if only you know where to look.

Neil Fairbrother

And this is where your Truth Engine comes in? Assuming it could work quickly enough, because it’s the latency of the analysis which is the issue here.

Peter Cochrane OBE

Yes.

Neil Fairbrother

So, you have hope. You don’t think that our children face this nightmarish dystopian future of polarized uncompromising views or a dark age of ignorance and violence online if not offline. You think there is really a brighter future?

Peter Cochrane OBE

I've seen worse times. I've seen much worse times and we've pulled through. I think the thing that worries me is not the technology, it's always the people. A good example now would be that we no longer have debates. We have shouting matches. We no longer have polite discourse. We have politicians being very rude to each other, setting a bad example. It can actually be quite risky to be in a room full of people and say something that none of them agree with. They don't enter into a discussion with you about it; they start attacking you.

And in the same way that people will say things on the telephone that they won't say to your face – my God, when you go on social media… I've had quite a bit of abuse. Years ago, I was doing a regular blog for an organization in the UK; it was bought out by an American organization and went global. And the first blog I put up got a response of "Are you out of your effing mind?" And my rejoinder was something along the lines of: "The great thing about having a good education and lots of experience in life is that it teaches you to be considered and polite, not to jump to conclusions, but to listen and to read and to look at other people's points of view and ask the question: might you just be wrong?"

Neil Fairbrother

We're going to have to wrap it up here. Thank you so much for your time. It's been fascinating, and I'll watch your Truth Engine project with intense interest.

Peter Cochrane OBE

My pleasure. Thank you.
