When you think about Singularity, the blending of Virtual Reality and Artificial Intelligence is probably one of its most obvious shapes. But if you dig even just a bit deeper, there’s nothing obvious at all!
Artificial Intelligence can be a buzzword, but what we do know is that it can extend our realities in ways we can’t even imagine, especially when it comes to the social, ethical, and human consequences.
Oh, wait! There’s someone who’s really trying to cover it: MKAI, a young UK-based organization that, motivated by the Open Ethics manifesto, advocates for transparency, inclusion, ethics, and accountability.
Richard Foster-Fletcher is the MKAI chair, and the journey with him is going to take us to a new level!
Oli: As all of our travel mates, listeners, and watchers know, at the end of each episode we ask our guests to give a definition of Singularity, what it means for them. In this case we’re probably touching one of the core visions of Singularity: the convergence between AI and VR. So Richard, what does Singularity actually mean to you?
Richard: Well, it’s interesting that if you ask “how do we get to the singularity?”, the person who seems to be the one who coined the expression doesn’t have an awful lot of answers. To me, I mean, it’s a way of augmenting, but there’s a lot missing from that idea that I think we haven’t really got our heads around. For example, if we look at the roads, we have this idea that we can mix autonomous vehicles with driven vehicles, connected with unconnected vehicles, and it just isn’t like that. If that were possible, then we’d be seeing horse-drawn carriages next to 5 Series BMWs on the motorways, and we don’t, because they’re not compatible. So fundamentally there is a non-compatibility between human and machine that we have to overcome. And yes, you know, machines work in nanoseconds, which is incredible. A nanosecond to a second is like a second to 32 years. So it really is phenomenally different. But when we use our laptops today they slow us down; we’re faster than our laptops. So it’s just not that simple that we suddenly become greater and more from a singularity between us and machines. But there’s incredible problem-solving opportunity in the move towards that, in a society that I think is pretty broken right now.
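Richard’s nanosecond comparison checks out arithmetically; here is a quick sketch (ours, not from the conversation) that redoes the sum:

```python
# One nanosecond is to one second as one second is to roughly 32 years:
# both ratios are a factor of a billion.
NS_PER_SECOND = 1_000_000_000            # 1 s = 10^9 ns
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25  # average (Julian) year

years = NS_PER_SECOND / SECONDS_PER_YEAR
print(round(years, 1))  # ~31.7, i.e. "about 32 years"
```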
Oli: Would you say it’s always been a little broken or do you think it’s particularly broken right now?
Richard: It’s funny, in MKAI sometimes we have meetings and I say, “remember, we’ve always had problems”. A thousand years ago, we had problems. A thousand years from now, we’ll have problems, probably much worse than we have now. All we’re doing is keeping ourselves busy working on the current problems, which to us, of course, seem huge: diversity and inclusion, access to opportunity, access to finance, the banked and the unbanked, the differences in economics, the capitalism that favors the few and not the many. These are all significant problems, but at the heart of it, the only real problem is that we don’t yet know how to live on this planet properly. We’ve just got to get up to a C+, right? We’re not an A* species. We get it.
I mean, well, let’s ask: are humans good or bad? It’s like, but you’re a human, you’re biased, don’t mark your own homework; ask the whales or the elephants. Elephants are almost extinct. What? How is it possible that we’re okay with that? In the long term, you know, the direction is that we need to manage everything; all the soil in the world will have live IoT sensors tracking, you know, its desertification or the damage to it, so that we can manage our whole planet. But we’re nowhere near that; it’s generations away. So in the shorter term this is just a mindset shift, all of us trying to get ourselves up to a C+ in society rather than a D-, or even an F in some countries, where we’re just beating ourselves to death.
Oli: So, you reckon that we are kind of as bad as we’ve ever been and as good as we’ve ever been, but just with the technology changing, is that right?
Richard: To answer your question: we’ve always had problems, we’ll always have problems, they’ll just change. So what we say in MKAI is: remember, it’s about people. And one incredible thing about modern technology is this: we get to find the people all around the world who do share these concerns and work towards better ways and better methods. They’re not in my estate, they’re not even in my local town a lot of the time, but there are thousands and thousands of people now that we can connect with on the internet. Yes, and also now through VR, which I think is phenomenal; it’s one of our greatest opportunities, I think, to learn and grow together.
Oli: Technologies are sort of an amplifier of outcomes. So what do you think we should be scared about in the future? What do you see as the kind of threats, and, on the other hand, maybe the opportunities, in the scenario we’re facing in the immediate decades?
Richard: We’ve definitely got a hump to get over when you talk about threats. Do you mean technological threats in particular, or the wider societal and environmental threats?
Oli: Well, I mean, I think it’s all tied together. I’m going to leave it to you to figure out what you want to put in there. But in general, just threats that technology can have something to do with, either by being the problem or by solving the problem, I guess.
Richard: There’s this huge possibility, particularly from artificial intelligence. One of the reasons I specialize in this is that on a long enough timeline we know that this kind of augmented intelligence becomes key to our exploration; if we’re one thing, we’re incredible exploring apes that will go anywhere and press the button and see what it does, and that’s in our nature, I believe. So technology allows us to nourish that. On one hand in MKAI, we talk all the time about explainability, openness, transparency, you know: collecting data, storing data, using data, selling data, sharing data. How are you doing that in ways that build responsible tech? And that’s huge for us. But that’s not the end of the story when it comes to AI. If you think about the wider world, there’s a possibility that AI is part of the solutions to the problems we’ve been describing, over the next ten years, problems which are climate-related, pandemic-related, inclusion-related. As you say, they’re all connected; they are all examples of how we’re not working as a society. Those problems are beyond our cognitive ability, and in artificial intelligence we have tools that may be beyond our cognitive evolution. What’s incredible about AI is that you can just put in the data of the planets in our solar system and the … data, and it will predict the orbits. That fascinates me, because it doesn’t know anything about gravity. We can’t do that without understanding; for AI it’s just patterns. We can’t do patterns; we have to understand gravity, and now we’re wrestling with neutrinos and particles and dark matter, because we have to know those things before we can progress. AI doesn’t.
So you can get past this kind of “we must have glass-box AI” and ask: what if we’re okay with black-box AI, if it’s able to examine and find ideas for us that are beyond our scientific and cognitive evolution? Because we have to know what dark matter is to understand the universe, and AI doesn’t…
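A minimal sketch of that point in Python (the planetary numbers are real; the framing is ours): fit a straight line to the logs of the planets’ distances and orbital periods, and the 1.5 exponent of Kepler’s third law falls out of the data alone, with no gravity supplied.

```python
import math

# Real solar-system data: semi-major axis (AU) and orbital period (years).
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

xs = [math.log(a) for a, t in planets.values()]
ys = [math.log(t) for a, t in planets.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# Ordinary least squares on log-log data: the fitted slope is the exponent k
# in T = C * a**k, learned purely from the numbers, no physics supplied.
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
print(round(slope, 3))  # ~1.5: Kepler's third law, recovered as a pattern
```

The fit "knows" nothing about gravity, yet it can predict the period of an unseen planet from its distance, which is exactly the pattern-without-understanding behavior Richard describes.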
Oli: I mean, if we understood the whole world through AI because it was explained to us in the future on one hand that would mean we would have a lot more knowledge. But it would also mean that we have a more superficial understanding of things because we understand the patterns behind things, but not really why they’re there. Does that represent any type of worry or something you would think about?
Richard: I think about that a lot. It shows up in corporations: a lot of leaders talk about being data-driven, about data-driven decision-making. But I think the more you do that, the more you encourage a culture of trusting the data, and your greatest ideas and innovations aren’t in the data.
Kavya: Richard, thank you for working together with XRSI to explore this. What I start to hear from your conversation is convergence. The convergence of Artificial Intelligence and Virtual Reality seems quite obvious, but the domain is so wide that everyone has their own point of view, their own vision. So what’s your point of view on this crucial aspect of where the AI and VR intersections are happening? Having done so much research fairly recently for the XRSI Privacy Framework, I’m really curious, for the sake of our audience, because I’ve had the honor of reviewing all the research. Please share with us the key aspects that you and the MKAI group were able to explore and find in this convergence.
Richard: Thank you for the opportunity to work on that with your incredible organization. You really did catalyze some amazing things that we are absolutely carrying on with, even beyond the 1.1 framework that we’re complementing from your great work. I think there were two revelations for me that came out of the group discussions and the work that I did. One is that this is a nascent industry, and I hope it doesn’t take a decade to learn the lessons of other industries; I’ll come to that in just a second. The other is that the opportunity for exploitation in Extended Reality and spatial computing is phenomenal, and I think that’s symptomatic of the recording devices, the audio sensors and microphones and data collection tools, that are becoming normalized en masse. I don’t need to preach to the choir, but the data collection from audio devices is phenomenal, and it’s not only the everyday data, in terms of preferences and behaviors and so on, that social media giants think they can make even more money out of by advertising and predicting behaviors; it’s actually a backdoor into your own personal data. I don’t know if that’s clear yet to the general public, but your movements, your changes over time, they are very, very clear once you put them into these machine-learning models, I mean in terms of health and wellness and trends, and it’s not clear to people that that data is being collected, and it’s very much available. So whether you’re sitting in your car, whether you’re putting on a headset or glasses or, in the future, Apple headphones that have cameras, or whatever, so much has got to change. You might trust Apple’s walled gardens, you might not; you certainly have to be able to afford them in the first place, even to have the luxury of deciding whether you trust them. Then you get into your Tesla and it’s watching you. Well, let’s take a better example: you get into your work van, your fleet van, and you work for a big corporate.
I don’t know if they’re going to draw those lines well enough: what is cool and what isn’t cool? You know, you pick up your phone, that’s logged once; pick it up twice, three times, you’re automatically fired. Are we okay with that? What about the music that’s being listened to? I read the other day that “American Idiot” by Green Day is the most likely song for you to crash to. So what does that mean if your fleet drivers listen to Green Day? For everything in this area, and everything in life, it’s like, you know, just forget binary. It doesn’t exist; there’s no good/bad, right/wrong, black/white. It doesn’t exist. You have to draw a line on things. As a CEO, you have to draw a line on data and your employees and your customers and what you’re okay with and what you’re not, and you’ve got to put a lot of work into figuring that out. I use the example of veganism: there’s no such thing as veganism if you want to eat something that was alive; where you decide to draw that line is up to you. Are you okay with cattle? Are you okay with chicken? Are you okay with eggs? Are you okay with collagen derived from fish scales? You have to make your mind up.
And so that was point two, if you’ll permit me: this kind of learning from other industries. In our pre-conversation I was saying, you know, I look at other industries that fail because they don’t account for the whole market. I’ve been exploring electric vehicles on my own podcast for a number of weeks now, and guess who can’t come on the show, because he’s just joined a company that isn’t going to permit it: someone who was going to argue against the 2030 electric vehicle net-zero ruling in the UK and other places, because he said it’s wrong and it’s taking away the innovation of the market. And I said to him, are you kidding me? Innovation in automotive? The first thing you did was lobby to try and get 2030 pushed back to 2050, all of you, with millions spent on it. And then what did Volkswagen do, what was their innovation? They lied. Just lied about their vehicles. And you’re telling me… Now, the problem isn’t Volkswagen or the automotive industry; it’s that the market didn’t have to account for environmental degradation, for greenhouse-gas emissions. It was proven back in 1975 that burning fossil fuels produces greenhouse gases, which damage the planet and the ozone layer. We’ve known that for 50 years. Unless you make a market account for it, it’s not going to innovate. Just like chemical companies used to pour their excess chemicals into rivers and streams: you have to make them stop doing it, and then they can innovate in a holistic sense, for all the factors in the market, including the ones that, for some unknown reason, we’ve ignored for decades: where we live and what we drink and what we eat and what we breathe, which is nuts, insane. But then we love the cheap flights and cheap stuff so much that, it seems, we would sacrifice anything.
So, let’s learn from that and let’s have VR that’s interoperable, like the telco industry has finally got to now, where they’re unlocking all phones so you can put any SIM card in, so that a phone can be reused and have a next life after this one. The same goes for every headset: it should port across and work with different brands. Otherwise it’s going to be data monopolies and device monopolies, and people are going to get locked into one, and then it’s going to be owned by Microsoft or Google or whoever and you’re not going to get out, and then interoperability is going to be forced upon them. Well, just save yourself the time and do it now, because it’s going to be forced upon you anyway.
Oli: A typical example of the intersection between AI and VR is the travel industry. VR can give travelers a glimpse of what staying at a resort might be like or what exploring a destination will entail, and AI can help them make decisions about their trips and might take care of the bookings or reservations, for example. At this point, what’s the place of humans in this? Where are they? What’s the future for humans if AI can kind of do everything?
Richard: You have these incredible opportunities to make life much more convenient. We could get into a privacy versus security versus convenience debate all day, but you just have to go onto DuckDuckGo and then go onto Google, and you see that one is a search engine and one is a concierge. So, you know, we have to understand what things are. Google is not a search engine; it’s a means of trying to predict your next direction and make it easier for you. And that’s not inherently bad, I mean, that’s the kind of “we’ll make you happier, you’ll make us richer” argument. There are many people trying to create new social networks, but I think they’re missing the point. The social networks we have that are worth billions, Facebook, $86.7 billion, 98% of that from advertisements, they didn’t do that because they made a great social network. They did that because they realized that with the platform they had, they could systematically exploit it to make human biographies and monetize them. So if you want to create a social media platform, what are you creating? Just somewhere that we can share ideas and memories and messages? There’s no money in that. Hello. If you want to make a social network, look at Clubhouse: free to come in, use the power of celebrities, harvest all your data, store it for 72 hours, sell it to Facebook. So that’s one model that we have now. And we’re going to break that, we’re going to break that with decentralization, it’s going to come, and then all the big players are going to copycat that, as they copycat everything else. And then hopefully they’ll copycat something good, so at least that will probably help us. So that’s part of it: it’s phenomenal for convenience, right? You might hate Amazon Alexa, but if you say “book me a weekend away in the Cotswolds” and it’s all just done and you turn up and it’s wonderful, wow! You know, that’s incredible. Though you’ve got to give them a lot of data for them to be able to do that.
So then, you talked about the role of humans. People ask “will AI replace jobs?”, and they start talking before they give me a definition of a job. Before you have that conversation, you’ve got to say what you mean by it, and what I mean is this: if you look in the dictionary, there are two definitions of a job. One is “gainful employment”, and that’s where you have protection and colleagues and redundancy packages; you have HR and support and maternity leave and all these things, right, that we think are good about jobs. So that’s one. But the other is “a job is where you are paid for a task”; think TaskRabbit or Uber Eats. So which are you referring to? Are you saying that it’s going to take away jobs where we have pensions and community and management schemes? Yeah, probably, yeah. Is it going to take away Uber Eats? No. But do we want millions and millions of people to be managed by algorithms, so much so that they never even have a human boss, they just have an algorithm managing them, an AI managing them? That most definitely is coming if we’re not careful. Then there’s this whole idea that we should live off universal basic income, and if I understood what “universal” or “basic” meant, I’d have a head start on that one, but it doesn’t make sense to me instinctively. Luckily we have a whole green revolution going on. We’ve got to do everything that we do now and do it with zero emissions. We’ve got to pour concrete without emissions. We’ve got to race cars without emissions. We’ve got to fly around the world without emissions. So we need to get busy. That’s millions of jobs, that’s trillions in investment, and a phenomenal human challenge that we can accomplish together and feel brilliant about doing. Let’s get on with that right now. We’ve got plenty of jobs for people who want to do that, in strategy and engineering and manual labor and whatever else needs to be done.
Then we can talk about this future of AI replacing jobs and what it means for creativity. But when people tell me that, when they say AI is going to free up our time so we can be more creative, I say back: do you really believe that the only thing holding us back from being creative is that we haven’t got enough time?
Kavya: I feel like there is another crucial aspect when it comes to AI: the quality of the input. At the beginning of the research on machine learning, there was this hope that processing data through, let me say, an “artificial brain” would remove these human biases on various topics, such as gender or ethnicity, but we know that’s not true, or not true yet. So how do we deal with these kinds of issues in the current avenues that we are exploring?
Richard: The big bias question. Obviously there’s no answer for how you overcome 145 innate human cognitive biases. They’re not going away any time soon, and they are a superpower. Ask yourself: why did we survive and no other hominids? What happened to the Neanderthals, or any of the others? Well, the reason we’re here, hopefully, is that we’re social in a way they never were, and we can have much larger groups than they ever could, and that comes from being predictable. Great. You know, I get on really well with my neighbors, but if they had no idea what I was going to do when I stepped out of the house, then we wouldn’t. We all sign up to this predictability, all of us, and it’s great, and it means that we all get along en masse. The downside, of course, is that it can be exploited. People admire Cambridge Analytica for their ML; well, not me. I think it’s very simple what they did: they just put some basic machine learning against innate human biases, and that’s not hard. Like, we all have the in-group/out-group bias: this person is more like me, I like them more. We all have that; no kudos for exploiting it. Do something intelligent and hard; that wasn’t hard. So it’s great when AI can go into a situation like a court and try to remove some of that bias. And it can. For example, the gambler’s fallacy shows up all the time, in sports, in court rulings, in exam marking; you’ll see this all the time. An exam marker will give three A’s in a row and then go “am I being too generous?” and mark the next one down, just because of the last three. It happens in baseball, it happens in court. And that’s before we get to all the other effects: judges will be more lenient on women if they’ve got a daughter, and harsher on somebody if they’re hungry. You just can’t help it, because you don’t know it.
So AI can help us to identify more about ourselves and help us understand our mood, our work. We schedule things just based on availability: you free at nine? Me too, let’s do it. But hold up: is 9:00 a good time? Is it a creative time? What are you trying to achieve? So we’ve got to get much more into this kind of what’s right for us, and when. But we’ve also got to remember the power of these systems. One company went ahead and used AI to help their recruitment, and it effectively told them they should only hire men, because of how few women there were in the organization. That’s what happened. And then they spotted it and they changed it; well, at least they spotted it and changed it. That’s something: AI shone a spotlight on a problem there, right? Amplified that problem. And how did they deal with it? They removed the names of the women from the CVs. But then you’re fundamentally forgetting what AI is: AI is about finding patterns. All the model did then was look for who the women were from the societies they were involved in, the sports they did, the classes they took, and it came to the same result, because it figured out who the women were and then biased against them. And we’ve got 14% women in AI. It’s nuts. It’s insane. Whatever women want to do is up to them, but it does matter to me that 14% means we get a load of bias that we don’t need. And I speak to so many startup founders who have good intentions and want to standardize human processes with machines, and the kind of thing they say to me is: “we’ve got an app now, and it’ll go on someone’s computer at work and it will monitor them, and if we think their mood is going down, it’ll start putting funny videos up on the screen”. Okay? And then what? “If that doesn’t work, then we’ll send a message to HR and alert them that this person is not happy.” I said to them, please don’t make that; it’s a terrible application. But they’ve got good intentions.
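The CV story is a textbook case of proxy leakage: delete the protected attribute and a pattern-finder reconstructs it from correlated features. A minimal sketch with invented data (the “sports club” feature and the 90/10 split are hypothetical, chosen only to make the mechanism visible):

```python
import random

random.seed(0)

# Hypothetical CV records: the protected attribute ("gender") has been
# scrubbed, but a correlated feature survives, e.g. which sports club
# appears on the CV. Here 90% of women list netball and only 10% of men do.
def make_cv(gender):
    lists_netball = random.random() < (0.9 if gender == "F" else 0.1)
    club = "netball" if lists_netball else "football"
    return {"club": club}, gender

data = [make_cv(random.choice("MF")) for _ in range(1000)]

# A one-rule "model" that never sees gender: predict F whenever the
# proxy feature fires. It recovers the scrubbed attribute anyway.
correct = sum((cv["club"] == "netball") == (g == "F") for cv, g in data)
accuracy = correct / len(data)
print(accuracy)  # close to 0.9: the deleted attribute leaks back in
```

A real recruitment model does the same thing with hundreds of weaker proxies at once, which is why deleting names (or any single column) does not de-bias the data.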
They think this is what we need. So that’s why, I mean, that’s what it’s about: how do we get multi-diverse, multicultural, multi-regional people of all ages to come together and look at a problem? A few years from now, I’m hopeful that we will be the kind of organization where a university, a government, even a corporate will say, “we’ve got a problem, can we bring it to you? Can you run it through your four-week process and, at the end of it, shine a light on it in a way we hadn’t seen before?” Crowdsourcing, but run in this very safe environment of trust as well. So that’s my dream, I guess.
Oli: One of the most amazing aspects of machine learning and AI is the potential in drawing scenarios, right? So one of the biggest challenges that you’ve already mentioned is climate change and climate crisis. So what role does AI play or should play in countering climate change or mitigating its effects? Because I don’t think we’re going to actually be able to eliminate it completely, right? So how can we at least limit the effects of it with AI, what do you kind of foresee on that? What do you hope would happen?
Richard: I think there is definitely a place for AI as part of the semantic web of data that you would get from climate data, Earth-observation data, or, scaling that down, just within a city, if you think about smart cities. If I take a half step back on that journey: where it begins is what we’ve been talking about, having an open and assured culture of data that comes together. We’re so far away from that, really, when it comes to governments and so on, but it’s feasible, and it was proven to a degree by banking after the 2008 crisis: they developed open banking, which was impressive. Now we’re seeing the same thing in open energy, because we have to be able to play together. If you think about the grid now, a smart grid needs to be a connector of potentially millions of generation points, solar panels, wind turbines, even people’s homes; the energy needs to flow back and forward, be stored in batteries or in cars, and then be used, so that we balance the grid and can go sustainable. And that can only happen when those multi-industry players work together and make the data useful and interoperable, and you have that kind of semantic web where it can be accessed. So that’s happening: in climate, in smart cities, in energy, in banking. So then what? The role of AI there is to draw out those insights and predictions, as we’ve been alluding to. There are two things AI does, and we don’t talk about the latter one so much. The latter one is painting scenarios: you can keep cleaning the oceans all day, but if you keep putting more plastic in, it doesn’t really matter. So how do you get people to see that? I’m a member of the UNFCCC working group on communicating climate change at the moment, which is fascinating.
And I raised a question in there (I didn’t do a lot of talking, as you can imagine, with the people in that room), but I said, look, you know, if you don’t love yourself, are you really going to love your environment? I remember sitting outside Burger King once, trying their new vegan burger to see what it was like, and there was this guy in front of me, a young guy, twenty-something, in his white Bentley with his meat Whopper, and he had the engine running the whole time. He spat his gum out of the window, he left the packaging on the floor when he’d finished, and he drove off, and I thought, “that’s not a man who loves himself!” And if he’s not going to love himself, why on earth would he love this car park, or this world? So what’s the role of AI? If Cambridge Analytica worked out how to use nudges to change our political voting, what could it do with nudges to make us kinder to ourselves, more gentle with ourselves? We see that in films all the time, in all the messaging; all the Marvel stuff is always working on the narratives around us. They know what they’re doing: they’re trying to create new narratives in society, and AI can play a role in that, with apps that just work on that. Then there’s the more macro stuff around governments’ decision-making and nudge behavioral theories and that kind of thing. So that’s to be explored, and it’s exciting; there’s more in there. And then there’s the everyday stuff that I was alluding to, that we need to get a measure of, and there’s so much just to sort out, right? I mean, there are 42 councils in the UK and, because of that, 42 different recycling schemes, and the way that you recycle plastic is different in every single one of them. And I buy my strawberries and I hold the plastic up in front of me and I just don’t know what I’m supposed to do with it. Am I supposed to recycle it?
If I get it wrong, it’s going to contaminate the recycling and the whole batch gets sent to landfill; or, if I just put it in landfill, I’ve missed the opportunity to recycle it. I can’t answer basic questions like that. So that’s the opportunity for machine learning: show the plastic to your phone and have it tell you what kind it is, what to do with it, how to give it a second life or an end of life. Just measurement, across all of our waste, all of our processes. Then we have a smart city where we’re truly measuring, right here where we live, how we safeguard and protect it.
Kavya: It seems to me that MKAI was literally born to tackle these sorts of challenges, challenges where there are convergences, and, you know, you even talk about the political things that can change, the way we think about things that can change. So it all keeps coming back to this convergence, the convergence of various different domains with Artificial Intelligence. And so, again, I’m going to go back to this new edition of the XRSI framework that has recently been released. I really think that we need to focus on this convergence and truly understand how we exchange knowledge across all these domains to improve data mining and processing through, let’s say, these devices. But then there comes another question about identity, and you talked about gender a little. For me, I struggle with: hey, why are we still making a gender selection as soon as we arrive in a virtual world? Why don’t we have neutral avatars by default? I’m trying to educate people about that. Then, when we brought MKAI into the mix, it became clear that the gender avatar is not the only bias; there are all these other data points that are considered part of what makes a person. So I don’t know where I’m going with this, but mostly: are we just reduced to a bunch of data points as a person? Those are my concerns, at least. What do you think, Richard? Is this the right way to approach our humanity as we explore these convergences?
Richard: I think we have these moments in our modern history where we can do more than just have a technological change: we can reset, and the opportunities present themselves, like here, now. We step into a whole new world and we pretend it’s like the old world. Think about Elon Musk’s view of going to Mars and ask yourself, what would that look like if it started at zero again? Would it have government? Would it have taxes, would it have money, would it have jobs? These are all just stories. Some stories are great: “we will drive on the left-hand side of the road in the UK” is great, I’m really glad we agree on that, great story. But many stories aren’t so effective; some around religion aren’t quite so effective, for example, but let’s not get into that. And one of the less effective stories, of course, I believe, is capitalism as it is now. I think it’s gone rotten and it’s causing endless amounts of damage. Like, if you want to be a billionaire, don’t start a company, go and see a therapist. So I think there are many industries like that, and automotive, which I’ve mentioned so many times, has been on my mind. The National Grid in the UK says, “okay, how are we going to charge 36 million electric vehicles in 2050?” Hold up: why has everyone just swapped their ICE vehicle, their internal-combustion-engine vehicle, for an electric vehicle? What about sharing? What about different ways of traveling? What about these holographic technologies? Why on Earth would we just swap one set of keys for the other? Take this opportunity to rethink why and how you want to travel. That’s what we have here, right? What do we want to be? Why do we want to have gender? I don’t know, and I’m not saying I know whether it’s a useful thing or not a useful thing. I try to stay out of that, really; I don’t have a strong opinion.
So I’m not going to suggest that my opinion on it is worth anything, but we have an incredible opportunity to rethink society in this world. Let’s take it.
Kavya: So in terms of privacy, because, you know, I’m thinking, within the XRSI framework, the way we are thinking about privacy as we explore this cross-section is all about expectation. Like, you know, back in the day, or in different countries, there are certain expectations of privacy: there is the women’s room and the men’s room, separate areas for prayers, and, you know, there is a separation, and all that societal fabric needs to be restructured when it comes to privacy in the digital landscape, especially in XR. So for example, if you want to scan my house, if you’re a visitor, perhaps I’ll give you permission to scan my living room, but not my bedroom, or not some areas that I could set off limits. So I’m thinking about, you know, this kind of data collection. It’s okay, take my data, but I think we probably need to redefine privacy or something. And that’s kind of what I wanted to ask you: after all this exploration, what has been your take on privacy in terms of this, you know, convergence of Artificial Intelligence and XR?
Richard: Just a wonderful question. You know, there is no way to just have privacy; perhaps, you know, you have to sacrifice security or convenience if you want it. We know that triangle will always exist when it comes to technology, but we can have more ownership of it, I think. So where I think the opportunity lies, and we’ve seen it in the press, is NFTs, Non-Fungible Tokens, the transactions. Now, there are two kinds of data, aren’t there? There’s data that’s produced from something I’ve clicked on, looked at, gestured at, whatever, and then, given enough computing power, that can be turned into analytics and that can be monetized. We’ve seen that many times. And by the way, that’s why you can’t just create a new Facebook. It’s different, because how are you going to afford the incredible number of servers that Facebook has? You need billions to start another Facebook, to buy the data farms. So we know that, but that’s because it’s centralized, isn’t it? And as data is decentralized, it gives people more opportunity through edge devices, but it somehow still needs to be as convenient and useful. And one thing all this data harvesting has done is make it useful, because they’ve been able to draw some very useful analytics and predictions out of it. So let’s come back to NFTs for a second. The content that we’re producing right now is going to be distributed for free. That’s just a story. In China, you usually pay for podcasts. But in the US and Europe and so on, we believe that this information should be free, articles should be free. Even music and film are converging on that same idea, that we shouldn’t have to pay for them. A friend of mine does professional introductions; the problem is people think that intros should be free. So how do we monetize our data? How do we monetize our content? I think the answer is that we ourselves have to become the NFT.
We need a human firewall around us, which, as you say, allows us to interact with all this, to say yes or to say no. If you say “no, no, no, no, no,” well, that’s fine, but then you don’t get access to play. You don’t get into the fun stuff; you don’t get the deals, you don’t get the cheaper insurance or whatever you could have if you contract out your data. So we can build a healthy relationship with this, and we will. When you walk into a mall or a superstore right now, you know the CCTV is filming you, and you know you’re not going to appear on the internet. You just know it. I’m not saying it never happens, but it’s one in a million, whatever. Here, though, we don’t have that; we just don’t know what’s going on. You see a billboard with Coca-Cola and you say, “Ah, they’re trying to sell me a Coca-Cola. I’m okay with that. We understand it.” But in an altered reality, we just don’t know. Why is Clubhouse free? I don’t know why Clubhouse is free, or what’s going to happen.
Oli: I wanted to ask you to give us, in just a few words, your ideal vision: how would you sum up the role that what we call AI today should play in the future, if we ever manage to create a real, true AI, something that can think on a very complex level?
Richard: Wonderful question. I think we get very confused about what Artificial Intelligence means. To me, it means you see something on a computer and go, “Wow, I didn’t know you could do that.” What we called AI 20 years ago, we don’t call AI anymore. Of course a computer can do that; optical character recognition used to be AI, and it isn’t anymore. So it’s a moving target, and it’s just a phrase. We haven’t talked about paperclips and Nick Bostrom, which is probably good. Nick Bostrom, the head of the Future of Humanity Institute at Oxford University, suggested that if you tell an AI to create paperclips, it may turn the whole Earth into paperclips. Well, I don’t think so, and I don’t think my toaster is going to be conscious anytime soon. I really think this is just crazy. Let’s just remember where we’re at: the Summit supercomputer from IBM uses megawatts of electricity, while the human brain uses 20 watts, the same as a small light bulb. These are so different. We have a three-dimensional brain that’s processing with billions of neurons firing at pretty much the same time, and then somebody creates a two-dimensional neural network and says, “Look, it’s the same.” It’s not; it’s not the same at all. And so this is our journey. In summary, the outer journey is the easy bit. Everything that we’ve talked about in technology is the easy bit. We can go to Mars, we can colonize the galaxy, we can move planets and terraform them, we can monitor everything on the planet and get it all working exactly at once, and we can live in harmony from an external perspective. But the inner game, the inner journey, that’s the real challenge: to get past all the external stuff and find that higher level of consciousness, that higher level of love, that higher level of peace that is ultimately only ever going to be what our human story is about.