Singularity Watch Spotlight

Singularity Watch S01 E12 | What are big companies doing with YOUR DATA? A new journey with Robert Scoble | part 2

Big tech companies collect more and more data about you, about all of us, and they constantly capture our reality.

You might think that’s just for advertising, right?

Advertising is just one of the many uses of your data, and the most interesting, or freakiest, ones are yet to come. As Robert Scoble told us in part 1, 2021 is probably going to be a transition year, but in 2022 we can expect much bigger news.

Robert is “user number one”: he rode in the first Tesla and was there when Siri and Uber were invented. He is more curious than scared, and he depicts a future world where we might not be able to separate the virtual from the “real” as easily as we do today.

Kavya: I know that you and I don’t care about privacy in the same sense. You care about the impact it will have on humanity, its effects on your children, and how your world will change. I care about how we can stay ahead of the negative consequences. But I would say people do care about privacy once they become aware of it. We saw that happening with the migration from WhatsApp to Signal. I think over time, this is what trust is gonna mean, and why it’s gonna matter.

Robert: We will move to things that are more trustworthy, right? And I like using the word “trust” because even Apple knows the consequences of this product category that’s coming, possibly with dozens of cameras on your face. We must talk about cameras, because on the Oculus Quest there are four black-and-white cameras watching the room around me. Today, that data isn’t being used for much. Tomorrow, let’s say by 2025, you’re gonna have very high-resolution color cameras on your face with a lot of neural network processing. Tesla is another excellent example of that: when my Tesla goes down the street, it sees garbage cans on the road, it sees stop signs, lights, kids, dogs, bicyclists, and so on. That’s all done by computer vision. It shows how much the world of technology has changed in just a few years.
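
For the technically curious: here is a minimal sketch of the kind of object detection Robert describes, using an off-the-shelf pretrained model. The model choice, file name, and confidence threshold are illustrative assumptions; Tesla’s actual vision stack is proprietary and far more elaborate.

```python
# A hedged sketch: detect street objects (people, stop signs, dogs, bicycles)
# in a single camera frame with a pretrained COCO detector from torchvision.
import torch
from torchvision.models import detection
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("street_frame.jpg").convert("RGB")  # hypothetical input frame
with torch.no_grad():
    preds = model([to_tensor(frame)])[0]  # dict with "boxes", "labels", "scores"

for label, score in zip(preds["labels"], preds["scores"]):
    if score > 0.8:  # keep only confident detections
        print(f"COCO class {label.item()} detected, confidence {score:.2f}")
```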

Kavya: This data collection and processing is becoming more accessible and more effective. What’s Tesla going to do with all this data that’s collected by its cars every day around the world? What’s coming for us?

Robert: I own a Tesla Model 3, and there’s a camera inside the car aimed at my face. That’s very freaky. Apple and Tesla know that this technology is extraordinarily freaky to at least a third of the people. Two-thirds are like me: “Fuck it, turn it on, let it work, I don’t care.” I really don’t care that it’s watching me; I don’t care that Siri is sitting there listening to me right now and trying to do stuff. I have three Facebook Portals in my house with cameras and microphones listening to me right now. Amazon Alexa is listening to me right now, so I’ve got four different companies listening to me and studying my private conversations and everything.
So, where am I going with this? With Tesla, you have 7-8 cameras on the car. One looks at the passengers; the rest are aimed outside at the street and the people. Now, it doesn’t yet do anything freaky with that data: when I go by you in the street, it doesn’t say “Kavya is on the street,” it just registers that a human is on the road. And that doesn’t infringe on anything you’re doing. It doesn’t cause problems. I can see that in 20 years it actually will say “Kavya’s on the street.” And that’s the problem: there is a slippery slope with technological advances, and more and more cameras are coming along. NIO, a Chinese car company, is putting 8K cameras in their cars. My Tesla has 4K cameras. So now you start to think, “What does it mean? That I could go by Kavya’s house and see if she’s in the kitchen from the outside?” This takes media law to a whole other level.
I was a journalism student at San José State, and one of the most challenging classes was media law. One of the things we talked a lot about was: What is your right to take a photo? What is your right to write something about somebody? Are you allowed to take a picture through someone’s kitchen window and put it in the newspaper? There’s a gray line. Probably you are, if you’re shooting from the street, you’re not using a super-long telephoto lens, and they didn’t close their drapes. If you’re out on the public road, the rules change: anything is available to you. I could use a long lens, take a picture of you on a public street, and print it in the newspaper without your permission. That’s a First Amendment right that is gonna be there for a long time, and that’s what Tesla’s using. Tesla is saying, “On the public street, we can take as many images as we want, and there’s nothing really anybody can do about that, because it’s the public street and the law is very clear about that.”
Now, the question is what happens when you put 100 times more AI capability than we have in our Teslas behind a camera that’s 8K or 16K. Now, instead of just seeing 100 yards in front of the car, it can see 500 yards, and it can see a lot more detail about you, through windows and things like that. So now we’re really starting to see where the gray area is going, and how we keep these companies, or the governments, from using that data against us somehow. The government is totally into using this technology; we saw that with the insurrection at the Capitol Building. They were using Clearview AI to take photos from the security cameras and then match them against all the social media to figure out, with AI, who was actually in the Capitol Building. So the government is very willing to do that, and the Chinese government is certainly very keen to do that.
So we start talking about the differences: China is ready, Germany is not; there are regional differences and differences between countries in their approaches to this, and America is sort of in the middle. We’re skittish about privacy: on one side, we say, “Go for it, Facebook, study it, take it all.” And on the other side, we’re like, “not so fast.”
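
The matching Robert describes, pairing a security-camera face with photos scraped from social media, boils down to comparing face embeddings. A hedged sketch using the open-source face_recognition library follows; the file names and the 0.6 threshold are illustrative, and Clearview’s actual system is proprietary.

```python
# Sketch of embedding-based face matching: encode faces as vectors, then
# treat a small vector distance as "probably the same person."
import face_recognition  # open-source library, not Clearview's system

known = face_recognition.load_image_file("known_person.jpg")          # hypothetical
frame = face_recognition.load_image_file("security_camera_frame.jpg")  # hypothetical

known_encoding = face_recognition.face_encodings(known)[0]  # 128-d embedding

for encoding in face_recognition.face_encodings(frame):
    # Euclidean distance between embeddings; lower means more similar.
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    if distance < 0.6:  # the library's conventional match threshold
        print(f"Probable match (distance {distance:.2f})")
```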

Kavya: It’s also interesting what happens behind closed doors versus what is shown to the world. You see the congressional hearings, but at the same time, we have all these AI models being built behind closed doors. All these things are coming out, and it should be the government’s business, but they’re quiet. Are they waiting for the next Cambridge Analytica?

Robert: 15 years ago, I walked around Greensboro, North Carolina, with a senator. We were doing this right before an election, and he had a list of addresses to go and visit: only likely Democratic voters who needed to be reminded to go and vote. We didn’t visit Republican houses, and we didn’t visit people who never vote. We called on people more likely to go and vote. That was done with a database 15 years ago, and it was slow to do. Today I could do basically the same thing with Facebook advertising and target you so minutely that I could say, “Hey, I only want to talk to French people in Silicon Valley,” and get them to vote because I know they’ll all vote for me. So, Cambridge Analytica was basically doing that in a smarmy way; they were using academic data to try to get this targeting system to work, and that’s gonna look really quaint soon, I think. The data that we have on each other now, and the systems we’re going to have in 5 or 10 years, will be stunning compared to what we have today. This is the time when humanity has to stop things. I think everybody will go forward and buy a VR machine from Apple or Facebook or Huawei or whoever, because it’s so apparent that this is how computing is gonna be done. But that comes with a whole new level of data about us. Here’s a question for you that I’ve been asking people: how long will it take before you’re wearing a pair of AR glasses? What if Apple came out with a headphone with an AR computer and a LiDAR sensor on it: no screen, not even a camera, just the LiDAR. How long do you think it would take to scan most of your house, with a good percentage of your drawers open, your refrigerator open, your closet door open, your garage?
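
Before the conversation moves on: as a toy sketch, the micro-targeting Robert just described amounts to filtering a voter file down to a narrow audience. The fields and records below are invented for illustration; real ad platforms expose this as targeting options, not raw data access.

```python
# Toy illustration of audience micro-targeting: select only likely voters who
# match a narrow demographic slice. All data here is invented.
voters = [
    {"name": "A", "city": "Palo Alto", "language": "fr", "likely_voter": True},
    {"name": "B", "city": "Fresno", "language": "en", "likely_voter": True},
    {"name": "C", "city": "San Jose", "language": "fr", "likely_voter": False},
]

SILICON_VALLEY = {"Palo Alto", "Mountain View", "San Jose", "Sunnyvale"}

audience = [
    v for v in voters
    if v["city"] in SILICON_VALLEY
    and v["language"] == "fr"
    and v["likely_voter"]
]
print([v["name"] for v in audience])  # -> ['A']
```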

Kavya: First of all, I’m not scanning my house. 

Robert: Yes, you are. I’m wearing four cameras on my face, and they are scanning my house right now. They’re not doing anything with that yet, but wait five years: with these cameras, you’re going to do computer-vision lookups and AR. You’re going to be standing on top of my kids’ bed, which is right over here. This will be AR, not a wholly virtualized world that separates me from the real world. That separation is really a problem for people; it’s one of the resistance points I keep hitting. I don’t like VR because I need to pay attention to the kids playing or doing Zoom. Right now, I’m in VR, and I can’t even hear them having trouble with their Zoom calls or whatever, and that usually keeps me from using VR during the day.

Kavya: So, getting back to your question, how long does it take to scan the whole house?

Robert: The answer is 10 minutes. How? Here’s one way: when you put your headset on, this device that’s coming next year, Siri can tell you that Apple has designed a lot of cool things in your house and in the world, and that everything around you can change if you touch it. And maybe Siri tells you to go around: look underneath your bed, open your closet door, open your refrigerator, go around the world, touch things, lift things up, look at them, and be blown away. Well, you just scanned your whole house in 10 minutes, and you’re going to do it because it’s gonna be cool.
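
To sketch how ten minutes of walking around becomes a 3D model: depth frames plus headset poses can be fused into a single mesh. Below is a minimal, hypothetical example using Open3D’s TSDF integration; the file names, intrinsics, and frame count are assumptions, not Apple’s pipeline.

```python
# Hedged sketch: fuse LiDAR/depth frames into one room mesh with Open3D.
import numpy as np
import open3d as o3d

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01,  # 1 cm voxels
    sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
)
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault
)

num_frames = 600  # e.g., one frame per second for the "10 minutes" Robert cites
for i in range(num_frames):
    color = o3d.io.read_image(f"color_{i:04d}.jpg")  # hypothetical captures
    depth = o3d.io.read_image(f"depth_{i:04d}.png")
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, convert_rgb_to_intensity=False
    )
    pose = np.loadtxt(f"pose_{i:04d}.txt")  # 4x4 camera-to-world headset pose
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))  # expects world-to-camera

mesh = volume.extract_triangle_mesh()  # your house, as a triangle mesh
```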

Kavya: Yeah, I guess I am scanning my house. So, because spatial audio is such a massive component of AR/VR, what do you think then? Is the first AR device from Apple already here? 

Robert: Sort of. In the Apple AirPods Max, there are nine microphones listening to you. So if I’m talking to that thing, it’s gonna know I’m aiming my voice at it, and I’m looking at it, and I’m gesturing at it, right? Spatial audio is already here in the Facebook product; it’s just that the speakers suck in the Facebook product, so you can’t really hear it very well unless you put on a pair of good headphones. Well, Apple’s device will start with the headphones, and it’s gonna beat that, like the headphones I just got. These headphones are stunning. Wired magazine said it’s the best wireless headphone out there, period, and so has pretty much everybody else. I don’t have better headphones, and I’ve been showing them to a lot of people. Nobody’s heard a better headphone than this one. So, the world is about to go experiential; that’s really what’s happening in 2022. Today the experience of being in VR, going to a music concert, playing blackjack or some game with people, is okay, it’s pretty cool. I mean, the visuals are pretty cool, and the audio works, but it’s not stunning, and that’s what’s coming next year. What that means is that Apple will be able to give us a new experience on top of the real world. That’s gonna be pretty stunning, and that experience is going to require a 3D scan of your house.
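
One way a multi-microphone device can tell you’re aiming your voice at it is classic delay-and-sum beamforming. Here is a simplified sketch, assuming a linear mic array and a far-field source; none of this is Apple’s actual algorithm.

```python
# Simplified delay-and-sum beamforming for a linear microphone array.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 48_000    # Hz

def delay_and_sum(signals, mic_positions, angle):
    """Steer the array toward `angle` (radians from the array axis).

    signals: (num_mics, num_samples) array; mic_positions: meters along the axis.
    """
    out = np.zeros(signals.shape[1])
    for sig, pos in zip(signals, mic_positions):
        # Arrival delay of a far-field source at this mic, in samples.
        delay = int(round(pos * np.cos(angle) / SPEED_OF_SOUND * SAMPLE_RATE))
        out += np.roll(sig, -delay)  # undo the delay so the voices line up
    return out / len(signals)

def estimate_direction(signals, mic_positions):
    """Pick the steering angle with the most output energy: roughly the talker."""
    candidates = np.linspace(0.0, np.pi, 181)
    energies = [np.sum(delay_and_sum(signals, mic_positions, a) ** 2)
                for a in candidates]
    return candidates[int(np.argmax(energies))]
```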

Kavya: That’s a lot of new data. 

Robert: Yes, you are gonna scan your house, believe me. My cameras right now are scanning my house and building a 3D model of it. Every time I reach right here, there’s a virtual wall that keeps me from going outside the barrier and hitting things. How does that work? It had to scan your house to make that work, right? The computer vision in today’s devices is not very advanced; the computer vision I’m seeing in labs right now is stunning. And that’s Facebook’s strategy: if I walk by you on the street in 2025 wearing Facebook glasses, and you’re wearing Facebook glasses too, they’re gonna say, “Kavya is your friend. Hey, would you like to play virtual football over there or throw something at her?”
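
The “virtual wall” Robert mentions is, in today’s headsets, essentially a polygon of safe floor space extracted from the room scan plus a cheap geometric test every frame. A minimal sketch of that test, a standard ray-casting point-in-polygon check, follows; the room coordinates are made up.

```python
# Sketch of a guardian-style boundary test: is the headset still inside the
# polygon of safe floor space? Standard ray-casting point-in-polygon check.
def inside_boundary(x: float, z: float, boundary: list[tuple[float, float]]) -> bool:
    """Return True if the (x, z) floor position is inside the guardian polygon."""
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, z1 = boundary[i]
        x2, z2 = boundary[(i + 1) % n]
        # Does a horizontal ray from (x, z) cross this polygon edge?
        if (z1 > z) != (z2 > z):
            x_cross = x1 + (z - z1) * (x2 - x1) / (z2 - z1)
            if x < x_cross:
                inside = not inside
    return inside

room = [(0, 0), (3, 0), (3, 2.5), (0, 2.5)]  # a 3 m x 2.5 m play area
assert inside_boundary(1.5, 1.0, room)       # safely inside
assert not inside_boundary(4.0, 1.0, room)   # past the virtual wall
```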

Kavya: You know what I worry about? I worry about the lady in the red dress because somebody’s gonna plant that, and you’re not gonna wanna look away from it, and that’s when things start to get interesting. 

Robert: Are you gonna pay a great designer to design a virtual costume for you to wear down the street? I will. You know, it’s gonna be Burning Man 24 hours a day eventually, 15 years from now. How do I know that? Because I hung out at virtual Burning Man with the people who organize Burning Man. They said, “We want you, in five years, to bring these devices to the real Burning Man and see the virtual Burning Man and the real Burning Man at the same time.”

Kavya: I really want to ask you about this book that you’ve been talking about. You said that you’re going to write a new book and dedicate some of it to XRSI. What’s going on with the book, and when can we expect it?

Robert: The book is about 2022, which really is about 2030, but all this new stuff is about to start in 2022. The Tesla Cybertruck is coming, which will have better cameras, better sensors, and better AI. We’re going to get full self-driving with this truck next year. Apple’s coming out with this new AR/VR device, and there are a lot of other things coming: Magic Leap is coming, Microsoft is coming with a new HoloLens next year, and all sorts of new devices are gonna start raining down once Apple comes in. Apple actually is good for everybody, including Facebook, because Apple will explain this new world to everybody. You’re gonna be a poor kid in Mumbai, and you’re gonna hear about the Apple device. That’s Apple’s real secret sauce. Next year, all this new stuff is coming out, and so I’m writing a science fiction book about what it means. What does the self-driving car do to the neighborhood? What does a VR/AR device do to culture, to humanity, to the community? I’m still working on it; I don’t wanna talk too much about it because it changes every day in my head as I write it. My aim is to have this thing written by June.

Kavya: There’s one last question that we ask all of our guests, which is related to the name of this show: Singularity. We ask it because we really want to try to comprehend where we’re headed and when we’ll get there. First of all, what is the singularity to you? And then, when is this point of singularity coming?

Robert: The singularity is when I can communicate both with you and with the digital god, the cloud computer like a Siri or Alexa, as fast as my human mind can work, and there is a limit to that. I can’t remember what Brian Roemmele said about how many bits per second the human mind actually processes, but it’s not unlimited. When I can actually hook up to Alexa or Google, send it what I’m thinking, and have it send back the answer, that’s when I think we’re gonna be in the singularity. We’re not that far away from it; we’re somewhere between 2 and 20 years out. Let’s call it 15 years from now. I think we’re gonna have people jacking in at some level in that range, with something like what Elon Musk is doing with Neuralink.
Talking about the glasses: Facebook and Apple are spending tens of billions of dollars on these devices because they know they need the data to get there. They need to understand how our human mind works in order to hook sensors into our mind, either outside our skin, or, with Neuralink, with actual surgery where they put wires on your brain. I’m not sure I’m gonna sign up for that unless I need it. But if I had Parkinson’s or something like that, I’d sign up for it. The cost is too high right now, the side effects are too high, and the utility is not there. If you have Parkinson’s, you’re already signing up for it. They actually put a single wire into your brain to keep your hands from shaking, but it costs $150,000, and it has a lot of side effects, because you’re cutting through brain tissue to put the probe, the wire, in the right place. But if your hands are shaking and you can’t even feed yourself, you sign up for that, and it does solve the problem; your hands become very steady when you turn on the electrodes in your brain. It’s just that that one wire costs $150,000, and it only does one thing. Well, what happens if there are 1,000 wires? What happens if there are 10,000 wires? Then you start seeing a singularity.
We’re some decades away from that, but we’re going to get there after we get AR glasses. So around 2030, we’re gonna use the data that we’ve collected from all of our eyes, and our voices, and our gestures, and our world to really understand how to hook sensors into our brains, and we’re going to start that process. You’re gonna see a lot of brain devices after the AR stuff comes. I’m already seeing them. I already have a device with many sensors that sits on the back of my head and lets me do things just by thinking about them, and it’s pretty interesting to do that.
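
A closing note for technical readers: head-worn sensor devices like the one Robert mentions typically expose raw EEG-like signals, and “doing things by thinking” usually starts as thresholding band power. Here is a hedged sketch; the sample rate, frequency band, and threshold are illustrative assumptions, not any vendor’s method.

```python
# Hedged sketch: estimate EEG band power and use it as a crude "think to act" trigger.
import numpy as np
from scipy.signal import welch

SAMPLE_RATE = 256  # Hz, typical of consumer EEG headsets (assumption)

def band_power(channel: np.ndarray, low: float, high: float) -> float:
    """Mean spectral power of one EEG channel within [low, high] Hz."""
    freqs, psd = welch(channel, fs=SAMPLE_RATE, nperseg=SAMPLE_RATE * 2)
    mask = (freqs >= low) & (freqs <= high)
    return float(psd[mask].mean())

def alpha_trigger(window: np.ndarray, threshold: float) -> bool:
    """Fire when alpha-band (8-12 Hz) power crosses a per-user calibrated threshold."""
    return band_power(window, 8.0, 12.0) > threshold
```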