The Inevitable: Understanding 12 Technological Forces That Will Shape Our Future
Kevin Kelly
Becoming
Anarchists imagine lawless dystopias arising from crises such as nuclear war. But outlaw worlds quickly get taken over by organized crime and militias. Lawlessness quickly becomes racketeering, and even faster becomes a corrupt government - all to maximise the income of the rulers. Real dystopias are more like the old Soviet Union than Mad Max: stiflingly bureaucratic rather than lawless. The big bandits keep the chaos to a minimum.
We are actually living in a Protopia. Things are a little bit better today than they were yesterday. The solutions to yesterday's problems made things better, but also had unintended negative consequences, which are fixed with tomorrow's solutions, and so forth. The net cumulative effect is a small, incremental, barely noticeable improvement every day. These improvements have compounded to produce today's societies.
Change has become so rapid that it blindsides people - they don't notice autonomous vehicles (AVs) until they suddenly become front-page news, and their instinctive reaction is to be against them. When the Internet appeared, pre-Netscape, it was just for a little corporate email and for teenage boys willing to type in text commands. Mid-'90s articles in Time and Newsweek dismissed wild predictions of online newspapers, electronic commerce etc as "baloney".
But even those who could see the future coming - a vision of connected computers accessing all the world's info - missed the big story. They could see a future of 5,000 TV channels, but wondered who would pay to provide content for all of them. What actually happened was 500 million channels, all user-generated.
And we still don't fully recognize it today. There are about 60 trillion web pages available today, including those dynamically created on request - about 10,000 pages for every human alive. And all of that has been created in less than 8,000 days. But if you go back to the expert predictions of the 1980s, this incredible cornucopia of info and entertainment was not in anyone's future scenario. And it all came from the users: YouTube alone takes in some 65,000 new videos per day - 300 hours of video every minute.
Cognifying
One of the early-stage AI companies that Google bought was a London-based company called DeepMind. In 2015 they showed an AI that they had taught to learn how to play games (as opposed to teaching it how to play games). They set it loose on Breakout, a variation of Pong, and it learned how to keep increasing its score. There's a video of its progress: at first it simply hits at random; after 150 games it's missing only once in four shots; after 300 games it never misses. Then, in its second hour, it uncovers a loophole that none of the millions of human players had found, which lets it tunnel through the wall in a way that even the game's creators had not imagined. This is called deep reinforcement learning, and unlike human players, the AI gets smarter with every iteration.
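The learning loop behind that progress can be shown with a much simpler setup. The sketch below is not DeepMind's deep Q-network - it is plain tabular Q-learning on an invented one-dimensional "catch the ball" task - but it illustrates the same idea: the agent starts acting at random and, purely from reward feedback, improves with every iteration.

```python
import random

# Toy task (invented for illustration): a paddle on a 5-column row must sit
# under the ball's column before the ball lands. Actions: left, stay, right.
N_COLS, EPISODES = 5, 5000
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1      # learning rate, discount, exploration

Q = {}                                  # Q[(paddle, ball)] -> value of each action

def values(state):
    return Q.setdefault(state, [0.0, 0.0, 0.0])

for episode in range(EPISODES):
    paddle, ball = random.randrange(N_COLS), random.randrange(N_COLS)
    for step in range(N_COLS):          # a handful of moves before the ball lands
        state = (paddle, ball)
        if random.random() < EPS:       # explore occasionally...
            action = random.randrange(3)
        else:                           # ...otherwise exploit what has been learned
            action = max(range(3), key=lambda a: values(state)[a])
        paddle = max(0, min(N_COLS - 1, paddle + action - 1))
        reward = 1.0 if paddle == ball else 0.0
        # Core update: nudge the estimate toward reward + discounted future value.
        best_next = max(values((paddle, ball)))
        values(state)[action] += ALPHA * (reward + GAMMA * best_next - values(state)[action])

# After training, the greedy policy reliably steps toward the ball - the same
# "smarter every iteration" effect, minus the deep network and the Atari screen.
```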
A hundred years ago you could get rich by taking a manual tool and powering it with electricity. Today you can do the same thing by figuring out how to add AI to an existing product. Cameras have gone from SLRs to smartphones that use insane amounts of AI to compensate for all the lenses an 'old style' camera needed.
This is how it happened. In the early 2000s a new kind of chip - the graphics processing unit, or GPU - was devised for the huge visual demands of video games, where millions of pixels have to be recalculated many times a second. That required a chip that could do the calculations in parallel. The demand from gaming meant so many of these chips were produced that they became a commodity. Then researchers realized they could use them to run neural networks: clusters of GPUs could perform in minutes what used to take supercomputers weeks.
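A hedged illustration of that point, using PyTorch (an assumption - the notes name no library): the matrix arithmetic a neural network needs runs on the graphics chip when one is available, with thousands of multiply-adds happening in parallel.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"   # use the GPU when present

# A toy two-layer network and a batch of fake data.
net = torch.nn.Sequential(
    torch.nn.Linear(1024, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).to(device)

batch = torch.randn(4096, 1024, device=device)   # 4096 fake inputs of 1024 values each
out = net(batch)                                  # one parallel pass through the network
print(out.shape, "computed on", device)           # torch.Size([4096, 10])
```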
The breakthrough was learning how to organize neural nets into stacked layers. Each layer takes the data collected and classified by the preceding layer, parses it further, then hands it on to the next. Face recognition starts with blunt recognition ("That's an eye!"), then hands off to the next layer, which looks for the other eye, then to the next to find a nose, and so on.
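The stacking itself can be sketched in a few lines. This is not Facebook's face recognizer - the weights below are random and the "features" exist only in the comments - but it shows the structure: each layer re-describes the previous layer's output at a higher level of abstraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One dense layer: a linear mix of its inputs, then a ReLU nonlinearity."""
    w = rng.standard_normal((x.shape[-1], n_out)) * 0.1
    return np.maximum(0.0, x @ w)

pixels = rng.random(64 * 64)        # a fake 64x64 grayscale image, flattened
edges  = layer(pixels, 256)         # first layer: low-level features ("edges")
parts  = layer(edges, 64)           # next layer: parts ("that's an eye")
faces  = layer(parts, 8)            # top layer: a compact description of the face
print(pixels.shape, "->", edges.shape, "->", parts.shape, "->", faces.shape)
```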
Facebook can now take a photo of a face and correctly identify the person out of some 3 billion people online. Human brains cannot scale to this degree, which makes this AI very unhuman.
In 1997 Watson's precursor, Deep Blue, beat world champion Garry Kasparov. But the consequences were interesting. Kasparov realized he could have played better if he'd had access to Deep Blue's database of all previous chess moves, so he pioneered the concept of centaur chess: players with access to chess programs. Today the champion is a centaur called Intagrand, a team of several humans and several different chess programs. And it has inspired more people than ever to take up chess, and players, with coaching from their computers, have got better than ever. There are now twice as many grandmasters as there were when Deep Blue beat Kasparov, and today's top-ranked human player, who trained on AIs, has the highest human rating of all time.
We are notoriously bad at statistical thinking, so we are making AIs that are skilled in that, so that they don't think like us. One of the best things about the AIs controlling AVs is that they don't drive like humans, with our easily distracted minds.
We will have to 'trust' algorithms, and that will be a problem. There are some areas of mathematics where the only proofs are done by computer, and some mathematicians refuse to accept those, since you have to trust a cascade of algorithms.
AI is going to force us to reconsider what makes us unique. If a computer program can play chess, fly a plane, make music or invent a mathematical law, and do it better, what are we good for?
Baxter, a workbot from Rethink Robotics, designed by the man behind the Roomba, is a new class of industrial robot. First-generation industrial robots had to be isolated in cages to protect human workers and had to be reprogrammed every time a task changed even a little (and the cost of programming them was about four times the original capital cost). Baxter is cheap ($25,000) and can literally be shown how to do a task - the operator guides its arms through the motions, rather than coding them. It's as if the first industrial robots were the equivalent of mainframes and Baxter is the first PC robot. (See below for an example of Baxter being taught.)
There are jobs robots can do that we can't - obvious things like making millions of identical brass screws or computer chips, or sorting through millions of web pages to tell us what the temperature was yesterday in Timbuktu. But technology is also doing things we could not have imagined even 50 years ago: Chinese rural peasants using smartphones to talk to their kids in the city, even before getting indoor plumbing.
We will quickly get to the stage where every workplace will potentially have robots, in the same way as most now have PCs. But it will be up to individuals to figure out unique ways to utilise them - perhaps things like making custom liquid-nitrogen dessert machines for expensive restaurants.
The seven stages of robot replacement:
1) I am so glad robots can't possibly do my job.
2) OK, it can do a lot of the things I do, but not all.
3) OK, it can do everything I do, but it breaks down sometimes.
4) OK, it operates flawlessly on routine stuff, but I still have to train it to do new things.
5) OK, OK, it can have my boring old job.
6) Wow, now that robots are doing my old job, I've got a new job that's more interesting and pays more.
7) I am so glad robots can't possibly do my new job.
In the future you'll be paid for how well you get along with robots.
None of the jobs that politicians are trying to protect are jobs that people wake up in the morning really wanting to do.
So, let the robots take our jobs, and let them help us dream up jobs that really matter.
Flowing
The price of a genome sequence is dropping fast. Now $10,000, soon $100, and then your insurance company will offer to do it for free. Then you will pay for expert interpretation. Similarly, you'll pay for individualised travel arrangements or health care.
Soon you will pay a warehouse to store your stuff and deliver it on demand, or to rent you tools or luggage just when you need them.
Music is now liquid. It used to be solid - 45s and LPs. The original attraction of Napster etc was free music, but then users found their new copies could be moved across platforms and played anywhere, and fiddled with if they wanted. And you can assemble playlists to create a personal radio station without ads or DJs, and link to sites that display lyrics or recommend similar tunes.
And now there is a greater market for music than ever. People want a soundtrack 24/7, they want variety, and they have the tools to search the whole world of music. And because it costs nothing but time, they are prepared to try whole genres they might never have been exposed to. There is commercial demand as well - with the extra demand from Netflix and new channels such as Amazon, many new movies, shows and documentaries are being made, all looking for distinctive soundtracks. And thousands of user-made videos, short and long, pop up on YouTube, again wanting their own tunes to help them stand out.
Movies are being liquefied too. Once they were expensive to produce and to watch (in theatres with complex projection systems). Now they are made and watched at home and shared on YouTube etc. But they can also be chopped up and reassembled: as squeaky-clean versions for the Bible Belt, or with sex scenes inserted to produce porno versions. Or video editors can insert friends and family into films for a seriously personalised version.
Screening
Culture clash between the People of the Book (those who produce newspapers and magazines, and those in jobs ruled by regulations, such as law or finance) and the People of the Screen, who make their own content and construct their own truth.
Experts thought no one would read books on small screens like Kindles, but there is now a technique called rapid serial visual presentation (RSVP), where single words are flashed very briefly, one at a time, on a screen only one word wide.
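A toy sketch of how RSVP works (the sample text and speed are invented): each word is flashed in the same spot for a fixed interval, so even a display one word wide can carry a whole book.

```python
import sys
import time

def rsvp(text, words_per_minute=300):
    """Flash the words of `text` one at a time at a fixed position."""
    delay = 60.0 / words_per_minute
    for word in text.split():
        sys.stdout.write("\r" + word.ljust(20))   # overwrite the previous word
        sys.stdout.flush()
        time.sleep(delay)
    print()

rsvp("single words shown very briefly on a screen one word wide")
```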
Wikipedia is the first networked book. Statements are cross-referenced with blue links. The link and the tag may turn out to be the most important inventions of the late 20th century.
Accessing
Uber, the world's largest taxi company, owns no cabs. Facebook, the world's most popular media company, produces no content. Alibaba, the world's biggest retailer, has no inventory. And Airbnb, the world's largest accommodation provider, owns no real estate. Companies such as Spotify, Amazon, Netflix and PlayStation allow you to use songs, books, films and games without owning them. Access becomes more important than ownership.
Access amplifies interaction with the product - users report bugs (so the company doesn't need QA teams), get tech support from forums (instead of a company help desk) and develop add-ons to improve the product.
Sharing
Every public health expert declared confidently that sharing was fine for photos, but that no one would share their medical records. Yet on PatientsLikeMe, people pool their treatments and results to get better care.
At almost every turn, the power of sharing, rather than top-down direction, is solving problems in new ways: bringing health care to the poor, free textbooks to students, funding for drugs for uncommon diseases. Google, Facebook and Twitter have all thrived because people are happy to co-operate in ways that would not have been suspected a generation ago.
Kiva, a peer-to-peer lending platform, found it was better to lend to individual peasants in Bolivia than to the Bolivian government. An American could lend $120 to a Bolivian woman to equip a street stall and then watch as the money got repaid. Kiva's repayment rate is 99%.
Netflix ran a million-dollar competition to design a better recommendation algorithm.
Postulate a future where everything is online, with little tags to say who created it. Everything can be shared, but when it is, a small micro-payment goes back to the creator. You could assemble an instructional video for YouTube by taking bits from all over the place, and each bit would be tagged to feed payments back to its creator (a toy sketch of this idea appears after this passage).
Or you contribute to a huge virtual world entirely built by users and run on small amounts of CPU cycles and storage contributed by everybody. Over 30,000 different games run on this Greater World platform. Anyone can build a settlement that others can visit or live in.
And that could also be the platform for a massive collaborative effort to build, say, a spaceship to go to Mars.
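The creator-tag idea postulated above can be sketched as a tiny data structure (the names and the per-second rate here are invented): every clip in a remix carries its creator's tag, and assembling the remix yields a payment split back to each source.

```python
from collections import defaultdict

# Each clip used in the remix carries a tag naming its creator.
clips = [
    {"creator": "alice", "seconds": 40},
    {"creator": "bob",   "seconds": 15},
    {"creator": "alice", "seconds": 5},
]

def micro_payments(clips, rate_per_second=0.0001):
    """Return how much each tagged creator is owed for this remix."""
    owed = defaultdict(float)
    for clip in clips:
        owed[clip["creator"]] += clip["seconds"] * rate_per_second
    return dict(owed)

print(micro_payments(clips))   # roughly {'alice': 0.0045, 'bob': 0.0015}
```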
Filtering
Google does a four-way matchup to decide which ads to display to you. First it matches ads to the content of the web page you've loaded. Then it considers your interests, as shown by the cookies you're trailing when you arrive. Then it considers how much advertisers are willing to pay to show you their ad. Finally it does a juggling act to evaluate how likely you are to click on each ad - and that varies all the time. A 5-cent ad for a softball mitt that gets clicked on 12 times is worth more to Google than a 50-cent ad for an asthma inhaler that gets clicked once. But if the next day the same page is displaying a high-pollen-count ticker, the value of the asthma ad climbs, so it gets the top slot.
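That juggling act amounts to ranking by expected value: bid times click likelihood. Below is a hedged sketch (not Google's actual auction; the numbers are invented) of how a cheap, often-clicked ad can outrank an expensive one until the page's context shifts.

```python
ads = [
    {"name": "softball mitt",  "bid": 0.05, "click_rate": 0.12},    # 12 clicks per 100 views
    {"name": "asthma inhaler", "bid": 0.50, "click_rate": 0.005},
]

def expected_value(ad, context_boost=1.0):
    """Roughly what the slot is worth: bid x likelihood of a click."""
    return ad["bid"] * ad["click_rate"] * context_boost

print(max(ads, key=expected_value)["name"])            # softball mitt wins normally

# If the page shows a high pollen count, asthma clicks become far more likely.
pollen_boost = {"softball mitt": 1.0, "asthma inhaler": 5.0}
print(max(ads, key=lambda a: expected_value(a, pollen_boost[a["name"]]))["name"])  # asthma inhaler
```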
Remixing
Tools like YouTube Capture and iMovie remove most of the effort from creating movies. Where once you needed a skilled team of Hollywood experts putting in a million man-hours to make a film, now tens of thousands of DIYers make their own. The Hollywood movie is like a Siberian tiger - an animal that demands our attention, but that is also very rare. Every year Hollywood releases about 600 two-hour movies - about 1,200 hours of viewing. But this is a tiny fraction of the hundreds of millions of hours produced by amateurs.
Most of this is remixing - take a Hollywood movie or trailer, slice it up and add scenes, dialogue or soundtracks.
Flickr has more than half a million images of the Golden Gate Bridge - every conceivable angle, lighting condition or point of view. If you want to use an image of it in a movie or ad, it makes no sense to go there and shoot it yourself. There is a similar archive of 3D models in SketchUp, where you can find virtual models of every major building in the world, or the streets of New York or London. Film-making has broken free of photography - you can build an environment the way you want it.
Both Google's and Facebook's AI can look at a picture and name the people in it.
Who owns it? Copyright has not caught up with the modern age - if you change one note, is it a new work?
Our digital world is full of examples of things that were deemed "impossible" 20 years ago: a crowd-sourced encyclopedia written without payment or bosses, free maps on cellphones, access to pictures of every street in the world on demand, YouTube videos and songs, and so on.
Attention - we've seen so many games that now we just want to watch the highlights, or the very best of the highlights. Pretty soon everybody will be equipped with a tiny camera and every human event will be recorded. Then ordinariness will be boring - we will only want to watch the extraordinary.
Certainty, ironically, has decreased. There is no longer one Truth, but a variety of truths.
(Wired)
If you're a virtual reality enthusiast, you probably read Kevin Kelly's April Wired cover story on Magic Leap, 'The World's Most Secretive Startup.' Kelly is one of the few people who've seen the much-hyped mixed reality technology being produced by the Fort Lauderdale company and was suitably impressed by it. 'While Magic Leap has yet to achieve the immersion of The Void,' he wrote (referring to the Utah-based immersive experience company), 'it is still, by far, the most impressive on the visual front - the best at creating the illusion that virtual objects truly exist.'
As the co-founder of Wired, publisher of the Cool Tools website, and former publisher and editor of The Whole Earth Review, Kelly has always been prescient about these things. In his new book, The Inevitable: Understanding 12 Technological Forces That Will Shape Our Future, he compellingly outlines a set of behaviors and trends that will change the way we live. We recently spoke with Kelly about the themes of the book, and of course, the latest developments in VR.
[The interview below has been edited for clarity, and condensed, though admittedly, not very much.]
There Is Only R: The first question I have to ask: what do you think of the Pokemon Go phenomenon? Given what you've written about VR and AR, what's your take on it?
Kevin Kelly: I think it's just wonderfully thrilling to see - perhaps the only unexpected thing about it is its apparent suddenness. I was just walking around last night in our neighborhood, and there were all these little PokéStops and everything. It was kind of piggybacking on [prior AR game] Ingress, sort of like, I don't know, a sleeper cell or something. People, including me, have been talking about Google Earth and Google Maps as a kind of bed for VR for a long time, and I think what it's shown is how you could do mixed reality and what that might be like.
And I think no matter what happens to Pokemon Go, there'll now be a lot more tries, a lot more attempts to do something on top of it. I'm trying to think what the equivalent would be. It's sort of like the early days of video arcade games, which were just good enough to improve on. You had Pong and you had Pac-Man, and people saw that people would pay money for those, and then we had this explosion where everybody else was trying to do something better.
I wonder if just the simplicity of Pokemon Go made it popular. You can pretty much figure out how to play Pokemon Go if you know nothing about Pokemon or AR even.
I'm assuming they just tapped into the interface in a way that Ingress didn't. And there obviously were network effects. You saw people doing it and that propelled you to try it, and the more people that tried it, the more obvious it became.
So that wonderful thing about the public aspect of it - I don't know if it's going to be repeated. Because later, when people are doing it, you won't know what they're doing. There will be all these games, and right now when you see somebody out on the street looking at [a device], there's only one thing they could be doing.
When you write books like this where you're making some hard predictions about the future, I would imagine that somewhere between the book going to press and it actually coming out, some of the things you write about actually happened. Is there anything that happened with this book?
Well, I did a lot more on VR which is your kind of domain, that I wish New York publishing was fast enough to put into the book. The text for The Inevitable was actually completed a year ago. I finished writing the stuff for VR in December. So I had a lot more firsthand experience and additional thoughts on what VR and AR mean to the world that's not in the book. I wish I could've included it.
There are lots of different ways to deal with complex ideas. I write about this in my book in the Screening chapter - this marriage of text and moving images: video that you read, books that you watch. I think there's something there that I want to explore, this convergence of text and moving images together. I have another project that I'm thinking of for VR.
It's really hard to kind of invent both the medium and content at the same time, and I'm not actually that interested in inventing the medium. I would prefer to take some of that stuff that exists and make content for it because every attempt I've seen in the past for someone to try and do both, it just doesn't work. They're really two different mindsets, I think. And my inclination right now is to let others invent the medium and I'll try and make some content.
Yeah, what we're seeing is a lot of people we think would be in the market to develop content are kind of holding off because they want to see wider adoption in VR. Is that what you're seeing?
Well, I am seeing that, but I'm personally not worried about it. I just think that the tools are not there for me yet. I'm not so concerned about audience size - I don't need a big audience. If you have direct contact with your fans - a thousand true fans - you don't need these large audiences that big, big media companies need.
I think it's more just the absence of really good tools, not just the production tools, but even the appropriate form factors. Right now we have very crude tools.
When I see VR experiences that are a little bit less impressive, I always think it's because the creators are trying to take 2D formats and translate them directly.
Yeah, absolutely, and in the early days of the internet, we used to call [that phenomenon] shovel-ware, because you would take content from your magazine and you would just shovel it onto the web. And it was very evident to everybody that that wasn't going to work. So we actually had a whole separate editorial team working on content for the digital side of Wired, completely independent of the magazine side because we just knew it was going to require a different logic, a different workflow, different frequency.
With VR, there's definitely a tendency to make some of these narratives like movies and movie people are making them. And it'll take some years before we figure out what the new norms are - what works, what doesn't work.
And this is something you can't figure out by thinking about it. The smartest genius in the world could be applied to figuring this out in theory, but it is something that we only figure out by using VR. And no amount of preconception, pre-visualizing it is going to be able to solve this. I think we're going to need like 10,000 hours of experience in order to make any changes, to move in the right direction.
So who do you think is going to be at the forefront of that? Who do you think the really innovative, most experimental content creators are going to be?
My bias is that the studios will spend a lot of money trying this, but it's the Buzzfeeds of the world that will come along and make something that will actually work. I think we're far from even seeing the formation of the companies that will succeed. I think they haven't been formed yet, or maybe they're forming in a basement right now as we speak, but it's still years away.
I think some of the gaming companies, people you know, might have the first round of successes, but I think it's gonna take five years for the other forms. It's gonna be a little slow in the beginning. I don't see any kind of VR unicorns happening within five years.
When you submitted the book, had you seen Magic Leap yet?
No, I had not. I had not seen Magic Leap when the book was done. I had not seen Meta, and I'd not seen Hololens and I had not seen The Void, so I had not seen the major players when I wrote the chapter on VR in my book. I'd only seen The Oculus prototypes: the DK2.
And I'd seen the early stuff of Jaron [Lanier]'s. So that's something I would like to have updated.
Yeah, you talk a little about being in Jaron's lab in '89 or '90. [Jaron Lanier was an early VR pioneer who coined the term 'virtual reality.'] What do you think he really got right at the time and what were people working in VR at that stage wrong about?
I don't think they were wrong about very much. The quality of the experience at that time was actually not that far off from say, the Oculus. The resolution was not as great, but depth of feel was not that different. You had hands [in the experience] - you had gloves which were actually higher resolution than Oculus. And it was social. You had more than one person, and they had an articulated body.
Other experiments that were going on were pretty sophisticated. The thing that they sort of didn't get right or the thing they didn't have was that they weren't cheap. They were just way too experimental and also they were too expensive in two ways - one was the money, and two was the maintenance.
So keeping those systems going required professional hacking help. You needed a person to maintain them, so it wasn't just the purchase price, it was the fact that these things were temperamental and the tracking was always going off. It was not plug and play level. The thing that happened in those intervening years was not so much that the quality drastically improved, but simply that the price changed by three orders of magnitude.
So now we're at this level where you have a great flywheel effect.
In your chapter on cognifying, you say, and forgive me if I'm oversimplifying, that you can layer AI into pretty much anything and it makes a huge difference. How do you see AI changing VR as we know it right now?
I see AI and VR as like peanut butter and jelly. I think that to curate the kinds of worlds that we expect and want in virtual reality, you need ubiquitous cheap AI. A lot of what the VR world will be doing is anticipating, tracking and capturing our movements in order to effect some change.
And from the small level of recreating physics to being able to recognize a gesture or a movement will just require constant, ubiquitous, cheap AI machinery in some capacity. And if we start to add these additional little tricks like redirected walking and other kinds of magical misdirection that can enhance and make it practical in the kind of tight spaces that we actually operate in, that's another order of AI that's necessary to process all the stuff in the background.
And then there's the whole gaming aspect where you're populating it, not just with people, but these other agents, or agencies - things from the weather to serendipitous encounters. And they all require more AI horsepower, and so I just think that the two are going to grow up together.
And I don't imagine that the VR companies will manufacture the AI. I think they're going to be purchasing it from AI companies, just as they'll be purchasing electricity to run their service. But they will become a huge customer of AI.
To do AI, you don't need a lot of people. But to do VR and the data, I think these are going to be immense companies that will be capturing huge swaths of human behavior and making entire economies - virtual economies, virtual lives, virtual relationships. It'll require huge amounts of data and huge amounts of bandwidth. And that mobile bandwidth, all that infrastructure, is immense. You need seven cameras to do full room-scale VR, and that light-field data has to be processed. It's a great opportunity for a whole industry that's going to serve the needs of VR worlds.
Earlier I read an interview with Ray Kurzweil in Playboy from a few months ago. He has kind of a different scenario for AI and VR that's a little bit more sci-fi. He says that by 2030 we'll have VR tech embedded in our nervous system, like chipped into the neocortex or something like that. What do you think of that scenario? Does that seem plausible to you?
Not in 2030. There are none of the precursors necessary for that to happen in 2030. I think as soon as you start messing with the human body, you're talking about a different time scale. Digital stuff can progress at this exponential rate, but if you're messing with the human body, you have to do more.
We're susceptible to what I call 'thinkism', which is this idea that thinking about things can solve problems - that if you had an AI that was smart enough, you could solve cancer because you could think about it.
But we don't know enough. We don't have enough data, we don't have enough experiments. You have to actually do a whole lot more experiments on cells, and human biology, and humans before you could solve it. You can't just solve it by thinking about it.
And so it's the same thing with this implant idea. It doesn't take into account the fact that you have to experiment on animals long before you get to humans, and that just takes biological time. [Kurzweil] will say, well, you can simulate them. But that doesn't work. We just don't know enough.
I think someday we'll figure this out, but not anywhere near 2030, because we don't have enough experimental data to do that.
So he's not a futurist, but I noticed that Ernest Cline is quoted on the cover of your book. What do you think of the Ready Player One scenario for VR, where time spent in VR and social VR makes up the bulk of our human experience?
I think the idea of having this pervasive, continuous, broad, vast universe: that seems entirely reasonable to me. I think the question maybe comes down to how much time we'll spend in it. And the curious thing about the online world was that people were asking this exact same question in the '80s, when the first people were going online and going to bulletin boards. It was all just this text, and they were imagining that in the future, 30 years from now, everybody would be in their basement, and they would never leave. We would all be online in virtual communities.
What happened, of course, was very, very different. One of the things that we noticed when we did The Well in the mid-'80s was that the first thing people demanded once they started meeting online, was that they wanted to meet face-to-face. And you can draw a pretty good parallel line between the amount of hours you spend online and the amount of travel there is. Travel has not decreased; travel has increased.
I don't know if it's causation, but there's certainly a correlation that meeting and being able to do things online actually emphasizes the power and the benefits that we get face-to-face. And so I think as good as VR gets, it's a different experience meeting face to face. And that'll be true for a very, very long time.
More likely, what we will get is increasing options and choices, with very few of the old ones going away. While we will have plenty of virtual travel and people will have incredible experiences online, it will make the real trip on a plane to somewhere else ever more valuable and precious.
There's another place in the book, where you refer to VR as a potential experience factory, and you talk a lot about a scenario where we all essentially own nothing. Or we don't own a lot of things that we think of as important to own right now.
Do you think that's in any way at odds with say, traditional American notions of success? I immediately thought about the housing crisis and the emphasis on home ownership that helped to precipitate it. Do you think maybe our thinking around that is changing? Or is this something unique?
In my book I thought about the shift from ownership to access - that if you have instant, ubiquitous access anywhere, anytime, that can often serve you better than ownership. Even if it's not exactly instant - even if, say, within an hour you could have something physical that you wanted.
If you can have this thing delivered to you and then have it taken away when you're done, that access can have more benefits to us than ownership. And since ownership is sort of the basis of our capitalistic society and notions of wealth, then moving away from that ownership would be a huge disruption.
And there are several things to say about that. One is that I was trying to imagine a world in which somebody didn't own anything, and I don't think that's really reasonable or likely, but it was a thought experiment to show, in the extreme, what it would be like. But in fact, in order to have this world where people are accessing, somebody has to own something, right?
You may be summoning a car or a ride from somebody, but somebody has to own that car and charge for it. But I think what happens is that we become a little bit more curated - we own some things that we care about or that constitute our business, and the other things, the things we don't care about, we basically subscribe to.
So we wouldn't have universal ownership; we would have selective ownership. In some cases that selective ownership may be a way we display our status. In other cases it's because it's something we're passionate about, and other cases it's because we're doing it to make money.
But with home ownership, all you'd have to do is take away the tax-deductible mortgage subsidy and that would change really quickly.
And that could happen too, by the way, but I do think there would be a change in our identity, in our conception of ourselves. Maybe there's some home ownership, but it's nothing that is as important. As far as I can tell, banks own the houses anyway today. It's not really the homeowners, so I think we'll continue to kind of blur those lines.
Speaking of those politics, when you talk about digital socialism, how do you envision that affecting, for instance, social justice movements like Black Lives Matter?
I think it's a very complicated answer because on one level it doesn't have a direct effect. On other levels, it's obvious that there are things like tracking, ubiquitous cameras everywhere, and that makes a difference. The technological environment in which everything is filmed all the time will have a huge impact. In the end the cops are filming, and they should be filming, and the citizens should be filming - and citizens should have access to everything the cops film. The net effect will be good overall if everything is captured. Over time the greater good would be served by having that evidence.
But at the same time, it doesn't address the fundamental problems, so I think it's complicated.
It's a little bit of a subversion of the way we think about surveillance, which is always top-down from the state. We think of it as a way for the state to hold the citizenry accountable, but it seems like what we're seeing now is a reversal of that. Given that you turned in the book about a year ago, is there anything you were thinking about while you were writing that seems particularly poignant right now, given the very odd political environment we're in?
My take on a lot of the anger and frustration represented by both the British exit and Trump is that it derives from technological changes - changes in people's livelihoods. Technology is taking away some of their jobs and making it hard for them to find work and meaning in life. They're frustrated, and it has nothing to do with Mexico, or China, or the immigrants from Syria. It has everything to do with the fact that automation is coming, will continue to come, and that some of those changes will continue to happen.
The most common occupation in America is truck driving. There are three million truck drivers and their lives and livelihoods are going to be disrupted hugely by AI automated cars, so we're not at the end of this. This is still going to continue.
So I don't think we've heard the end of it.
Baxter being taught
“A robot must be able to make appropriate choices if it is to work alongside humans in daily life”
YOUR move, human. This robot is preparing to deal with the world by learning to play noughts and crosses.
Also known as tic-tac-toe, the game requires players to take turns drawing Xs or Os on a grid in a race to get three of their markers in a row. It’s a simple affair compared with other games mastered by artificial intelligence in recent years, such as Go and Jeopardy. But teaching a physical robot to play is trickier.
Heriberto Cuayahuitl at the University of Lincoln in the UK and his colleagues saw the paper-and-pencil puzzle as an opportunity to train a humanoid robot in multiple skills at once using deep learning.
The robot wasn’t preprogrammed to make the decisions or actions needed to win the game. To play successfully, it needed to figure out how to perceive its surroundings, understand verbal instructions and interact appropriately with its environment. Essentially, it had to use different senses to make a judgement about how it should behave and then act accordingly.
“For a robot to learn what to do and say, based on what was heard and seen, is not a trivial task,” says Cuayahuitl.
These skills aren’t just for fun. Any robot destined to work alongside humans in daily life will need to be able to take in different types of information and make appropriate choices based on what it learns.
The team worked with humanoid robot Baxter, developed by Rethink Robotics in Boston. They equipped Baxter with software and sensors so it could see its surroundings, recognise speech, move its head to follow the gaze of the other player and move its arm to draw its own noughts or crosses in the grid.
The robot could also serve up a handful of preprogrammed comments at appropriate moments, such as “I take this one” when it claimed a box on the grid, and “Yes, I won!”
Seven humans took turns playing against Baxter, which always chose to play noughts rather than crosses when it started a round. Deep learning algorithms helped it improve its game as it figured out how to better perceive and respond to the humans' actions. In the end, it won or tied 98 per cent of the time.
Down the line, Cuayahuitl’s team thinks their system can help efficiently train interactive robots. Future versions of their experiment may attempt broader conversations with humans, or take on more complicated games. The team is also planning to teach the robot to take its opponents’ emotions into account, so instead of winning every time, it could aim to perform in a way that makes its opponent happiest.
“The idea is to endow robots with the ability to develop and/or improve their own behaviours over time,” says Cuayahuitl.