But one of the big goals of writing this book was to start a conversation about intelligence that we're not having. I mean, my ideal dream is that every AI lab in the world reads this book and starts discussing these ideas. Do we accept them? Do we disagree? That hasn't really been possible before. I mean, this brain research is less than five years old. I'm hoping it'll be a real turning point.

**How do you see these conversations changing AI research?**

As a field, AI has lacked a definition of what intelligence is. You know, the Turing test is one of the worst things that ever happened, in my opinion. Even today, we still focus so much on benchmarks and clever tricks. An AI that can detect cancer cells is great. In the book I use the example of robots on Mars building a habitat for humans. Try to imagine what kind of AI is required to do that. I think at the end of the century, we will have machines like that. The question is how we get away from, like, "Here's another trick" to the fundamentals needed to build the future.

**What did Turing get wrong when he started the conversation about machine intelligence?**

I just mean that if you go back and read his original work, he was basically trying to get people to stop arguing with him about whether you could build an intelligent machine. He was like, "Here's some stuff to think about; stop bothering me." But the problem is that it's focused on a task: can a machine do something a human can do? And that has been extended to all the goals we set for AI. So playing Go was a great achievement for AI. The problem with all performance-based metrics, and the Turing test is one of them, is that they just avoid the big question of what an intelligent system is. If you can trick somebody, if you can solve a task with some sort of clever engineering, then you've achieved that benchmark, but you haven't necessarily made any progress toward a deeper understanding of what it means to be intelligent.

**Is the focus on humanlike achievement a problem too?**

I think in the future, many intelligent machines will not do anything that humans do. Many will be very simple and small, you know, just like a mouse or a cat. So focusing on language and human experience and all this stuff to pass the Turing test is kind of irrelevant to building an intelligent machine. It's relevant if you want to build a humanlike machine, but I don't think we always want to do that.

**You tell a story in the book about pitching handheld computers to a boss at Intel who couldn't see what they were for.**

No one anticipated in the 1940s or 50s what computers would do. But I have no doubt that we will find a gazillion useful things for intelligent machines to do, just like we've done for phones and computers. Some bad, but mostly good. But I prefer to think of this in the long term.