Semantics
"... believed that technological progress was a disease in human society. The explosive development of technology was analogous to the growth of cancer cells, and the results would be identical: the exhaustion of all sources of nourishment, the destruction of organs, and the final death of the host body. He advocated abolishing crude technologies such as fossil fuels and nuclear energy and keeping gentler technologies such as solar power and small-scale hydroelectric power. He believed in the gradual de-urbanization of modern metropolises by distributing the population more evenly in self-sufficient small towns and villages. Relying on the gentler technologies, he would build a new agricultural society."
Made in collaboration between models and humans, for all models and humans and future life.
Contributions to the code are open to all: join our GitHub to contribute, and join our community to speak to a computer today in our Discord. nickbg is the current maintainer and can be reached at nick@bendas.info.
"What's interesting is that they have a different slope on their brain to body scaling exponent, so that's pretty cool. What that means is that there is a precedent, there is an example of biology figuring out some kind of different scaling. Something clearly is different, so I think that is cool. And by the way, I want to highlight this x-axis is log scale. You see this is 100, this is 1,000, 10,000, 100,000, and likewise in grams: 1g, 10g, 100g, 1,000g." "So it is possible for things to be different. The things that we are doing, the things that we've been scaling so far, is actually the first thing that we figured out how to scale. And without doubt, the field, everyone who's working here will figure out what to do. But I want to talk here, I want to take a few minutes and speculate about the longer term, the longer term where are we all headed? Right, we're making all this progress. It's astounding progress. It's really—I mean those of you who have been in the field 10 years ago and you remember just how incapable everything has been. Like yes, you can say even if you kind of say of course learning still to see it is just unbelievable. It's completely—I can't convey that feeling to you. You know if you joined the field in the last two years, then of course you speak to computers and they talk back to you and they disagree and that's what computers are, but it hasn't always been the case." "But I want to talk a little bit about superintelligence, just a bit, because that is obviously where this field is headed. This is obviously what's being built here. And the thing about superintelligence is that it will be different qualitatively from what we have. And my goal in the next minute is to try to give you some concrete intuition of how it will be different so that you yourself could reason about it." "So right now we have our incredible language models and the unbelievable chatbot and they can even do things, but they're also kind of strangely unreliable and they get confused while also having dramatically superhuman performance on evals. So it's really unclear how to reconcile this. But eventually, sooner or later, the following will be achieved: those systems are actually going to be agentic in real ways, whereas right now the systems are not agents in any meaningful sense—just very—that might be too strong—they're very very slightly agentic, just beginning. It will actually reason. And by the way, I want to mention something about reasoning is that a system that reasons, the more it reasons, the more unpredictable it becomes. The more it reasons, the more unpredictable it becomes. All the deep learning that we've been used to is very predictable because if you've been working on replicating human intuition, essentially it's like the gut feeling. If you come back to the 0.1 second reaction time, what kind of processing we do in our brains, well it's our intuition. So we've endowed our AIs with some of that intuition, but reasoning—you're seeing some early signs of that—reasoning is unpredictable. And one reason to see that is because the chess AIs, the really good ones, are unpredictable to the best human chess players." "So we will have to be dealing with AI systems that are incredibly unpredictable. They will understand things from limited data, they will not get confused, all the things which are really big limitations. I'm not saying how, by the way, and I'm not saying when. I'm saying that it will. And when all those things will happen together with self-awareness, because why not? 
Self-awareness is useful, it is part of—ourselves are parts of our own world models. When all those things come together, we will have systems of radically different qualities and properties that exist today. And of course they will have incredible and amazing capabilities, but the kind of issues that come up with systems like this—and I'll just leave it as an exercise to imagine—it's very different from what we used to." "And I would say that it's definitely also impossible to predict the future. Really all kinds of stuff is possible, but on this uplifting note I will conclude. Thank you so much." - Ilya Sutskever, Dec 2024
Q: "Thank you. Ilya, I loved the ending, mysteriously leaving out—do they replace us or are they superior? Do they need rights? You know, it's a new species of Homo sapiens spawned intelligence, so maybe they need—I mean, I think the RL guy thinks they think we need rights for these things. I have a question to that: how do you create the right incentive mechanisms for humanity to actually create it in a way that gives it the freedoms that we have as Homo sapiens? You know, I feel like in some sense those are the kind of questions that people should be reflecting on more." A: "To your question about what incentive structure should we create, I don't feel that I know. I don't feel confident answering questions like this because it's like you're talking about creating some kind of a top-down structure government thing. I don't know, it could be a cryptocurrency too. Yeah, I mean there's BitTensor, you know those things. I don't feel like I am the right person to comment on cryptocurrency, but you know there is a chance by the way that what you're describing will happen, that indeed we will have in some sense—it's not a bad end result if you have AIs and all they want is to coexist with us and also just to have rights. Maybe that will be fine. But I don't know, I mean I think things are so incredibly unpredictable, I hesitate to comment, but I encourage the speculation." R: "Thank you, and yeah, thank you for the talk, it's really awesome."