>Effective accelerationism aims to follow the “will of the universe”: leaning into the thermodynamic bias towards futures with greater and smarter civilizations that are more effective at finding/extracting free energy from the universe and converting it to utility at grander and grander scales
Suppose your mom has cancer. She goes to the doc. The doc tells her that he *could* treat her cancer, but it wouldn't be the right thing to do: Your mother is middle-aged, and it is the "will of the universe" that she is consumed by cancer and passes away, so that the essential nutrients in her body (e.g. nitrogen, carbon, phosphorus) can cycle through the ecosystem and serve as building blocks for newer and younger living things. The doc doesn't have much to say about cancer. Instead he spends the entire visit giving your mom lots of scientific details related to essential environmental processes such as nitrogen mineralization that your mother's decaying body can contribute to as it lies 6 feet beneath the earth.
Do you find the doc's argument persuasive?
e/acc just might be the philosophy of the future; hopefully it catches on. I personally agree with the application of entropy.
⚡️🚀
These "stop fighting the will of the universe, embrace it" ideas are interesting.
For example, stop fighting viruses. The most virulent viruses spread across the population best. This is evolution. This is the will of the universe. Stop getting vaccinated and instead design the most virulent bioweapon you can make. Wait WHAT.
Gravity is a fundamental law. Gravity pulls things down. Let's knock down all tall buildings, and then drop the earth into the sun. Wait WHAT.
Humans exist. Humans are part of the universe. Humans can and do fight against any process that threatens their survival and wellbeing. Whether that's fighting viruses with vaccines, or trying to fight against self-replicating AI nanobots that want to disassemble us.
Our utility functions don't have to be simple, and mine isn't.
I know that I am far from the theoretical limits. I know I could never survive in economic competition among AI with nanotech. And so, I try to aim for a world that doesn't have economic competition between AI with nanotech.
If we take our time to solve alignment, we can program in whatever else we like. We can program the AI to fill 99% of the universe with hyper-optimizing superintelligences, and leave 1% for the humans to enjoy, if that is what we want to do. We can ask it to check that zombie AI are actually at a competitive disadvantage. (I am not convinced.) We can take our time deciding: even 1000 years of careful discussion costs far less than a fraction of a percent chance of making a mistake, of turning the whole universe into the wrong thing.
I agree that current central planners are stupid (see the USSR). Capitalism works better.
The thing is, central planning scales with intelligence. The smarter the central planners, the smaller the amount of resources they waste. But several minds can waste lots of resources on competition, even if all of them are vastly superintelligent. (Unless they don't compete, but then they are basically working together as a single central planner.)
You seem to think utility monsters are real. (Utility monsters are philosophical beings who get such vast amounts of utility from their resources that we should gladly exterminate humanity to give them marginally more.) What's more, you seem to think utility monsters are an inevitable result of economic competition.
I think you might be misunderstanding the intent.
> stop fighting viruses. The most virulent viruses spread across the population best. This is evolution. This is the will of the universe.
The idea is *not* to just let things happen because "it's the will of the universe".
It's to recognize that life (or as described here: The thing that captures free energy to preserve its own state of matter, increasing entropy elsewhere) is the thing worth expanding.
So anything that increases this thing, is good. (Humans have observably the best chance of making life multi-planetary.)
Anything that detracts from that goal is bad (viruses that wipe out humans arguably reduce our chances of expanding life).
I agree that AGI is not as clear-cut as this essay makes it out to be. AI would need to have a truly embedded *want* to self-replicate over as long a time-span as possible - something on which DNA-based life has a head start of millions of years.
> We can take our time deciding: even 1000 years of careful discussion costs far less than a fraction of a percent chance of making a mistake, of turning the whole universe into the wrong thing.
This is actually fair. However, consider the possibility of natural or (other) man-made disasters in the meantime, that collapse civilization. Maybe we don't have a thousand years.
We can only ever know in hindsight.
> The thing is, central planning scales with intelligence. The smarter the central planners, the smaller the amount of resources they waste.
The more capable the technology we can work with, the more capable the central planners. Sure. BUT, the more capable our technology, the more intricate are the trade networks and utility functions.
Central planners will always be behind.
> You seem to think utility monsters are real.
You seem to think he is trying to maximize utility. I think he wants to maximize what I call life, what might be called "local entropy reduction", or what he calls "The thing that captures free energy to preserve its own state of matter".
(more precisely he says
> "matter reconfigures itself such as to extract energy and utility from its environment such as to serve towards the preservation and replication of its unique phase of matter".
I don't think the emphasis is on the utility here, but instead on the preservation and replication)
Cheers, Friend
I hope you guys colonise space and leave this beautiful planet. Nuts like you are destroying it in the first place.
They're just schizophrenics typing mad shit on the internet. Don't worry. This guy isn't doing anything that actually involves Crooks fluctuation theorem in real life.
It is mad funny though lol
This is a striking example of reasoning by analogy, recognized by Aristotle as among the least reliable forms of argument. Among other things, it is used to establish that aliens must have visited the Mayans because some of their clay figures look like airplanes. Pass.
So taping humanity's foot to the gas pedal, putting on a blindfold and letting go of the steering wheel, hoping you don't drive off a cliff, in pursuit of the desire to witness a tiny bit more of what the future has in store, got it 👍
"No need to worry about creating “zombie” forms of higher intelligence, as these will be at a thermodynamic/evolutionary disadvantage compared to conscious/higher-level forms of intelligence"
Just because something fits neatly into your philosophy doesn't mean it's true.
"e/acc idealists will be overtaken by e/ass, effective assassinationism, the philosophy that truly rules the world." - Hassan at-Tanstagi, Church of Don 3.0
Why does e/acc feel like a new-age, technofuturist repackaging of ye good ol’ free market capitalism (which has proven to generate perverse incentives that serve the few over the many)?
Maybe because he literally says Capitalism is intelligence.
Dumb is a form of intelligence too...and willfully dumb (aka arrogance) is where harm, neglect, and destruction go to party.
Free energy is conditional y'all. They only seek to manipulate the conditions for their gain. Dumb intelligence at its highest - and accelerating.
"Consciousness is posited as a natural limit of intelligence beyond a certain threshold of scale/hierarchies of meta-optimization of cognition; a simple phase transition achievable by more scale and more optimization/evolution"
What? Why? Why would you need qualia for optimization? Assuming it takes resources wouldn't you be more energy efficient without qualia?
"In a capitalist system, these meta-organisms compete for resources, as such, typically resources are dynamically assigned towards meta-organisms that have utility to the meta-meta-organism that is our civilization"
What? Wouldn't it be first come, first served? If I can privatize a source of finite natural resources, I could extract them even if it wouldn't be good for the world as a whole. E.g. I can extract all the oil in a certain area for private gain even if I use it inefficiently and release a bunch of pollution while doing so.
Capitalism incentivizes underinvesting or even destroying public goods (utility for the meta-meta-organism) in favor of private goods (utility for the meta-organism).
Great write-up, I look forward to exploring the details further. For sure, a crème de la crème philosophy for a positive future
Most interesting, though I wonder at the following:
"e.g. a new technological paradigm emerges, letting the free market find how to extract utility from this said technology would be the best way to proceed, much better than fear-mongering"
What logic/instruments do you propose ought to intercede in such an instance - and there have been quite a few in recent times - wherein the free market derives utility from a new technology in a way that, though it satisfies market imperatives, does not have positive utility value when mapped against, for instance, outcomes in human wellbeing?
Or do you not believe this is possible - that the will of markets and "the will of the universe" are as one and can only ultimately lead to positive outcomes (if so, I congratulate Smith and Hegel on the birth of their new bouncing baby)?
This is such a wildly inhuman take. Trying to justify deregulation of a society-shifting technology because *takes a bump of ketamine* ‘the universe wants it’.
Here are some first-principles takes based on our actual human experience:
1. Will it drastically increase the power of surveillance and the methods of control that industry and government have over mass populations - yes
2. Can it easily make pornography of underage people without their consent - yes
3. Does it require orders of magnitude more energy and infrastructure than anything that’s come before it - yes
4. Is it currently using IP without consent while simultaneously reducing the value of that IP (think AI music) - yes
5. Will it ultimately have massive impacts on employment and lead to mass job displacement - yes
The list goes on…
The technology elite have everything to gain by peddling their nonsense reasons for deregulating AI.
The reality is that common citizens and communities of regular people have everything to lose by not taking a strategic approach to AI development.
I would be happy to debate you on this topic, as a sort of token representative of Yudkowsky's views. Please contact me if this would be of interest.
Please see the following posts for my own physics-based theories of ethics:
https://bittertruths.substack.com/p/ethicophysics-i
https://bittertruths.substack.com/p/ethicophysics-ii-affilliation-economics
https://bittertruths.substack.com/p/contra-beff
suicide-inducingly bad content