AGI is potentially very different from Yudkowsky-style superintelligence
TL;DR: I can envision an AGI like ChatGPT that can produce intelligent output for 10 billion more inputs and still not be a superintelligence.
For the sake of shorthand, I use the term "AGI" for a ChatGPT-style AI, and "superintelligence" for a Yudkowsky-style, living-organism-like AI.
The differences will become apparent below.
To explain this: I believe a Yudkowsky-style superintelligence needs four features. I will list them up front and then go through each in turn.
1) Some kind of processing unit, like a brain or a digital neural network, that can produce intelligent output from given inputs
2) Some way of producing those outputs, either through digital-only means like a chat interface or a Python script, or through actuators/limb-like things in the physical world
And then most important, the last two:
3) Some way of stably updating the structure and model of the processing unit in (1), often, but not always, in accordance with stimuli returned from actions performed in (2)
4) A physically built-in alternation between what I call "Stasis" and "Change" - the Stasis/Change cycle (this is arguably the most important)
My contention is that ChatGPT and other AI models have #1 and some of #2, but very little to none of #3 and none of #4.
OK, to start with:
#1 I don't think this needs a huge amount of explanation, but just to get the basics out of the way: my main definition of intelligence in this context is that the AI produces intelligent output from an ever-increasing number of different inputs. Right now it is mostly humans who determine what counts as intelligent output, and there is some discussion of a "pure intelligence" that doesn't produce output, or that consists merely in the inherent potential for intelligence in its internal structure. But presumably, if an AI can produce intelligent output for an ever-increasing number of inputs, its internal model must in some way be intelligent, as we humans define it.
#2 This is a fixed set of actuators/interfaces to the external world, whether digital or physical, through which the AI can produce its output. The important thing here is that I can envision a ChatGPT 10 in 20 years that has an ever-increasing amount of both #1 and #2, but because it doesn't have #3 or #4, it will never be anything more than ChatGPT is now - a sort of static intelligence that produces output in a reliable manner but never alters itself, its models, or its outputs, and is essentially "non-living".
#3 OK, so now the important bits - I believe what differentiates us humans, and all organisms, from other physical matter in the universe, is that we are some sort of physical structure that wants to maintain the integrity and security of its own body, and that we have evolved mechanisms to detect disturbances in that body and then perform actions to alleviate and fix those disturbances. When we are hungry, or sleepy, or even when we feel our egos are damaged in the social sphere, those are detected as disturbances to the integrity of the body, and we have various evolved intelligent behaviors to fix and/or alleviate them. More about this in a minute.
#4 Following directly from #3, I believe the human brain and body work just like DNA and its mutations. In theory, DNA is never supposed to change - it is supposed to perfectly recreate the exact phenotype it encodes - but because physical reality is not stable, mutations occur. In theory this is a bug, but given the way things evolved, it has arguably become a feature. The same is true for the human brain and the other functions of any organism: the dream state of an organism is to maintain Stasis forever and return to having no disturbances, but since the physical world doesn't work that way, every stimulus an organism receives and every action it performs can change the brain, the environment, and the physical body. This leaves the "Stasis"/"DNA" part of the brain forever in flux, even though that flux is not really part of the theoretical design of an organism, whose only goal is to maintain perfect stasis with no internal change.
The gift, though, and the thing that AIs don't have, is that evolution seems to have found a near-perfect balance between Stasis and Change. The brain is just plastic enough to deal with change, but also stable enough to maintain Stasis without any single stimulus damaging all its accumulated functionality. This balance may even have been naturally selected: maybe there were earlier forms of life that had DNA or other molecules to maintain their integrity, but they died out because they were either too plastic, or too hard to change enough or fast enough. So the physical stuff that makes up the brain and the DNA is just hard enough, but also just soft enough, to enable a good Stasis/Change cycle.
Compare this to an AI like ChatGPT: its neural network is brittle, and it has no natural constraint that lets it accumulate functionality while still enabling change. It also has no natural flow between wanting to maintain Stasis and having a set of outputs for correcting disturbances to its "body", which is what would enable a selection process on its actions and mental model. For one thing, in a digital computer there is no "Stasis", no natural state - it can literally represent anything as far as I can see, and even a very small change can knock the entire balance of weights out of order. And there is no inherent logic of the form "I took an action to correct a disturbance in my bodily state, and it had outcome X, which deviated by some amount from the result state I expected from my action".
I think it is possible to create this in principle, though. If you create an AI that monitors its own state and then takes actions to fix disturbances to that state, that would be a start. But even in that scenario, you still need a natural connection/logic between actions and the updating of the internal model, and an updating process that doesn't allow too much change while also not being too static. And in a digital computer, where any change is allowed and there are no physical constraints like in the human body, you would need some kind of theoretical model from which to derive the balance between Stasis and Change, which seems like a hugely complicated undertaking. If you don't have this, you only have what I think we have now: a very intelligent static system that never changes or updates itself, one that only repeats the same kind of output and is limited by the state of its own neural net/model at all times.
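To make the idea slightly more concrete, here is a minimal toy sketch in Python of what such a loop could look like. Everything in it - the set point, the "plasticity limit", the agent and environment functions - is my own hypothetical illustration, not an existing system or API: an agent monitors one internal state variable, acts to pull it back toward a set point (Stasis), compares the expected and actual result of each action, and updates its internal model only within a clipped bound, so no single surprising outcome can rewrite what it has accumulated (bounded Change).

```python
import random

# Hypothetical toy sketch (my own illustration, not an existing system):
# a "homeostatic" agent that detects disturbances to one internal state
# variable, acts to restore a set point, compares expected vs. actual
# outcomes, and updates its model within a clipped plasticity bound.

SET_POINT = 0.0          # the bodily state the agent tries to maintain (Stasis)
PLASTICITY_LIMIT = 0.05  # maximum model change per step (bounded Change)
LEARNING_RATE = 0.3
TRUE_EFFECT = 0.8        # how much one unit of action really moves the state

class HomeostaticAgent:
    def __init__(self):
        # Internal model: how much the agent believes one unit of action
        # moves its state. It starts out wrong (1.0 instead of 0.8).
        self.effect_estimate = 1.0

    def choose_action(self, state):
        # Act to cancel the detected disturbance under the current model,
        # and record what the agent expects the resulting state to be.
        disturbance = state - SET_POINT
        action = -disturbance / self.effect_estimate
        expected_state = state + self.effect_estimate * action  # == SET_POINT
        return action, expected_state

    def update_model(self, state_before, action, actual_state):
        # Learn from the gap between prediction and outcome, but clip the
        # step so no single stimulus can wipe out accumulated functionality.
        if abs(action) < 0.05:
            return  # too small an action to learn from reliably
        observed_effect = (actual_state - state_before) / action
        raw_update = LEARNING_RATE * (observed_effect - self.effect_estimate)
        clipped = max(-PLASTICITY_LIMIT, min(PLASTICITY_LIMIT, raw_update))
        self.effect_estimate += clipped

def disturb(state):
    # The environment knocks the "body" away from its set point.
    return state + random.uniform(-1.0, 1.0)

def apply_action(state, action):
    # The world responds to the action, but less effectively than the
    # agent's initial model assumes.
    return state + TRUE_EFFECT * action

agent = HomeostaticAgent()
state = SET_POINT
for step in range(20):
    state = disturb(state)                          # a disturbance arrives
    before = state
    action, expected = agent.choose_action(state)   # act to restore Stasis
    state = apply_action(state, action)             # the world responds
    deviation = state - expected                    # how far the outcome missed the prediction
    agent.update_model(before, action, state)       # bounded update (Change)
    print(f"step {step:2d}  deviation={deviation:+.3f}  model={agent.effect_estimate:.3f}")
```

In this toy, the entire Stasis/Change balance lives in one hand-picked constant (PLASTICITY_LIMIT), and that is exactly the part evolution seems to have gotten "for free" from physics and selection - and the part that seems so hard to derive from theory for a digital system that can represent anything.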
But this is potentially a huge win, at least compared to the Yudkowsky-style superintelligence that kills us all, because it means we can have an AI that produces intelligent outputs for a billion more inputs and is still only a static AGI, so to speak.
And this brings me to my last point: I think one reason humans are particularly "special" compared to all other animals is that a lot of our intelligence is socially evolved. Probably at some point in the past there was selection pressure against other primates, and human groups who were more attentive to novel stimuli, and particularly to other humans, were able to communicate better and create more complex group plans for winning battles for resources against other species of apes. The other primates just weren't able to coordinate in the same way, and so humans came out on top, probably. What came out of this is that humans are now particularly interested in other humans, and one vector for that interest is novelty. No other system on earth creates more novelty than humans, and when status and your belonging in a group are directly tied to how much novelty you can create, we are - doomed or blessed, depending on how you look at it - bound to always seek novel stimuli.

This leads to a "self-fulfilling prophecy" where we change our environment because we want stimuli, but then we have to deal with the consequences, and I don't think organisms in general evolved to do this. We did not evolve, even as humans, to create and change our environment; we only evolved to adapt to it and to detect changes to our bodily integrity. That's why I think most other animals _are_ like ChatGPT is now: they don't have #3 and #4 to nearly the same degree humans do. But this also means there won't be nearly as much conflict with a potential 1000x smarter AGI if it is like other animals and not interested in novelty, because a lot of modern conflict comes from disagreement about what to maintain Stasis about. So the challenge of solving superintelligence is basically to control what Stasis it is working towards (or to give it no Stasis at all).