It's possible we'll see AI become increasingly AGI-like in some ways but not in others. For example, an AI that can make novel scientific discoveries but can't write a song as emotionally powerful as your favorite musician's.
More importantly, there are many ways AI can appear to be getting more intelligent without making any real progress in that direction. That's a real concern. As a silly example, we could try to "make a duck" by building an animatronic. You could make this thing look very lifelike and fool ducks and humans alike (we already have this, btw). But that's very different from being a duck. Even if it were indistinguishable until you opened it up, progress on the animatronic would not necessarily be progress towards making a duck (though it wouldn't necessarily not be, either).
This is a concern because several top researchers -- at OpenAI -- have explicitly stated that they think you can get AGI by teaching the machine to act as human as possible. But that's a great way to fool ourselves, just as a duck may fall in love with an animatronic and never realize the deceit.
It's possible they're right, but it's important that we realize how this metric can be hacked.
Ironic, because it's actually exactly the opposite. In the same way that 1,000,000 monkeys on typewriters will eventually write Shakespeare, AI is plenty capable of creating "art".
It is, however, currently completely unable to "think". It can't make a novel scientific discovery because it can't even add 2 + 2. It can give you the most common answer to "what is 2 + 2", but it's not actually pulling up the calculator app and doing the computation; it's just giving the most probable answer.
And even if it could pull up a predefined list of apps to double-check its work, that still isn't AGI.
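To make the distinction concrete, here's a toy sketch (purely illustrative; the "model" below is a hypothetical stand-in, not how any real LLM works) of "most likely remembered answer" versus "actually run the computation with a tool":

    # Toy illustration only: contrasts recalling the statistically common answer
    # with actually doing the arithmetic, the way a "calculator app" would.
    import operator

    def llm_most_likely_answer(question: str) -> str:
        # Hypothetical stand-in for a model that just emits the most probable
        # continuation it has seen; the second entry is wrong on purpose.
        memorized = {"what is 2 + 2": "4", "what is 17 * 23": "411"}
        return memorized.get(question.lower(), "I'm not sure")

    def calculator_tool(expression: str) -> str:
        # Actually computes the result.
        a, op, b = expression.split()
        ops = {"+": operator.add, "-": operator.sub, "*": operator.mul}
        return str(ops[op](int(a), int(b)))

    print(llm_most_likely_answer("what is 17 * 23"))  # plausible-looking, but wrong
    print(calculator_tool("17 * 23"))                 # 391, computed for real

Wiring in tools like that makes the answers more reliable, but it doesn't change the point above: checking your work against a calculator isn't the same thing as general intelligence.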
This I'm very sure will be the case, but everyone will still move the goalposts and look past the fact that different humans have different strengths and weaknesses too.
A tone-deaf human, for instance.
There is another term for moving the goalposts: ruling out a hypothesis. Science is, especially in the Popperian sense, all about moving the goalposts.
One plausible hypothesis is that fixed neural networks cannot be general intelligences, because their capabilities are permanently limited by what they currently are. A general intelligence needs the ability to learn from experience. Training and inference should not be separate activities, but our current hardware is not suited for that.
If that's the case, would you say we're not generally intelligent, since future humans will tend to be more intelligent?
That's just a timescale issue: if the learned experience of GPT4 is fed into the training of GPT5, then GPTx (i.e. all of them taken together) can be said to be a general intelligence. Alien life, one might say.
Every problem is a timescale issue. Evolution has shown that.
And no, you can't just feed GPT4 into GPT5 and expect it to become more intelligent. It may become more accurate, since humans are telling it which conversations went wrong, but you will still need advances in the algorithms themselves to take things forward.
All of which takes us back to lots and lots of research. And if there's one thing we know, it's that research breakthroughs aren't guaranteed.
I think you missed my point slightly; probably my explanation's fault, sorry.
I mean timescale as in between two points in time. Between those two points, it meets the intelligence criteria you mentioned.
Feeding human-vetted GPT4 data into GPT5 is no different from a human receiving inputs from its interactions with the world and learning. More accurate means smarter: gradually its intrinsic world model improves, as does its reasoning, and so on.
I agree those are the things that will advance it, but taking a step back, it potentially meets those criteria even if it's less useful day to day (given it's an abstract viewpoint over time and not at the human level).
Natural selection results in the success of species that do some pretty brutal things. Natural selection also applies to religions, governments, and startups.
It has also become quite easy, when sanding a UI, to provide the code to an LLM and describe something like a click dead zone, and it can usually fix it immediately without you needing to investigate. The missing piece is really just the glue between the browser, the coding environment, and the LLM. I know a bunch of YC startups are working on this, but nothing has really worked well for me. Please recommend anything you are using that does in fact provide this type of glue...
I love using https://aider.chat/
When you say glue between the browser, though, I'm not sure if you're looking for something to automatically watch your browser's behavior. For now you can just pass screenshots to aider through the clipboard.
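If you want something more scriptable than copy/paste, here's a rough sketch of the kind of glue I mean. It assumes Playwright and a dev server at http://localhost:3000, both of which are just placeholders for whatever your setup actually is:

    # Rough sketch: grab a screenshot of the running app so it can be handed
    # to aider (or any LLM chat) alongside the relevant code.
    # Assumes `pip install playwright` and `playwright install chromium`.
    from playwright.sync_api import sync_playwright

    def capture_ui(url: str = "http://localhost:3000", out: str = "ui.png") -> str:
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto(url)
            page.screenshot(path=out, full_page=True)
            browser.close()
        return out

    if __name__ == "__main__":
        print("saved", capture_ui())

From there the screenshot just goes into the chat as context; the part nobody seems to have nailed yet is closing that loop automatically.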
By the way, there is an excellent film called Rebel Ridge, now on Netflix, whose basic premise is small-town police abusing the practice of civil asset forfeiture.
If that comforts you (I know it doesn't), Swiss police can take any money by claiming it touched cocaine, so it was maaayyyybe used in trafficking. Stats say about 90% of banknotes qualify, so it's a sure shot for them. Even if you're cleared of the accusations, they will STILL keep your money.
Netflix has been recommending that movie to me, and it looks excellent, but I know that if I watch it, it's gonna end with me throwing the TV out the window in rage.
I know it's just a figure of speech, but I'd also like to add a reminder in case someone needs to see it: channeling that same rage into investing time and effort into making a change is the very reason such an enraging film gets made and why it should be watched. Just raging into the wind is pointless, especially after choosing not to invest anything into making a change.
Setting aside the part about addressing anxiety and avoidance, a lot of good code review practice boils down to "could/should/must": call these out when you see them and label them appropriately, and when receiving a code review, understand the difference between them.
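As a made-up example of what those labels look like on actual review comments (the specifics are invented):

    must:   this dereferences a null when the list is empty -- add a guard before merging
    should: this parsing logic is duplicated in three handlers -- extract a helper
    could:  `tmp` would read better as `pending_jobs`

The reviewer's intent is unambiguous, and the author knows which comments block the merge and which are take-it-or-leave-it.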
Best? Sorry, but I think that's a poor one, because if someone else drinks your milkshake, you don't have it anymore, which isn't the case here.
Of course, if someone copies your data, you still have your data, but its value has decreased because its scarcity is reduced.
> Buying GPUs now when they're highly-available is a hedged-bet against a worse GPU shortage than ever seen before in the face of unprecedented territorial aggression.
This makes it occur to me that if you've been hoarding a huge number of GPUs, it may actually be in your interest to encourage the events that would lead to a subsequent GPU shortage.