Hacker News

With the advancements of AI in the past year alone, it seems silly to think that, within a lifetime, AI won't have the ability to manifest society-collapsing contagion. AI is certainly going to be granted more network access than it currently has, and the feedback loop between AI, people, and the network is going to grow exponentially.

Reduced to the sum of its parts, the internet is less than a magpie, yet viruses and contagions of many forms exist in it, or are spread through it. ChatGPT 2.0 greatly amplifies the effects of those contagions, regardless of our notions of what intelligence or agency actually are.




Innovation doesn’t follow a predictable path; discovery is messy. No matter how much we advance toward smaller chips, we are never going to get to 0nm, for example.

There are limits, but even if there weren’t, we’re no closer to AGI today than we were a year ago. It’s just a different thing entirely.

LLMs are cool! They’re exciting! There should be rules around their responsible operation! But they’re not going to kill us all, or invade, or operate in any meaningful way outside of our control. Someone will always be responsible for them.



