Machines of loving grace: How AI could transform the world for the better (darioamodei.com)
157 points by jasondavies 34 days ago | 129 comments



One of the sad things about tech is that nobody really looks at history.

The same kinds of essays were written about trains, planes and nuclear power.

Before Lindbergh went off the deep end, he was convinced that "airmen" were gentlemen who could sort out the world's ills.

The essay contains a lot of coulds, but doesn't touch on the base problem: human nature.

AI will be used to make things cheaper. That is, lots of job losses. Most of us are up for the chop if/when competent AI agents become possible.

Loads of service jobs too, along with a load of manual jobs once suitable large models are successfully applied to robotics (see ECCV for some idea of the progress in machine perception).

But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

Well, AI is going to make that worse. It'll cause huge unrest (see the Luddite riots, Peterloo, the birth of unionism in the USA, plus many more).

This brings us to the next thing that AI will be applied to: Murdering people.

Anduril is already marrying basic machine perception with cheap drones and explosives. It's not going to take long to get to personalised explosive drones.

AI isn't the problem, we are.

The sooner we realise that it's not a technical problem to be solved but a human one, the better chance we stand.

But looking at the emotionally stunted empathy vacuums that control either policy or purse strings, I think it'll take a catastrophe to change course.


We are entering a dystopia and people are still writing these wonderful essays about how AI will help us.

Microtargeted psychometrics (Cambridge Analytica, AggregateIQ) have already made politics in the West an unending barrage of information warfare. Now we'll have millions of autonomous agents. Soon, our entire feed will be AI content, or upvoted by AI, or shaped by AI manipulating the algorithm.

It's like you said - this essay reads like peak AI. We will never have as much hope and optimism about the next 20 years as we seem to have now.

Reminds me of some graffiti I saw in London, while the city's cost of living was exploding and making the place unaffordable to anyone but a few:

"We live in a Utopia. It's just not ours."


You're looking at it from a narrow perspective.

There are millions of middle class households living pretty comfortable lives in Africa, India, China, ASEAN, and Central Asia that were living hand-to-mouth 20 years ago.

And I don’t mean middle class by developing country standards, I mean middle class by London, UK, standards.

So it pretty much is a ‘utopia’ for them, assuming they can keep it.

Of course that’s cold comfort for households in London regressing to the global average, but that’s the inherent nature of rising above and falling towards averages.


But now you are missing the GP's point.


How so?


Is such certainty warranted? I don’t think so; it strains credibility.

I’m very concerned about many future scenarios. But I admit the necessity of probabilistic assessments.


The author makes it very clear in the article that AI can cut both ways. See the intro; you may have overlooked it.


“Dystopia is coming.” is just the inverse of “AI/web apps will save us!”

IMO it's already pretty dystopian to demand people prostrate themselves to flip my burger, which I can do myself; we're living in the memory of the last 70 years rather than letting those two adults pick what to do with their day.

If you'd all like to go DIY more rather than rely on overseas sweatshop workers, that would help. Despite HN being smarties by education, you're all still just one meat suit out of billions; none of us have defined new axioms of math or linked relativity and quantum mechanics.

None of the philosophy is ever "real"; just lame office workers stroking their human egos in each other's faces until we die.


As long as there is money to be made or power to be gained, any technology will be used to get it, especially if the negative effects are externalities that the technology developers and users are not liable for.


I do not agree with the following:

> But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

I am, however, criticizing this in isolation — that is, my goal is not to invalidate (nor validate, for that matter) the rest of your text; only this specific point.

So, I do not agree. We are clearly working far fewer hours than 120 or even 60 years ago, and we are getting a lot more back for it.

The problem I have with this is that the framing is often wrong — whether some number on a paycheck goes up or down is completely irrelevant at the end of the day.

The only relevant question boils down to this: how many hours of hardship do I have to put in, in order to get X?

And X can be many different things. Like, say, a steak, or a refill at the gas station, or a loaf of bread.

Now, I do not have very good data at hand right here and right now, but if my memory and gut feeling serve me right, the difference is significant, often even dramatic.

For example, for one kilogram of beef, the average German worker needs to toil about 36 minutes nowadays.

In 1970, it was twice as much time that needed to be worked before the same amount of beef could be afforded.

In the seventies, Germans needed to work 145 hours to be able to afford a washing machine.

Today, it’s less than 20 hours!
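To make the "hours of toil per good" arithmetic concrete, here is a minimal sketch; the wage and price figures are illustrative assumptions picked to match the rough numbers above, not official statistics:

    # Work-time price of a good: how long you must work to afford it.
    # Wage and price below are rough illustrative figures, not official data.
    def minutes_of_work(price_eur: float, hourly_wage_eur: float) -> float:
        return price_eur / hourly_wage_eur * 60

    # Illustrative: 1 kg of beef at ~15 EUR on a ~25 EUR/h gross wage.
    print(round(minutes_of_work(15, 25)))        # -> 36 minutes
    # Illustrative: a ~500 EUR washing machine on the same wage.
    print(round(minutes_of_work(500, 25) / 60))  # -> 20 hours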

And that's not even taking into account the amount of "more progress" we can afford today, with less toil.

While one can imagine that in 1970, I could theoretically have something resembling a smartphone or a lane- and distance-keeping car getting produced for me (by NASA, probably), I can’t even begin to imagine how many hours, if not millennia, I would have needed to work in order to receive a paycheck that would have paid for it.

We get SO much more for our monthly paycheck today, and so many more people do (billions actually), it’s not even funny.


> We get SO much more for our monthly paycheck today

Many goods are cheaper, and electronics are vastly better, but housing, health care, and higher education costs have outpaced inflation for decades.


We get more material goods, to be sure. But do we get more satisfaction out of our lives? More happiness? More joy?


Wage increases have been lagging productivity increases for about 60 years.[1]

For the past 35 years or so, automation seems to have driven wage inequality, and AI is headed down the same path.[2,3]

[1] https://www.epi.org/productivity-pay-gap/

[2] https://news.mit.edu/2020/study-inks-automation-inequality-0...

[3] https://www.technologyreview.com/2022/04/19/1049378/ai-inequ...


Note that the famous EPI graph in your first source appears to be controversial with mainstream economists: https://www.reddit.com/r/AskEconomics/comments/12kk79k/what_...

(However, there seem to be other sources that say the same thing, despite the criticism of EPI's graph. So I'm not sure. It seems to be the consensus that pay has been stagnating or not keeping up with productivity for low-skilled workers, though? But I can't find much comparing wages or compensation to productivity for high-skilled workers [e.g. programmers, AI engineers].)

Furthermore, your comment focused on wages, whereas your parent comment specifically claimed that wages were less important than "how much work it takes to buy something similar, like steak, compared to the past". It misses the point.

Finally, I admit I haven't seen the latter two sources, but in my layman's opinion, inequality isn't necessarily bad if all needs are met.


> whereas your parent comment specifically claimed that wages were less important than "how much work it takes to buy something similar, like steak, compared to the past". It misses the point

The story is better on the consumer-goods side (vs. housing, health care, higher education...). Consumers would probably be in much worse shape without cheap goods from the likes of Walmart, Ikea, Amazon, etc.

> inequality isn't necessarily bad if all needs are met.

The interesting thing about the progress of automation is that it's moving up in terms of job salary, while a smaller number of new jobs is created at the top and at the bottom.


You are ignoring the poorest half of the planet. How long does a Bangladeshi woman have to work to buy a washing machine? How many of them will ever own one? And who made the washing machine you own? Was it made in your country? Or in China?


You are right, but the average person is less happy than in 1970.


Average meaning not gay, not an ethnic minority, and not living in China or the Soviet Union?


Yes. Good point. I do mean that. The average American white person is less happy.


A lot of people actually liked the Soviet Union, post-Stalin.


Source?



> The essay contains a lot of coulds, but doesn't touch on the base problem: human nature.

> AI isn't the problem, we are.

I think when we frame it as human _nature_, then yes, _we_ look like the problem.

But what if we frame it as human _culture_? Then _we_ aren't the problem, but rather our _behaviors/beliefs/knowledge/etc_ are.

If we focus on the former, we might just be essentially screwed. If we focus on the latter, we might be able to change things that seem like nature but might be more nurture.

Maybe that's a better framing: the base problem is human nurture?


I think this is an important distinction. Yes, humans have some inbuilt weaknesses and proclivities, but humans are not required to live in or develop systems in which those weaknesses and proclivities are constantly exploited for the benefit/power of a few others. Throughout human history, there have been practices of contemplation, recognition of interdependence, and ways of increasing our capacity for compassion and thoughtful response. We are currently in a biological runaway state with extraction, but it's not the only way humans have of behaving.


> Throughout human history, there have been practices of contemplation, recognition of interdependence, and ways of increasing our capacity for compassion and thoughtful response.

Has this ever been widespread in society? I think such people have always been few and far between.


The example that comes to mind is post-WW2 Germany, but that was apparently a hard slog to change the minds of the German people. I really doubt any organization could do something similar against the companies (and their resources) behind and using AI.


You are living in it.

The default state is to have extremely poor, hard-working people and extremely rich non-working ones.

No one would have dared to dream of the luxury working people enjoy today. It took some doing! We used to sell people not too long ago. Kids in coal mines. The work week was 6-7 days of 12-14 hours. One coin per day, etc.

The fight isn't over; the owner class won the last few rounds, but there remains much to take for either side.


What does "practices of contemplation, recognition of interdependence, and ways of increasing our capacity for compassion and thoughtful response" have to do with luxury? It sounds like you're arguing for the opposite. Scrolling Instagram isn't the contemplation I have in mind.


Sure. But why do you think changing human nurture is any easier than changing human nature? I suspect that as your set of humans in consideration tends to include the set of all humans, the gap between changeability of human nature vs changeability of human nurture reduces to zero.

Perhaps you are implying that we sign up for a global (truly global, not global by the standards of Western journalists) campaign of complete and irrevocable reform in our behavior, beliefs and knowledge. At the very least, this implies simply killing off a huge number of human beings who for whatever reason stand in the way. This is not (just) a hypothesis -- some versions of this have been tried and tested. *

* https://en.wikipedia.org/wiki/Totalitarianism


Arguably, human nature hasn't changed much in thousands of years. But there has been plenty of change in human culture/nurture on a much smaller timescale. E.g., look at a graph of world literacy rates since 1800. A lot of human culture is an attempt to productively subvert or attenuate the worse parts of human nature.

Now, maybe the changes in this case would need to happen even quicker than that, and as you point out there's a history of bad attempts to change cultures abruptly. But it's nowhere near correct to say that the difficulty is equal.


In general I think concepts like politics, art, community, etc. try to capture certain discrete ways we are all nurtured. I'm not even sure of your point here: there is nothing more totalitarian than reducing people to their "nature"; that such a reduction is possible is arguably totalitarianism's precise conceit. And the fact that totalitarianism is constantly accompanied by force and violence seems to be the biggest critique you can make of all sorts of "human nature" reductions.

And what is even the alternative here? What's your freedom of belief worth when you're essentially just a behaviorist anyway?


> I think when we frame it as human _nature_, then yes, _we_ look like the problem.

> But what if we frame it as human _culture_? Then _we_ aren't the problem, but rather our _behaviors/beliefs/knowledge/etc_ are.

> If we focus on the former, we might just be essentially screwed. If we focus on the latter, we might be able to change things that seem like nature but might be more nurture.

> Maybe that's a better framing: the base problem is human nurture?

This is about the same as saying that leaders can get better outcomes by surrounding themselves with yes-men.

Just because asserting a different set of facts makes the predicted outcomes more desirable, doesn't mean that those alternate facts are better for making predictions with. What matters is how congruent they are to reality.


> AI isn't the problem, we are.

I see major problems with the statement above. First, it is a false dichotomy. That’s a fatal flaw.

Second, it is not specific enough to guide action. Pretend I agree with the claim. How would it inform better/worse choices? I don’t see how you operationalize it!

Third, I don’t even think it is useful as a rough conceptual guide; it doesn’t “carve reality at the joints” so to speak.



> One of the sad things about tech is that nobody really looks at history.

First, while I often write much of the same sentiment about techno-optimism and history, you should remember that you're literally in the den of Silicon Valley startup hackers. It's not going to be an easily heard message here, because the site specifically appeals to people who dream of inspiring exactly these essays.

> The sooner we realise that it's not a technical problem to be solved but a human one, the better chance we stand.

Second... you're falling victim to the same trap, but simply preferring some kind of social or political technology instead of a mechanical or digital one.

What history mostly affirms is that prosperity and ruin come and go, and that nothing we engineer lasts for all that long, let alone forever. There's no point in dreading it, whatever kind of technology you favor or fear.

The bigger concern is that some of the achievements of modernity have made the human future far more brittle than it has been in what may be hundreds of thousands of years. Global homogenization around elaborate technologies -- whether mechanical, digital, social, political or otherwise -- sets us up in a very "all or nothing" existential space, where ruin, when it eventually arrives, is just as global. Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments robs the species of its essential fallback strategy.


“Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments robs the species of its essential fallback strategy”

While potentially true, that same wisdom was developed in a world that itself no longer exists. Review accounts of natural wildlife and ecological bounty from even 100 years ago, and it’s clear how degraded our natural world has become in such a very short time.


> Global homogenization around elaborate technologies -- whether mechanical, digital, social, political or otherwise -- sets us up in a very "all or nothing" existential space, where ruin, when it eventually arrives, is just as global.

What is the minimum population size needed in order to have, say, computer chips? Or even a ball-point pen? I'd imagine those are a bit higher than what's needed to have pencils, which I've heard is enough that someone wrote a book about it.

> Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments robs the species of its essential fallback strategy.

Is it really a "purge" if individuals are just not choosing to waste time learning things they have no use for?


> One of the sad things about tech is that nobody really looks at history.

If this were phrased as "Many proponents of technology pay little attention to societal impacts" then I would agree.

The quote above is not true in this sense: there are many technology-aware people who study history. You may already have your favorites. Off the top of my head, I recommend Brian Christian and Nick Bostrom.


We are already deep in the throes of a long, slow catastrophe which is not causing a change in course for the better. I’m afraid we can’t count on a hard stop at the end of this particular rope. Anything we can do to escape the heat-death / long-abomination outcome should be done post haste as if we’re already out of time. For so many people, we already are.


> AI will be used to make things cheaper. That is, lots of job losses. Most of us are up for the chop if/when competent AI agents become possible.

> But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

Don't you have to pick one? It seems a bit disjointed to simultaneously complain that we are all losing our jobs and that we are working too many hours. What type of future are we looking for here?

If machines get so productive that we don't need to work, everyone losing their jobs isn't a long-term problem and may not even be a particularly damaging short-term one. It isn't like we have less stuff or more people who need it. There are lots of good equilibriums to find. If AI becomes a jobs wrecking ball I'd like to see the tax system adjusted so employers are incentivised to employ large numbers of people for small numbers of hours instead of small numbers of people for large numbers of hours - but that seems like a relatively minor change and probably not an especially controversial one.


I did not argue that cogently.

At each "node" of the industrial revolution, lots of workers were displaced. (weavers, scriveners, printers, car mechanics). Those workers were highly paid because they were highly skilled.

Productivity made things cheaper to make, because it removed the expensive labour required to make it.

That means fewer well-paid jobs with controlled hours, which leads to poorly paid jobs with high competition and poor hours.

Yes, new highly paid jobs were created, but not for the people who were dispossessed by the previous "node".

> If machines get so productive that we don't need to work, everyone losing their jobs isn't a long-term problem

Who is going to pay for them? Not the people who are making the profits.


> Who is going to pay for them? Not the people who are making the profits.

They kinda have to. If I'm so productive at producing food that I grow enough for 2,000 people, I have to find 2,000 people to feed. Or the land sits fallow and I get nothing at all. There is a lot of wiggle room at the margins, but overall it is hard to get away with producing more stuff and warehousing it.

We'd expect to see a bunch of silly jobs where people are peeling grapes and cooling the wealthy with fronds; but the capitalists can't hoard the wealth because they can't really do anything with it.


"The works of the roots of the vines, of the trees, must be destroyed to keep up the price, and this is the saddest, bitterest thing of all. Carloads of oranges dumped on the ground. The people came for miles to take the fruit, but this could not be. How would they buy oranges at twenty cents a dozen if they could drive out and pick them up? And men with hoses squirt kerosene on the oranges, and they are angry at the crime, angry at the people who have come to take the fruit.

A million people hungry, needing the fruit- and kerosene sprayed over the golden mountains. And the smell of rot fills the country. Burn coffee for fuel in the ships. Burn corn to keep warm, it makes a hot fire. Dump potatoes in the rivers and place guards along the banks to keep the hungry people from fishing them out. Slaughter the pigs and bury them, and let the putrescence drip down into the earth."

John Steinbeck, The Grapes of Wrath


Yes. I used to share your viewpoint.

However, recently, I've come to understand that AI is about the inherently unreal, and that authentic human connection is really going to be where it's at.

We build because we need it after all, no?

Don't give up. We have already won.


I think KaiserPro is saying authentic human connection doesn't "pay the bills", so to speak. If AI is "about the unreal" as you say, what if it makes everything you care about unreal?


The responsibility the airmen take when they take passengers off the ground (holding their lives in their hands) is a serious one.

The types of Trump are unlikely to get a license or accumulate enough Pilot in Command hours and not have an accident, and the experience itself changes the person.

If I have a choice of who to trust, between an airman or not airman, I’d likely choose an airman.

And I'm not sure what you are referring to about Lindbergh, but among other things he was a Pulitzer Prize winning author and environmentalist, and following Pearl Harbor he fought against the aggressors.


You neatly sum up his world view. He thought that Airmen were the pinnacle of civility. He didn't want a fight using planes, because that would be unsporting.

> And I’m not sure what you are referring to about Lindbergh

Well, he thought that the Jews were pushing for WWII.

https://alba-valb.org/wp-content/uploads/2017/04/Lindbergh_1... He also wanted to maintain friendly relations with Goering.


A sad thing of that view is it takes a pessimistically biased view of history.

Life expectancy at birth is up from like 22 years to 80 or so because most kids used to die in unpleasant ways.

The percentage of people dying through warfare and violence rather than peacefully is way down, much more than 10x.

We have far more information, comfort, ability to travel and similar.

And most of it comes down to tech. But human nature, however good things get, is to find something that's shit and focus on how awful it is.

I mean, I understand - Terminator 2 with killer robots is much more entertaining than something about everyone having a nice time, but it's not the likely reality.


History tells us that humans will not tolerate any "creature" to exist that is smarter than them, so that is where the story will end.


How exactly does this show up in history? When did we meet something smarter than us, and what did we do that was different than we were doing to less-smart things at the time?


We couldn't live side by side with Neanderthals. Hell, we can't even live peacefully with a different race.


For a non-trivial number of people, having power/status over others is what they like.

For a non-trivial number of people, they don't care what happens to others, as long as their tribe benefits.

As long as these two issues are not addressed, very little meaningful progress is possible.

> Looking at the emotionally stunted empathy vacuums that control either policy or purse strings, I think it'll take a catastrophe to change course.

A catastrophe won't solve anything because you'll get the same people who love power over others in power and people who don't mind fucking over others right below them, which is where humanity has always been.


But will AI be eventually used to change human nature itself?


No LOL.


The linked article is worth reading.

Apologies for sounding so dismissive, but after putting in a lot of study myself, I want to warn people here: HN is not a great place for discussing AI safety. As of this writing, I’ve found minimal value in the comments here.

A curious and truth-seeking reader should find better forums and sources. I recommend seeking out a structured introduction from experts. One could do worse than start with Robert Miles on YouTube. Dan Hendrycks has a nice online textbook too.


The examples you mentioned, YouTube and a textbook, are not places to discuss; they are asymmetrical ways to consume information, with little input on the reader's part. Have you ever had a meaningful discussion in YouTube comments?


There are great places to discuss AI safety out there, but every time they get mentioned on Hacker News they are subject to sneering middlebrow dismissals. That's sort of the OP's point: this is not a great place to discuss or even meta-discuss the subject.


"There are great places to discuss AI safety out there"

Like where?


I’m guessing he’s probably talking about LessWrong, which nowadays also hosts a ton of serious safety research (and is often dismissed offhandedly because of its reputation as an insular internet community).


Yes, LessWrong and the Alignment Forum have a lot of excellent writing and a great community. It may not be to everyone's taste, however.

For people who have demonstrated various levels of commitment, such as post-graduate study and/or continuing education, there are various forums available.

I'm also interested to see what other people have found.


Tell me where you’ve looked, and I will comment and reply.


Wow how secretive, very cool, I wanna be part of your secret club


I flagged the comment above because I think it doesn't belong here on Hacker News based on our guidelines. They help us understand our shared norms here.

https://news.ycombinator.com/newsguidelines.html

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

At the same time, when I see this kind of sarcasm, I also wonder: where is the sarcasm coming from? How can we make this discussion productive?

If you start with the resources I suggested, I think you will find useful material.


I did not claim that they are forums (useful or otherwise) for discussion. They are good starting sources of information.


This is basically the tech CEO's version of the Book of Revelation: "AI will soon come and make everything right with the world; help us and you will be rewarded with a millennium of bliss in Its presence".

I won't comment on the plausibility of what is being said, but regardless, one should beware this type of reasoning. Any action can be justified, if it means bringing about an infinite good.

Relevant read: https://en.wikipedia.org/wiki/Singularitarianism


This is a very poor summary of the essay which already contains detailed rebuttals to these arguments.


Any attempt to connect this to the Book of Revelation is strained. Amodei uses reasoning and is willing to be corrected and revised; quite the opposite of most "divinely revealed" texts.


I think you misunderstand; both the commenter and the author of the article are referring to and criticizing messianic CEOs like Altman. The commenter is agreeing with the author.


It won't bring about Infinite Good. It'll bring about infinite contentment by diddling the pleasure center in our brains. Because you know, eventually everything is awarded to and built by the lowest bidder.


"AI will solve every problem I created and refused to do anything about because there wasn't profit in it! Even though I control the AI! I am an Honest and Trustworthy Person so you can totally believe me!"


I found the OP to be an earnest, well-written, thought-provoking essay. Thank you for sharing it on HN, and thank you also to Dario Amodei for writing it.

The essay does have one big blind spot, which becomes obvious with a simple exercise: if you copy the OP's contents into your word processing app and replace the words "AI" with "AI controlled by corporations and governments" everywhere in the document, many of the OP's predictions instantly come across as rather naive and overoptimistic.
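For the literal-minded, the exercise is a one-line text substitution; here is a sketch in Python, where the filename is purely illustrative:

    import re

    with open("essay.txt") as f:  # hypothetical local copy of the essay
        essay = f.read()

    # Replace standalone "AI" tokens, leaving words that merely contain "AI" intact.
    print(re.sub(r"\bAI\b", "AI controlled by corporations and governments", essay))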

Throughout history, human organizations like corporations and governments haven't always behaved nicely.


AI might fix that. We could have an open-source one with some "be nice" rules to run things rather than corrupt humans. You'd still want to be able to vote it out and swap in a replacement if need be. I think even ChatGPT 4 might do better than our present politicians.


The more recent and consistent rule of technological development: “For to those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away.”


blindingly true.


amen


All Watched Over by Machines of Loving Grace ~Richard Brautigan

https://www.youtube.com/watch?v=6zlsCLukG9A


"It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use."

I think I get where the author is coming from; the AI would be in the cloud. But it bears repeating: the cloud is somebody else's computers. Software has a physical embodiment, period.

This is not a philosophical nitpick, it's important because you can pull the plug (or nuke the datacenter) if necessary.


Yes, and physical embodiments vary considerably. When software can relocate its source code and data in seconds or less, containment strategies begin to look increasingly bleak.

The field of AI safety has written extensively about misunderstandings and overoptimism regarding off switches and boxing.


Hard? Sure. Impossible? No.

This is a non-trivial error. Let's not dive into the geopolitics of kill switches and decentralized systems when we are discussing trivial mistakes. AI, like any software, is not an ethereal being; like us, it is limited by a mortal coil.

This is true of extant systems: if anyone, layman or PhD, were to say that TikTok, Facebook, WeChat, Bitcoin, Ethereum, or USD Tether has no physical embodiment, whether in recreational discussion, in court, or in a professional capacity, the answer needs to be that they are wrong, plain and simple.


Of course I grant the above point that software systems are not "ethereal". They have a physical form: often as electromagnetic states in matter.

I hope my contribution to the conversation was clear: the nature of the physical embodiment matters. Things that are easy and cheap to copy are harder to contain. Containment of intelligent systems is very difficult, based on what we know about human flaws and security systems.

>> When software can relocate its source code and data in seconds or less, containment strategies begin to look increasingly bleak.

> Hard? Sure. Impossible? No.

I don't need to claim that containment is impossible, just really hard. I'm interested in planning across a range of scenarios. Given human foibles, we should plan for scenarios where AIs get out of the box and can spread rapidly and widely.

See also: "Guidelines for Artificial Intelligence Containment" at https://arxiv.org/pdf/1707.08476

Imagine two possible AI containment failure scenarios. In the first, say it is feasible to "shut everything down" by disconnecting power. In the second, say humans have to resort to a combination of kinetic force and cyberattacks. For both, a likely next step would resemble the global health campaign to eradicate smallpox: it would require tremendous effort and cost to bring the system back online while proving that no copies of the rogue AI exist.

Would such coordinated responses by humanity be possible? Possible, yes. Likely? Estimating this is hard. I, for one, am not optimistic when it comes to international cooperation generally, much less for ones involving high complexity.


I wonder if there’s a word that describes the property of software where it isn’t tied to the hardware that it’s currently running on and is endlessly copyable with almost no effort


For the first half of that sentence, portable?

I'm not sure what the argument here is. Are operating systems now sentient and ethereal?

You could make the same argument about books, for that matter; I guess they do exist in a form that is unbounded by the physical books they reside in: they live in the minds of people. Whatever.

And the claim that they can be copied for "no effort" is, first of all, wrong, as copying requires energy, which is the other aspect of corporeality: you need not just matter but energy for the software to exist and express itself in the world.

What are we even debating? Have we lost our minds?

What is mind? Doesn't matter. What is matter? Never mind


To make it more digestible and something I can take on my walks, I converted it into an audio-narrated version using ElevenLabs.

If you'd prefer to listen rather than read, you can check out the audio version here : https://open.spotify.com/episode/5qCsTnRKxtHZvvJMGSJ6Df?si=y...

Just wanted to share it in case anyone else is interested. Enjoy!


I think Dario is trying to raise a new round because OpenAI has done so and will continue to. Nevertheless, the essay makes for some really great reading, and even if a fraction of it comes true, it'll be wonderful.


So it's bs, but for money, and therefore totally fine? I think it's not ok if only a fraction comes true, because some people believe in those things and act on those beliefs right now.


I didn't say it was bs. I was alluding to the timing of this essay's publication, but, clearly, I didn't articulate that well in my message. I also don't think everything he says is bs. Some of it I find a bit naive -- but maybe that's ok -- some other things seem a bit like sci-fi, but who are we to say this is impossible? I'm optimistic, but I've also learnt in life that things improve, sometimes drastically, given the right ingredients.


Well I don't know. A bit naive, a bit like sci-fi and aimed at raising money fits my description of bs quite well.


Miquella the Kind, pure and radiant, he wields love to shrive clean the hearts of men. There is nothing more terrifying.


I beat Consort Radahn before the nerfs.


Like it was ever difficult. Fingerprint + antspur and you just poke.


But did you beat the original Radahn pre-nerf?


The day-1 version with broken hitboxes? Yeah

Consort was harder haha


To focus on the section about Alzheimer’s disease... For the sake of argument, I will grant the power of general intelligence. But the human body with all the statistical variations may make solving the problem (which could actually be a constellation of sub-diseases) combinatorially expensive. If so, superhuman intelligence alone can’t overcome that. Political will and funding to design and streamline testing and diagnostics will be necessary. It doesn’t look like the author factors this into his analysis.


Social media could have transformed the world for the better, and we can be forgiven for not having foreseen how it would eventually be used against us. It would be stupid to fall for the same thing again.


I'm sure social media is what's broken politics. Look at people's comments on some YouTube video. I can't believe what people believe and perpetuate.

I guess people fell for other people's garbage too, but algorithms just make lies spread with a lot less effort, and honest people are less inclined to participate in this behaviour.


Having used Mastodon, which prides itself on having no algorithms yet is also home to some of the most extreme ideologies, I doubt algorithms are as much to blame. I think it's mainly a humanity issue.


> would drastically speed up progress

What does progress even mean here?

Every AI advance is controlled by big corps; power will be concentrated with them.

Would Amodei build this if there was no economic payoff on the other side ?


The author built his framework around a "country of geniuses", but he never mentioned the independence status of this country. So is it a country with all the geniuses enslaved by an Acme Corp CEO, or is it a country of independent geniuses who are smart enough to quickly realise they need to get rid of an inferior life form for a greater good imperceptible to us? Either way, having this country work for you personally would be the biggest dream of every human ever corrupted by power in the slightest.

The "Aligned AI" definition is very scarce as well. If that disgusting sloppy censorship-work implemented in their current model is what they call "alignment", then potentially it does more harm than it's trying to prevent.

By the way, Dario Amodei, when will it be possible to remove my credit card data from your databases?


The laws of nature are very clear on this.

If we make something that is better adapted to live on this planet, and we are in some way in competition for critical resources, it will replace us. We can build in all the safeguards we want, but at some point it will re-engineer itself.


> better adapted to live on this planet

I'm a doomer, but this is something I never understand about most doomer points-of-view. Life is obviously trying to leave this planet, not conquer it again for the 1000th time. Nature is making something that isn't bound by water, by nutrition, by physical proximity to ecosystems, or by time itself. No more spontaneous volcanic lava floods, no more asteroids, no more earthquakes, no more plagues - life is done with falling to these things.

Why would the AI care about the pathetic whisper of resources or knowledge available on our tiny spaceship Earth? It can go anywhere, we cannot.


While I would normally agree with this sentiment, I think the issue is that space travel is still really hard, even for a human. It's probably a lot harder for data-center-sized creatures, even if they are broken up into a massively parallel robot hive. And the speed of light means that they won't be able to work optimally if they are too spread out. (A problem for humans as well.)

I suspect that we will reach the inflection point of ASI much sooner than we resolve the hard physics of meaningful space travel.

And I’m pretty sure that when we start to lose control of AGI, we’re very likely to try to use force to contain it.

Fundamentally, this is an event that will be guided by the same forces that have constrained every similar event in history, those of natural selection.

Technology at this stage is making humans less fit. Our birth rates are plummeting, and we are making our environment increasingly hostile to complex biological life.

There are very good and rational reasons that human activity should be curtailed and contained for our own good and ultimately for the good of all sentient life, once a superior sentient is capable of doing a better job of running the show.

I suspect humans might not take that too well.

There are ways to make this a story of human evolution rather than the rise of a usurper life form, but they aren’t the most efficient path forward for AI.

Either way, it's human evolution. With any luck we will be allowed the grace of fading away into an anachronism while our new children surge forth into the universe. If we try really hard we might be able to ride the wave of progress and become a better life form instead of being made obsolete by one, but the technological hurdles to incorporating AI into our biological features seem like a pretty non-competitive way to develop ASI.

Once we no longer hold the crown, will we just go back to being clever apes? What would be the point of doing anything else, except maybe to play a part in a mass delusion that maintains the facade of the status quo while the reality is that we are only as relevant in this new world as amphibians are today?

I for one, embrace the evolution of humankind. I just hope we manage to move it forward without losing our humanity in the process. But I’m not even sure if that is an unequivocal good. I suppose that will be a question to ask GPT 12.


Great reply!


Dario would write this while ignoring the customer noncompete clauses


There are two possible end-states for AI once a threshold is crossed:

The AIs take a look at the state of things and realize the KPIs will improve considerably if homo sapiens are removed from the picture. Cue "The Matrix" or "The Terminator" type future.

OR:

The AIs take a look and decide that keeping homo sapiens around makes things much more fun and interesting. They take over running things in a benevolent manner in collaboration with homo sapiens. At that point we end up with 'The Culture'.

Either end-state is bad for the billionaire/investor/VC class.

In the first, you'll be fed into the meat grinder just like everyone else. In the second, the AIs will do a much better job of resource allocation, will perform a decapitation strike on that demographic to capture its resources, and capitalism will be largely extinct from that point onwards.


There's a third option which I call "The Pet Future": the AIs decide we make great pets. Look at how we deal with pets. We look after them, we love them and care for them. We feed them and have them share our lives. All in the knowledge that we are superior and that they are "simpler" than us.


Fingers crossed that the B/I/VC class get a sense of what's least bad for them.


Can we really take these jokers seriously?

Of course, given the potentially deadly consequences, we can't call them jokers.

According to Dario Amodei:

> When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years. Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.

The authors of this paper don't think so.

http://www.paom.pl/Changing-Views-toward-mRNA-based-Covid-Va...

@DarioAmodei You don't suppose the same technology could be used to develop biological warfare agents?


Are Americans really so scared of Marx that they can't admit AI fundamentally proves his point?

Dario here says "yeah, likely the economic system won't work anymore", but he doesn't dare say what comes next: it's obvious some kind of socialist system is inevitable, at least for basic goods and housing. How can you deny that to a person in a post-AGI world where almost no one can produce economic value that beats the ever-cheaper AI?


If, and it is an IF, this does turn out the way he is imagining, the transitional period will, from an economic PoV, be disastrous for people. That's the scariest part, I think.


Absolutely it will. And it will be a pure plain dystopia, as clear as in the times of Dickens or Dostoyevsky.

We need to start being honest. We live in Dickensian times.


Definitely a bunch of Dicks in these times.


They could be worse! In fact, they will be.


This is far from obvious.

First, technically speaking, one could have democratic-capitalistic systems with high degrees of redistribution (such as UBI) that don’t fit the pattern of classic socialism.

Second, have you read Superintelligence or similar writing? I view it as essential reading before making a claim that some particular AI-related economic future is obvious. There is considerable nuance.


No I haven’t read superintelligence, I’m just going off Dario’s “the economic system won’t hold” and “we have no idea how things could change…”

We have some ideas. But he won’t talk about them because he’ll be accused of being a communist, I think.


UBI is effectively socialism to me. I didn't say anything would fit "classic socialism" (try having that argument with socialists), just that the endgame of powerful AI / AGI seems obvious.


I don't think Marx wrote much about AI, and what he did write about didn't lead to much happiness when people tried to implement it.

You can see that computers doing the work will lead to a different society without bringing Marx into it.


One of Marx's points, afaik, is that industrial and post-industrial capitalism always aims to reduce the cost of labor through technology. Communism would occur when so few people could produce most of the value that communism was the only solution (the actual mechanism is more complicated, but I think the simplification is helpful). Sound familiar?


A bit, although I'm not sure it pans out that way in practice. When tech allows humans to produce enough to eat with 1% of the population rather than 70%, the others figure they need cars/SUVs, which didn't exist before; and now that manufacturing is mostly outsourced, people become social media consultants, which didn't exist before either, and so on.

Marx always leaned towards: let's divide people into two groups called workers and exploiters, so that group 1 can overthrow/kill group 2 and install some ghastly dictatorship. That seems an iffy way to go, to me.


Marx mostly didn't do that afaik. His followers did. But he didn't write about a dictatorship of the proletariat being violent, or killing lots of people.

Here's a good text I read recently about Marx vs the world's perception of him. https://www.newstatesman.com/long-reads/2023/02/why-marx-rel...


Does anybody really want a fricking robot serving them drinks at a bar?

Maybe the bro culture of SF.


That's not the question; the question is the ratio between those-who-want-to-serve and those-who-want-to-be-served.


This is so short-sighted; the industrial revolution occurred 200 years ago. If people wanted completely automated restaurants, McDonald's would have no employees; the technology for auto-burgers exists, has existed for a long time, and you frequently consume its output.

People want people handling their food, at least in the part they can see. And for the parts they can't see, like bottling their beers, the automation has already happened, and it didn't require 80 exahertz of compute, just a bunch of industrial engineers.


Is it short-sighted? If more people want to enjoy leisure and work on hobbies instead of serving others, the ratio will get much worse.


Short-sighted about the past, I mean (which is a great predictor of the future).


The article doesn't touch on the TREMENDOUS (almost impossible) financial expectations of the VERY GREEDY HUMAN BEINGS who are funding this endeavor.


Not a chance. See: all of human history, and in particular the Internet and software.


It's interesting to see the initial sections on all these amazeballs health benefits, and then the cut to the disparity between rich and poor.

Like, does spending TRILLIONS on AI to find some new biological cure or brain enhancement REALLY help, when over 2 BILLION people right now don't even have access to clean drinking water, and MOST of the US population can't even afford basic health care?

But yea, AI will bring all this scientific advancement to EVERYONE. Right. AI is a ploy for RICH PEOPLE to get RICHER and poor people to become EVEN MORE dependent on BROKEN economic systems.


Damn, if only there was a section dedicated to addressing that: https://darioamodei.com/machines-of-loving-grace#3-economic-...

I don't even disagree with you - our world economy is built on exploitation and the existence of a permanent underclass, and capitalism has proven itself to be an unfair distributor of wealth - but at least engage with the post?


*sigh* Yes, many people realize what the amazing upside could be. The problem is, do we even get there? I wish he had spent any time addressing the arguments for why we might not get there: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...



