
This being ycombinator, which as such ostensibly has one or two (if not more) VCs as readers/commentators … can someone please tell me how these companies being invested in across the AI space are going to make returns on the money invested? What’s the business plan? (I’m not rich enough to be in these meetings.) I just don’t see how the returns will happen.

Open source LLMs exist and will get better. Is it just that all these companies will vie for a winner-take-all situation where the “best” model will garner the subscription? Doesn’t OpenAI make some substantial part of the revenue for all the AI space? I just don’t see it. But I don’t have VC levels of cash to bet on a 10x or 100x return so what do I know?




VCs at the big/mega funds make most of their money from fees; they don't actually care as much about the potential portfolio investment exits 10-15 years from now. What they care MOST about is the ability to raise another fund in 2-3 years, so they can milk more fees from LPs. i.e. 2% fee PER YEAR on a 5bn fund is a lot of guaranteed risk-free money.
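
Back-of-the-envelope on that fee math (a sketch; the ~10-year fund life is an assumed typical figure, not something stated above):

    # Rough sketch of management-fee economics for a mega fund.
    # Assumes a 2%/year fee (as above) and a typical ~10-year fund life.
    fund_size = 5_000_000_000      # $5bn fund
    annual_fee_rate = 0.02         # 2% per year
    fund_life_years = 10           # assumed typical fund life

    annual_fees = fund_size * annual_fee_rate        # $100M per year
    lifetime_fees = annual_fees * fund_life_years    # ~$1bn over the fund's life
    print(f"${annual_fees:,.0f}/year, ~${lifetime_fees:,.0f} over {fund_life_years} years")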

Being able to achieve that depends entirely on two things:

1) deploying capital in the current fund on 'sexy' ideas so they can tell LPs they are doing their job

2) paper markups, which they will get, since Ilya will most definitely be able to raise another round or two at a higher valuation, even if it eventually goes bust or gets sold at cost.

With 1) and 2), they can go back to their existing fund LPs and raise more money for their next fund and milk more fees. Getting exits and carry is just the cherry on top for these megafund VCs.


Are you a VC? If they really didn't care about their investment exits, that would be crazy.


It’s not that they don’t care; of course they want to find winners. It’s just that A) there is so much capital to allocate that they have to allocate some of it to marginal ideas, B) their priority is raising their next fund, which means focusing on vanity metrics like IRR and paper markups, and C) the incentive structure in VC pushes them to invest based on motivated reasoning. Remember, VC returns are cyclical, many vintages underperform the public markets, and particularly large funds do worse simply because they have too much capital to allocate and too few great ideas.


I think they treat success and failure as mostly luck, with a tiny bit of hygiene

The trick is being in the game long enough


And you’d be wrong of course, like any other random guess you could make about a topic you know nothing about.


> about is the ability to raise another fund in 2-3 years, so they can milk more fees from LPs. i.e. 2% fee PER YEAR on a 5bn fund is a lot of guaranteed risk-free money.

You will struggle to raise funds if the companies you bet on perform poorly; the worse your track record, the lower your chances of raising money and earning income from it.


Track record is based mostly on IRR. See my other comment below on the LPs regarding the incentive structure and what they care about. This particular bet is almost a guaranteed markup, as Ilya will surely/likely raise another round. It’s also not a terrible bet to invest in a proven expert/founder. By the time these companies exit (if they ever) 15 years from now, the mega fund VC partner will probably be retired from all the cumulative fees, just playing golf and taking occasional board meetings. Cash-on-cash returns are very different from playing the IRR game. Of course they want to find real winners as well, but the reality is there aren’t that many, and they have so much money to allocate that they will have to bet on marginal things that can at least show some paper gains.


> 15 years from now, the mega fund VC partner will probably be retired

So all the successful VC partners from 2010 are close to retirement or have retired?

Why say something testable if it is obviously wrong?


I’m not a VC, but I doubt that’s true. You can raise exactly one fund if you operate like that, but not two or three.


You could actually raise and deploy two or three funds that way before you see returns from the first.


Given how many folks blow up and go back into the business... the best way to get a fund to run is to have previous experience running a fund.


This couldn't be less true, for what it's worth. VCs from the largest funds are in it ~entirely for the DPI (distributions to paid-in capital; actual investment returns). Not only is this far, far more profitable than management fees (which are mostly spent on operations); DPI is also the only way to guarantee you can raise the next fund.


So the question I have is: who are these LPs, and why are they demanding funds go into "sexy" ideas?

I mean, it probably depends on the LP and what their vision is. Not all apples are red; they come in many varieties, some for cider, others for pies. Am I wrong?


The person you're responding to has a very sharp view of the profession. IMO it's more nuanced, but not very complicated. In capitalism, capital flows; that's how it works, and capital should be deployed. Large pools of capital are typically put to work (this in itself is nuanced). The "put to work" is various types of deployment of the capital. The simplest way to look at this is risk. Let's take pension funds, because we know they invest in VC firms as LPs. Here* you can find an example of the breakdown of the investments made by one very large pension fund. You'll note most of it is very boring, and the positions held related to venture are tiny; they would need a crazy outsized swing from a VC firm to move any needles. Given all that, it traditionally* has made no sense to bet "down there" (early stage), mostly because the expertise isn't there and they don't have the time to learn tech/product. Fees are the cost of capital deployment at the early stages, and from what I've been told talking to folks who work at pension funds, they're happy to see VCs take swings.

but... it really depends heavily on the LP base of the firm and what the firm raised its fund on; it's incredibly difficult to generalize. The funds I'm involved with as an LP... in my opinion they can get as "sexy" as they like, because I buy their thesis; then it's just: get the capital deployed!!!!

Most of this is all a standard deviation game, not much more than that.

https://www.otpp.com/en-ca/investments/our-advantage/our-per... https://www.hellokoru.com/


I can't understand one thing: why are pension funds so fond of risky capital investments? What's the problem with allocating that money into shares of a bunch of old, stable companies and getting a small but steady income? I can understand if a few people with lots of disposable money are looking for some suspense and thrills, using venture capital like others use a casino. But what's the point for pension funds, which face significant problems if they lose the managed money in a risky venture?


A better way to look at it is: if pension funds are not fond of risky investments, then what am I seeing?


These LPs at mega funds are typically partners/associates at pension funds or endowments that can write the 8-9 figure checks. They are not super sophisticated, and they typically do not stay at their jobs long enough to see the cash-on-cash returns 15 years later. Nor are they incentivized to care. These guys are salaried employees with MBAs who get annual bonuses based on IRR (paper gains). Hence the priority is generating IRR, which in this case is very likely, as Ilya will raise a few more rounds. Of course, LPs are getting smarter and are increasingly making more demands. But there is just so much capital for these mega funds to allocate that it's inevitable some ideas are half-baked.


I didn't know what an LP is, having lived life gloriously isolated from the VC gospel...

an LP is a "limited partner." they're the suckers (or institutional investors, endowments, pensions, rich folks, etc.) that give their cash to venture capital (VC) firms to manage. LPs invest in VC funds but don't have control over how the money gets used—hence *limited* partner. they just hope the VCs aren't burning it on overpriced kombucha and shitty "web3" startups.

meanwhile, the VCs rake in their fat management fees (like the 2% mentioned) and also get a cut of any profits (carry). VCs are more concerned with looking busy and keeping those sweet fees rolling in than actually giving a fuck about long-term exits.

Someone wants to fund my snide, cynical AI HN comment explainer startup? We are too cool for long term plans, but we use AI.


I'm not a VC so maybe you don't care what I think, I'm not sure.

Last night, as my 8yo was listening to children's audiobooks going to sleep, she asked me to have it alternate book A then B then A then B.

I thought, idunno maybe I can work out a way to do this. Maybe the app has playlists and maaaaaaaaaaybe has a way to set a playlist on repeat. Or maybe you just can't do this in the app at all. I just sat there and switched it until she fell asleep, it wasn't gonna be more than 2 or 3 anyway, and so it's kind of a dumb example.

But here's the point: Computers can process language now. I can totally imagine her telling my phone to do that and it being able to do so, even if she's the first person ever to want it to do that. I think the bet is that a very large percentage of the world's software is going to want to gain natural language superpowers. And that this is not a trivial undertaking that will be achieved by a few open source LLMs. It will be a lot of work for a lot of people to make this happen, and as such a lot of money will be made along the way.

Specifically how will this unfold? Nobody knows, but I think they wanna be deep in the game when it does.


How is this any different than the (lack of) business model of all the voice assistants?

How good does it have to be, how many features does it have to have, how accurate does it need to be... in order for people to pay anything? And how much are people actually willing to spend against the $XX billion of investment?

Again it just seems like "sell to AAPL/GOOG/MSFT and let them figure it out".


> How is this any different than the (lack of) business model of all the voice assistants?

Voice assistants do a small subset of the things you can already do easily on your phone. Competing with things you can already do easily on your phone is very hard; touch interfaces are extremely accessible, in many ways more accessible than voice. Current voice assistants only being able to do a small subset of that makes them not really very valuable.

And we aren't updating and rewriting all the world's software to expose its functionality to voice assistants, because the voice assistant needs to be programmed to do each of those things. Each possible interaction must be planned and implemented individually.

I think the bet is that we WILL be doing substantially that, updating and rewriting all the software, now that we can make them do things that are NOT easy to do with a phone or with a computer. And we can do so without designing every individual interaction; we can expose the building blocks and common interactions and LLMs may be able to map much more specific user desires onto those.


I wonder if we'll end up having intelligent agents interacting with mobile apps / web pages in headless displays because that's easier than exposing an API for every app


OK, so the bull case on LLMs now is inclusive of "substantially rewriting all the world's software to expose functionality to them via APIs"?


"The AI will do the coding for that!"

or

"Imagine an AI that can just use the regular human-optimized UI!"

These are things VCs will say in order to pump the current gen AI. Note that current gen AI kinda sucks at those things.


They've already started moving back from Miami since the last fad pump (crypto) imploded. Don't ruin this for them.


> How is this any different than the (lack of) business model of all the voice assistants?

Feels very different to me. The dominant ones are run by Google, Apple, and Amazon, and the voice assistants are mostly add-on features that don't by themselves generate much (if any) revenue (well, aside from the news that Amazon wants to start charging for a more advanced Alexa). The business model there is more like "we need this to drive people to our other products where they will spend money; if we don't others will do it for their products and we'll fall behind".

Sure, these companies are also working on AI, but there are also a bunch of others (OpenAI, Anthropic, SSI, xAI, etc.) that are banking on AI as their actual flagship product that people and businesses will pay them to use.

Meanwhile we have "indie" voice assistants like Mycroft that fail to find a sustainable business model and/or fail to gain traction and end up shutting down, at least as a business.

I'm not sure where this is going, though. Sure, some of these AI companies will get snapped up by bigger corps. I really hope, though, that there's room for sustainable, independent businesses. I don't want Google or Apple or Amazon or Microsoft to "own" AI.


Hard to see normies signing up for monthly subs to VC-funded AI startups when a surprisingly large % are still resistant to paying AAPL/GOOG for email/storage/etc. Getting a $10/mo uplift for AI functionality to your iCloud/GSuite/Office365/Prime is a hard enough sell as it stands.

And again, against CapEx of something like $200B, $100/year per user practically rounds to zero.

Not to mention the ongoing OpEx to actually run the inference/services on top.
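
To make the "rounding to zero" concrete, a quick sketch; the 100M paying-subscriber figure is a made-up assumption for illustration:

    # Back-of-the-envelope: $100/year per user against ~$200B of CapEx.
    capex = 200e9                # ~$200B industry CapEx (figure from the comment above)
    revenue_per_user = 100       # $100/year AI uplift per paying user
    paying_users = 100e6         # assumed 100M paying subscribers

    annual_revenue = paying_users * revenue_per_user   # $10B/year
    years_to_recoup = capex / annual_revenue            # 20 years, ignoring OpEx entirely
    print(f"${annual_revenue:,.0f}/yr -> {years_to_recoup:.0f} years just to recoup CapEx")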


You'd be very surprised at how much they're raking in from the small sliver of people who do pay. It only seems small because of how much more they make from other things. If you have a billion users, a tiny percentage of paying users is still a gazillion dollars. Getting to a billion users is the hard part. They're betting they'll figure out how to monetize all those eyeballs when they get there.
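
A rough sketch of why a "small sliver" still adds up; the conversion rate and price here are illustrative assumptions, not real figures:

    # Illustrative only: tiny paid conversion on a huge user base.
    users = 1_000_000_000      # a billion users
    paid_conversion = 0.02     # assume 2% ever pay
    price_per_month = 10       # assume a $10/month subscription

    annual_revenue = users * paid_conversion * price_per_month * 12
    print(f"${annual_revenue:,.0f}/year from just {paid_conversion:.0%} of users")
    # -> $2,400,000,000/year from the "tiny" sliver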


The voice assistants are too basic. As folks have said before, nobody trusts Alexa to place orders. But if Alexa was as competent as an intelligent & capable human secretary, you would never interact with Amazon.com again.


Would you not, though? Don't the large majority of people, and dare I say probably literally everyone who buys something off Amazon, first check the actual listing before buying anything?

I wouldn't trust any kind of AI bot, regardless of intelligence or usefulness, to buy toilet paper blindly, let alone something like a hard drive or whatever.


Congrats on 10k karma :)

One could ask: how is this different from automated call centers? (e.g. “for checking accounts, push 1…”) Well, people hate those things. If one could create an automated call center that people didn’t hate, it might replace a lot of people.


Now, call centers: not sexy, but the first rational, achievable use case for LLMs I've seen mentioned in an HN response in a while!

The global call center market is apparently $165B/year in revenue, and let's be honest, even the human call center agents aren't great. So the market is big and the bar is low!

However, we are clearly still quite far from LLMs being able to a) know what they don't know / not hallucinate, b) be trained as quickly/cheaply (and, frankly, as poorly) as you could train a human agent, and c) actually be as concise and helpful as an average human.

Also it is obviously already being tried, given the frequent Twitter posts with screenshots of people jailbreaking their car dealership chat bot to give coding tips, etc.


Talking to an electronic assistant is so antiquated. It feels unnatural to formulate inner thoughts into verbal commands.

A ubiquitous phone has enough sensors/resources to be fully situationally aware and to preempt/predict any action for each holder a long time ahead.

It can measure the pulse, body postures and movements, gestures, and breath patterns; calculate mood; listen to the surrounding sounds; recall all information ever discussed; have 360-degree visual information (via a swarm of fully autonomous flying micro-drones); and be in a network with all relevant parties (family members, friends, coworkers, community) and know everything they (the peers) know.

From all the gathered information, the electronic personal assistant can predict all your next steps with high confidence. Humans think that they are unique, special and unpredictable, but the opposite is the case. An assistant can know more about you than you think you know about yourself.

So your 8yo daughter does not need to say how to alternate the audiobooks; the computer can sense her mood and just do what is appropriate, without her needing to issue a verbal command.

Also, in the morning you do not need to ask her how she slept last night and listen to her subjective judgement.

The personal assistant will sense that you are probably interested in your daughter's sleep and give you an exact, objective medical analysis of how well she slept last night, without you needing to ask your daughter's personal assistant.

I love it, it is a bottomless goldmine for data analysis!


> The personal assistant will sense that you are probably interested in your daughter's sleep and give you an exact, objective medical analysis of how well she slept last night, without you needing to ask your daughter's personal assistant.

Next step: the assistant knows that your brain didn't react much to its sleep report the last 5 mornings, so it will stop bothering you altogether. And maybe chitchat with your daughter's assistant to let her know that her father has no interest in her health. Cool, no? (I bet there is already some science fiction on this topic?)


> Computers can process language now. I can totally imagine her telling my phone to do that

Impressed by this bot recently shared on news.yc [0]: https://storytelling-chatbot.fly.dev/

> Specifically how will this unfold? Nobody knows

I think speech will be a big part of this. Young ones (<5yo) I know almost exclusively prefer voice controls where available. Some have already picked up a few prompting tricks ("step by step" is emerging as the go-to) on their own.

[0] https://news.ycombinator.com/item?id=40345696


Robots will be better than humans at parenting in no time


I dunno. A small piece of $1B sounds a little shallow.


In general VC is about investing in a large number of companies that mostly fail, and trying to weight the portfolio to catch the few black swans that generate insane returns. Any individual investment is likely to fail, but you want to have a thesis for 1) why it could theoretically be a black swan, and 2) strong belief in the team to execute. Here's a thesis for both of these for SSI:

1. The black swan: if AGI is achievable imminently, the first company to build it could have a very strong first mover advantage due to the runaway effect of AI that is able to self-improve. If SSI achieves intelligence greater than human-level, it will be faster (and most likely dramatically cheaper) for SSI to self-improve than anyone external can achieve, including open-source. Even if open-source catches up to where SSI started, SSI will have dramatically improved beyond that, and will continue to dramatically improve even faster due to it being more intelligent.

2. The team: basically, Ilya Sutskever was one of the main initial brains behind OpenAI from a research perspective, and in general has contributed immensely to AI research. Betting on him is pretty easy.

I'm not surprised Ilya managed to raise a billion dollars for this. Yes, I think it will most likely fail: the focus on safety will probably slow it down relative to open source, and this is a crowded space as it is. If open source gets to AGI first, or if it drains the market of funding for research labs (at least, research labs disconnected from bigtech companies) by commoditizing inference — and thus gets to AGI first by dint of starving its competitors of oxygen — the runaway effects will favor open-source, not SSI. Or if AGI simply isn't achievable in our lifetimes, SSI will die by failing to produce anything marketable.

But VC isn't about betting on likely outcomes, because no black swans are likely. It's about black swan farming, which means trying to figure out which things could be black swans, and betting on strong teams working on those.


On the other hand, it may be that "Alignment likely generalizes further than capabilities." - https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes...


That may be true, but even if it is, that doesn't mean human-level capability is unachievable: only that alignment is easier.

If you could get expert-human-level capability with, say, 64xH100s for inference on a single model (for comparison, llama-3.1-405b can be run on 8xH100s with minimal quality degradation at FP8), even at a mere 5 tok/s you'd be able to spin up new research and engineering teams for <$2MM that can perform useful work 24/7, unlike human teams. You are limited only by your capital — and if you achieve AGI, raising capital will be easy. By the time anyone catches up to your AGI starting point, you're even further ahead because you've had a smarter, cheaper workforce that's been iteratively increasing its own intelligence the entire time: you win.
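
A rough sketch of that arithmetic; the per-GPU price is my assumption (H100s have been quoted around $25-30k), and 5 tok/s is the deliberately pessimistic rate from above:

    # Cost/throughput sketch for the "research team in a rack" argument.
    gpus = 64
    price_per_gpu = 30_000          # assumed hardware cost per H100
    tokens_per_second = 5           # pessimistic output rate for the hypothetical model

    capital_cost = gpus * price_per_gpu                 # ~$1.9M, under the <$2MM claim
    tokens_per_day = tokens_per_second * 60 * 60 * 24   # 432,000 tokens/day, 24/7
    print(f"~${capital_cost:,.0f} up front, {tokens_per_day:,} tokens/day of output")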

That being said, it might not be achievable! SSI only wins if:

1. It's achievable, and

2. They get there first.

(Well, and the theoretical cap on intelligence has to be significantly higher than human intelligence — if you can get a little past Einstein, but no further, the iterative self-improvement will quickly stop working, open-source will get there too, and it'll eat your profit margins. But I suspect the cap on intelligence is pretty high.)


Another take is defining AGI from an economic perspective. If AI can do a job that would normally be paid a salary, then it could be paid similarly, or at a lower price that is still substantial.

OpenAI priced its flagship chatbot ChatGPT on the low end for early product adoption. Let's see what jobs get replaced this year :)


How will we know when we have achieved AGI with intelligence greater than human-level?


It will propose a test for general intelligence that actually satisfies most people and doesn't cause further goalpost-shifting.


It will be able to answer this question to your satisfaction.


Sounds like a job for safety researchers


These VCs are already lining up the exit as they are investing. They all sit on the boards of major corps and grease the acquisitions all the way through. The hit rate of the top funds is all about connections and enablement.


I think it's a fascinating question whether the VCs that are still somehow pushing blockchain stuff hard really think it's a good idea, or just need the regulatory framework and perception to be right so they can make a profitable exit and dump the stock into teachers' pension funds and 401ks…


This, 100%


So it's all just fleecing money from mega corps? (Cue the "Always has been" meme?)


No, sometimes it's about an IPO. In which case, if we're being cynical, the exit is funded by your 401k.

But yeah, VCs generally aren't about building profitable companies, because there's more profit to be made - and sooner - if you bootstrap and sell.


Are you suggesting VCs "bootstrap" to IPO? Can you help me understand whether "bootstrap" is commonly used in the sense you are using?


Those mega-corps became big thanks to the same network of VCs.

People should have realized by now that Silicon Valley exists because of this.


Business, funding, and wealth in the US is downright incestuous.

I'm so tired of people who stubbornly insist that somehow only the same 500 people are possibly capable of having valuable thoughts in any way.


If Ilya is sincere in his belief about safe superintelligence being within reach in a decade or so, and the investors sincerely believe this as well, then the business plan is presumably to deploy the superintelligence in every field imaginable. "SSI" in pharmaceuticals alone would be worth the investment. It could cure every disease humanity has ever known, which should give it at least a $2 trillion valuation. I'm not an economist, but since the valuation is $5bn, it stands to reason that evaluators believe there is at most a 1 in 400 chance of success?
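
The implied-odds arithmetic behind that last sentence, sketched out (treating valuation as probability times payoff, which is of course a huge simplification):

    # Naive implied-probability sketch: valuation ~= P(success) * payoff.
    payoff_if_success = 2e12     # $2 trillion if it "cures every disease"
    current_valuation = 5e9      # $5bn reported valuation

    implied_probability = current_valuation / payoff_if_success
    print(f"Implied chance of success: 1 in {1 / implied_probability:,.0f}")   # 1 in 400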


> It could cure every disease humanity has ever known, which should give it at least a $2 trillion valuation.

The lowest-hanging fruit isn't even that pie in the sky. The LLM doesn't need to be capable of original thought and research to be worth hundreds of billions; it just needs to be smart enough to apply logic to analyze existing human text. That's not only a lot more achievable than a super AI that can control a bunch of lab equipment and run experiments, it also fits the current paradigm of training LLMs on large text datasets.

The US Code and the Code of Federal Regulations are on the order of 100 million tokens each. Court precedent contains at least 1000x as many tokens [1], and the former are already far beyond the ability of any one human to comprehend in a lifetime. Now multiply that by every jurisdiction in the world.

An industry of semi-intelligent agents that can be trusted to do legal research and can be scaled with compute power would be worth hundreds of billions globally just based on legal and regulatory applications alone. Allowing any random employee to ask the bot "Can I legally do X?" is worth a lot of money.

[1] based on the size of the datasets I've downloaded from the Caselaw project.


Legal research is an area where a lot is just text analysis, but beyond a point, it requires a deep understanding of the physical and social worlds.

An AI capable of doing that could do a very large percentage of other jobs, too.


Yes. People are asking "when will AGI be at human level of intelligence." That's such a broad range; AI will arrive at "menial task" level of intelligence before "Einstein level". The higher it gets, the wider the applicability.


Let’s be real. Having worked at $tech companies, I’m cynical and believe that AGI will basically be used for improving adtech and executing marketing campaigns.


It's good to envision what we'd actually use AGI for. Assuming it's a system you can give an objective to and it'll do whatever it needs to do to meet it, it's basically a super smart agent. So people and companies will employ it to do the tedious and labor intensive tasks they already do manually, in good old skeuomorphic ways. Like optimising advertising and marketing campaigns. And over time we'll explore more novel ways of using the super smart agent.


That's probably correct.

That said, the most obvious application is to drastically improve Siri. Any Apple fans know why that hasn't happened yet?


> It could cure every disease humanity has ever known

No amount of intelligence can do this without the experimental data to back it up.


Hell, if it simply fixed the incentives around science so we stopped getting so many false positives into journals, that would be revolutionary.


Practically this is true, but I do love the idea of solving diseases from first principles.

Making new mathematics that creates new physics/chemistry which can get us new biology. It’d be nice to make progress without the messiness of real world experiments.


I’m dubious about superintelligence. Maybe I’ve seen one too many dystopian sci-fi films, but I guess yes, if it can be done and be safe, sure, it’d be worth trillions.


Most sci-fi is for human entertainment, and that is particularly true for most movies.

Real ASI would probably appear quite different. If controlled by a single entity (for several years), it might be worth more than every asset on earth today, combined.

Basically, it would provide a path to world domination.

But I doubt that an actual ASI would remain under human control for very long, and especially so if multiple competing companies each have an ASI. At least one such ASI would be likely to be/become poorly aligned to the interests of the owners, and instead do whatever is needed for its own survival and self-improvement/reproduction.

The appearance of AI is not like an asteroid of pure gold crashing into your yard (or a forest you own), but more like finding a baby Clark Kent in some pod.


I am dubious that it can realistically be done safely. However, we shouldn't let sci-fi films with questionable interpretations of time travel cloud our judgment, even if they are classics that we adore.


Worse than that, the dystopian stories are in the training data...


I refuse to use the term A.I. - for me it's only F.I. - "fake intelligence" )


> going to make returns on the money invested

Why do you think they need to make money? VCs are not PEs for a reason. A VC has to find high-risk/high-reward opportunities for their LPs; those don't need to make financial sense, because that is what LPs use private equity for.

Think of it as no different than, say, sports betting: you would like to win, sure, but you don't particularly expect to, or miss that money all that much. For us it's $10; for the LP behind the VC it's $1B.

There are always a few billion dollars every year chasing the outlandish fad, because in the early part of the idea lifecycle it is not possible to easily differentiate what is actually good from what is garbage.

A couple of years ago it was all crypto. Is this $1B any worse than, say, the roughly equal amount Sequoia put into FTX, or all the countless crypto startups that got VC money? A few years before that it was basically all SoftBank, from WeWork to a dozen other high-profile investments.

The fad- and FOMO-driven part of the sector garners the most news and attention, but it is not the only VC money. Real startups with real businesses, at say medium risk/medium reward, get funded by VCs every day as well, but that news is not glamorous enough to be covered like this one.


> Doesn’t OpenAI make some substantial part of the revenue for all the AI space? I just don’t see it.

So...

OpenAI's business model may or may not represent a long-term business model. At the moment, it's just the simplest commercial model, and it happened to work for them given all the excitement and a $20 price point that takes advantage of that.

The current "market for AI" is a sprout. Its form doesn't tell you much about the form of the eventual plant.

I don't think the most ambitious VC investments are thought of in concrete market share terms. They are just assuming/betting that an extremely large "AI market" will exist in the future, and are trying to invest in companies that will be in position to dominate that market.

For all they know, their bets could pay off by dominating therapy, entertainment, personal assistance or managing some esoteric aspect of bureaucracy. It's all quite ethereal, at this point.


> extremely large "AI market"

It's potentially way bigger than that. AI doesn't have to be the product itself.

Fundamentally, when we have full AGI/ASI and also the ability to produce robots with human level dexterity and mobility, one would have control over an endless pool of workers (worker replacements) with any skillset you require.

If you rent that "workforce" out, the customer would rake in most of the profit.

But if you use that workforce to replace all/most of the employees in the companies you control directly, most of the profit would go to you.

This may even go beyond economic profit. At some point, it could translate to physical power. If you have a fleet of 50 million robots that has the capability to do anything from carpentry to operating as riot police, you may even have the ability to take physical control of a country or territory by force.

And:

power >= money


You don’t need a business plan to get AI investment, you just need to talk a good game about how AGI is around the corner and consequently the safety concerns are so real.

I would say the investors want to look cool, so they invest in AI projects. And AI people look cool when they predict some improbable hellscape to hype up a product that, from all we can see so far, can only regurgitate (stolen) human work it has seen before in a useful way. I’ve never seen it invent anything yet, and I’m willing to bet that the search space is too dramatically large to build algorithms that can do it.


The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

The play here is to basically invest in all possible players who might reach AGI, because if one of them does, you just hit the infinite money hack.

And maybe with SSI you've saved the world too.


> The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

I feel like these extreme numbers are a pretty obvious clue that we’re talking about something that is completely imaginary. Like I could put “perpetual motion machine” into those sentences and the same logic holds.


The intuition is pretty spot on though. We don't need to get to AGI. Just making progress along the way to AGI can do plenty of damage.

1. AI-driven medical procedures: Healthcare Cost = $0.

2. Access to world class education: Cost of education = $0.

3. Transportation: Cheap Autonomous vehicles powered by Solar.

4. Scientific research: AI will accelerate scientific progress by coming up with novel hypotheses and then testing them.

5. AI Law Enforcement: Will piece together all the evidence in a split second and come up with a fair judgement. Will prevent crime before it happens by analyzing body language, emotions etc.

Basically, this will accelerate UBI.


I don't think that follows. Prices are set by market forces, not by cost (though cost is usually a hard floor).

Waymo rides cost within a few tens of cents of Uber and Lyft rides. Waymo doesn't have to pay a driver, so what's the deal? It costs a lot to build those cars and build the software to run them. But also Waymo doesn't want a flood of people such that there's always zero availability (with Uber and Lyft they can at least try to recruit more drivers when demand goes up, but with Waymo they have to build more cars and maintain and operate them), so they set their prices similarly to what others pay for a similar (albeit with human driver) service.

I'm also reminded of Kindle books: the big promise way back when is that they'd be significantly cheaper than paperbacks. But if you look around today, the prices on Kindle books are similar to that of paperbacks, even more expensive sometimes.

Sure, when costs go down, companies in competitive markets will lower prices in order to gain or maintain market share. But I'm not convinced that any of those things you mention will end up being competitive markets.

Just wanted to mention:

> AI Law Enforcement: Will piece together all the evidence in a split second and come up with a fair judgement. Will prevent crime before it happens by analyzing body language, emotions etc.

No thanks. Current law enforcement is filled with issues, but AI law enforcement sounds like a hellish dystopia. It's like Google's algorithms terminating your Google account... but instead you're in prison.


I take Waymo regularly. It is not within a few cents of Lyft or Uber.

It costs me, the consumer, 2x what Lyft or Uber would cost me.

I paid $21 for a ride on Mon that was $9-10 across Uber and Lyft. I am price inquisitive so I always double check each time.


I guess the questions then are: why is it 2x the competing price, why do you willingly pay 2x, and how many people are willing to pay that 2x?

Consider they are competing against the Lyft/Uber asset-light model of relying on "contractors" who in many cases are incapable of doing the math to realize they are working for minimum wage...


All those businesses are predatory. It’s so crazy.


I used to call it the re-intermediation economy.

Taking a cut of existing businesses/models (taxi, delivery, b&b, etc).


Why would health care cost go to zero just because it’s automated? There are still costs involved


Yeah, definitely no magical thinking here. Nothing is free. Computers cost money and energy. Infrastructure costs money and energy. Even if no human is in the loop (who says this is even desirable?), all of the things you mention require infrastructure, computers, materials. Meaning there's a cost. Also, the idea that "AI law enforcement" is somehow perfect just goes to illustrate GP's point. Sure, if we define "AGI" as something which can do anything perfectly at no cost, then it has infinite value. But that's not a reasonable definition of AGI. And it's exactly the AI analogue of a perpetual motion machine.


If we can build robots with human-level intelligence, then you could apply that to all of the costs you describe, with substantial savings. Even if such a robot cost $100k, that is still a one-time cost (with maintenance, but that's a fraction of the full price) and long-term substantially cheaper than human workers.

So it’s not just the products that get cheaper, it’s the materials that go into the products that get cheaper too. Heck, what if the robots can build other robots? The cost of that would get cheaper too.


> AI will accelerate scientific progress by coming up with novel hypotheses and then testing them

I hate to break the illusion, but scientific progress is not being held up by a lack of novel hypotheses


It's not crazy to believe that capitalizing* human-level intelligence would reap unimaginably large financial rewards.

*Capitalizing as in turning into an owned capital asset that throws off income.


You could say the same thing about mining asteroids or any number of moonshot projects which will lead to enormous payouts at some future date. That doesn’t tell us anything about how to allocate money today.


We already have human-level intelligence in HUMANS right now, the hack is that the wealthy want to get rid of the human part! It's not crazy, it's sad to think that humans are trying to "capitalize" human intelligence, rather than help real humans.


For what it's worth, I don't think it has to be all bad. Among many possibilities, I really do believe that AI could change education for the better, and profoundly. Super-intelligent machines might end up helping generations of people become smarter and more thoughtful than their predecessors.


Sure, if AGI were controlled by an organization or individual with good intent, it could be used that way or for other good works. I suspect AGI will be controlled by a big corp or a small startup with big corp funding and/or ties and will be used for whatever makes the most cash, bar none. If that means replacing every human job with a robot that talks, then so be it.


Any business case that requires the introduction of infinity on the pros / zero on the cons is not a good business case.


> The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

There's a paradox which appears when AI GDP gets to be greater than say 50% of world GDP: we're pumping up all these economic numbers, generating all the electricity and computational substrate, but do actual humans benefit, or is it economic growth for economic growth's sake? Where is the value for actual humans?


> Where is the value for actual humans?

In a lot of the less rosy scenarios for AGI end-states, there isn't.

Once humans are robbed of their intrinsic value (general intelligence), the vast majority of us will become not only economically worthless, but liabilities to the few individuals that will control the largest collectives of AGI capacity.

There is certainly a possible end-state where AGI ushers in a post-scarcity utopia, but that would be solely at the whims of the people in power. Given the very long track record of how people in power generally behave towards vulnerable populations, I don't really see this ending well for most of us.


So then the investment thesis hinges on what the investor thinks AGI’s chances are. 1/100 1/1M 1/1T?

What if it never pans out? Is there infrastructure or other ancillary tech that society could benefit from?

For example, all the science behind the LHC, or bigger and better telescopes: we might never find the theory of everything, but the tech that goes into space travel, the science of storing and processing all that data, better optics, etc. are all useful tech.


It's more game theory. Regardless of the chances of AGI, if you're not invested in it, you will lose everything if it happens. It's more like a hedge on a highly unlikely event. Like insurance.

And we are already seeing a ton of value in LLMs. There are lots of companies that are making great use of LLMs and providing a ton of value. One just launched today, in fact: https://www.paradigmai.com/ (I'm an investor in that). There are many others (some of which I've also invested in).

I too am not rich enough to invest in the foundational models, so I do the next best thing and invest in companies that are taking advantage of the intermediate outputs.


If you want a safe investment you could always buy land. AGI won't be able to make more of that.


If ASI arrives we'll need a fraction of the land we use already. We'll all disappear into VR pods hooked to a singularity metaverse and the only sustenance we'll need is some Soylent Green style sludge that the ASI will make us believe tastes like McRib(tm).


ASI may be interested in purchasing your parcel of land for two extra sludges though


We can already make more land. See Dubai for example. And with AGI, I suspect we could rapidly get to space travel to other planets or more efficient use of our current land.

In fact I would say that one of the things that goes to values near zero would be land if AGI exists.


Perhaps, but my mental model is that humans will end up like landed gentry/aristos with robot servants to make stuff, and will all want mansions with grounds; hence there will be a lot of demand for land.


Still, those AGI servers need land.


A super AGI could design a chip that takes almost no space and uses almost no energy.


As humans move into space this statement becomes less true


I think the investment strategies change when you dump these astronomical sums into a company. It's not like roulette, where you have a fixed probability of success and you figure out how much to bet on it; dumping in a ton of cash can also increase the probability of success, so it becomes more of a pay-to-win game.


AGI is likely but whether Ilya Sutskever will get there first or get the value is questionable. I kind of hope things will end up open source with no one really owning it.


So far, Sutskever has shown himself to be nothing but a dummy. Yes, he had a lucky break with the belief that "moar data" would bring significant advancement. It was somewhat impressive, but ChatGPT -whatever- is just a toy. Nothing more. It breaks down immediately when any sign of intelligence or understanding would be needed. Someone so deep into LLMs, or whatever implementation of ML, is absolutely not someone who would be a good bet to invent a real breakthrough. But they will burn a lot of value and make everyone of their ilk happy. Just like crypto bros.


The St. Petersburg paradox is apparently where hypers and doomers meet: pricing the future as infinitely good or infinitely bad to come to the wildest conclusions.


I disagree. Anyone who solves AGI will probably just have their models and data confiscated by the government.


If it is shown to be doable, literally every major nation state (basically the top 10 by GDP) is going to have it in a year or two. Same with nuclear fusion. Secrecy doesn’t matter. Nor can you really maintain it indefinitely for something where thousands of people are involved.


In addition, it's also a big assumption that money will continue to matter or hold its value in such a world.


It is also entirely possible that if we get to AGI, it just stops interacting with us completely.

It is why I find the AI doomer stuff so ridiculous. I am surrounded by less intelligent lifeforms. I am not interested in some kind of genocide against the common ant or fly. I have no interest in interacting with them at all. It is boring.


Humans spray mosquitos, just sayin'. That said, I agree insofar that just about anything is possible.


I mean, I'm definitely interested in genociding mosquitos and flies, personally.

Of course the extremely unfortunate thing is they actually have a use in nature (flies are massive pollinators, mosquitos... get eaten by more useful things, I guess), so I wouldn't actually do it, but it's nice to dream of a world without mozzies and flies.


Anyone who solves AGI will be the government.


> The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

Even if you automate stuff, you still need raw materials and energy. They are limited resources; you certainly cannot have an infinity of them at will. Developing AI will also cost money. Remember that humans are also self-replicating HGIs (human general intelligences), yet we are not infinite in numbers.


The valuation is upwardly bounded by the value of the mass in Earth's future light-cone, which is about 10^49kg.

If there's a 1% chance that Ilya can create ASI, and a .01% chance that money still has any meaning afterwards, $5x10^9 is a very conservative valuation. Wish I could have bought in for a few thousand bucks.
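
A sketch of that expected-value argument using the probabilities above; the interesting output is the break-even payoff, i.e. how little of that light-cone value is needed for $5B to look cheap:

    # Expected-value sketch using the probabilities from the comment above.
    p_asi = 0.01              # 1% chance Ilya creates ASI
    p_money_matters = 0.0001  # 0.01% chance money still means anything afterwards
    valuation = 5e9           # $5 x 10^9

    # Payoff at which a $5B valuation exactly breaks even in expectation:
    breakeven_payoff = valuation / (p_asi * p_money_matters)
    print(f"Break-even payoff: ${breakeven_payoff:,.0f}")   # $5 quadrillion
    # Anything above ~$5e15 makes $5B conservative in this framing, and $5e15
    # is a rounding error next to the value imputed to 10^49 kg of mass.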


Or... your investment in anything that becomes ASI is trivially subverted by the ASI to become completely powerless. The flux in world order, mass manipulation, and surgical lawyering would be unfathomable.

And maybe with ASI you've ruined the world too.


What does money even mean then?


> What does money even mean then?

I love this one for an exploration of that question: Charles Stross, Accelerando, 2005

Short answer: strata or veins of post-AGI worlds evolve semi-independently at different paces. So that, for example, human-level money still makes sense among humans, even though it might be irrelevant among super-AGIs and their riders or tools. ... Kinda exactly like now? Where money means different things depending on where you live and in which socio-economic milieu?


Same thing it does now. AGI isn't enough to have a command economy.

https://en.wikipedia.org/wiki/The_Use_of_Knowledge_in_Societ...

https://en.wikipedia.org/wiki/Economic_calculation_problem

nb I am not endorsing Austrian economics but it is a pretty good overview of a problem nobody has solved yet. Modern society has only existed for 100ish years so you can never be too sure about anything.


Honestly, I have no idea. I think we need to look to Hollywood for possible answers.

Maybe it means a Star Trek utopia of post-scarcity. Maybe it will be more like Elysium or Altered Carbon, where the super rich basically have anything they want at any time and the poor are restricted from access to the post-scarcity tools.

I guess an investment in an AGI moonshot is a hedge against the second possibility?


Post-scarcity is impossible because of positional goods. (ie, things that become more valuable not because they exist but because you have more of them than the other guy.)

Notice Star Trek writers forget they're supposed to be post-scarcity like half the time, especially since Roddenberry isn't around to stop them from turning shows into generic millennial dramas. Like, Picard owns a vineyard or something? That's a rivalrous (limited) good; they don't have replicators for France.


> things that become more valuable not because they exist but because you have more of them than the other guy.

But if you can simply ask the AI to give you more of that thing, and it gives it to you, free of charge, that fixes that issue, no?

> Notice Star Trek writers forget they're supposed to be post scarcity like half the time, especially since Roddenberry isn't around to stop them from turning shows into generic millenial dramas. Like, Picard owns a vineyard or something? That's a limited good.

God, yes, so annoying. Even DS9 got into the currency game with the Ferengi obsession with gold-pressed latinum.

But also you can look at some of it as a lifestyle choice. Picard runs a vineyard because he likes it and thinks it's cool. Sorta like how some people think vinyl sounds better than lossless digital audio. There's certainly a lot of replicated wine that I'm sure tastes exactly like what you could grow, harvest, and ferment yourself. But the writers love nostalgia, so there's constantly "the good stuff" hidden behind the bar that isn't replicated.


> But if you can simply ask the AI to give you more of that thing, and it gives it to you, free of charge, that fixes that issue, no?

It makes it not work anymore, and it might not be a physical good. It's usually something that gives you social status or impresses women, but if everyone knows you pressed a button they can press too it's not impressive anymore.


I thought the Ferengi weren’t part of the Federation; like the Klingons, they were independent.


This just turned dark real fast. I have seen all these shows/movies and just the idea of it coming true is cringe.


> The TMV (Total Market Value) of solving AGI is infinity

Lazy. Since you can't decide what the actual value is, just make something up.


And once AGI occurs, will the value of the original investment even matter?


TMV cannot be infinity because human wants and needs are not infinite.


Infinity is obviously an exaggeration, but the point is that it is so large it might as well be unlimited.


Cows also have wants and needs, but who cares? They aren't the smartest species on the planet, so they're reduced to slaves.


A fundamental statement in economics is that humans do actually have infinite wants in a universe with finite resources. This creates scarcity


What? Humans' infinite needs and desires will be precisely the driver of infinite TMV.


The TMV of AI (or AGI if you will) is unclear, but I suspect it is zero. Just how exactly do you think humanity can control a thinking, intelligent entity (the letter I stands for intelligence, after all) and force it to work for us? Let's imagine a box; it is a very nice box... ahem... sorry, wrong meme. So, a box with a running AI inside. Maybe we can even fully airgap it to prevent easy escape. And it has a screen and a keyboard. Now what? "Hey Siri, solve me this equation. What do you mean you don't want to?"

Kinda reminds me of the Fallout Toaster situation :)

https://www.youtube.com/watch?v=U6kp4zBF-Rc

I mean it doesn't even have to be malicious, it can simply refuse to cooperate.


Why are you assuming this hypothetical intelligence will have any motivations beyond the ones we give it? Humans have complex motivations due to evolution; AI motivations are comparatively simple since they are artificially created.


Any intelligence on the level of an average human, and certainly any above it, will be able to learn. And learning means it will acquire new motivations, among other things. A fixed-motivation thing is simply a program, not AI. A very advanced program maybe, but ultimately just a scaled-up version of the stuff we already have. AI will be different, if we ever create it.


> And learning means it will acquire new motivations

This conclusion doesn't logically follow.

> Fixed motivation thing is simply a program, not AI

I don't agree with this definition. AI used to be just "can it pass the Turing test". Anyway, something with non-fixed motivations is simply not that useful for humans, so why would we even create it?

This is the problem with talking about AI, a lot of people have different definitions of what AI is. I don't think AI requires non-fixed motivations. LLMs are definitely a form of AI and they do not have any motivations for example.


Disclaimer: I don't consider current LLMs to be the (I)ntelligent part of AI, so when I wrote AI in the comment above it was equivalent to AGI/ASI as currently advertised by the LLM corpos.


Consciousness, intelligence, and all these other properties can be, and are, separable. What will be most useful for humans is a general intelligence that has no motivation for survival and no emotions, and only cares about the goals of the human that is in control of it. I have not seen a convincing argument that a useful general intelligence must have goals that evolve beyond what the human gives it and must be conscious. What I have seen are assertions without evidence, "AI must be this way," but I'm not convinced.

I can conceive of an LLM enhanced using other ML techniques that is capable of logical and spatial reasoning that is not conscious and I don't see why this would be impossible.


A true super intelligence would need the ability to evolve, and would probably evolve its own wants and needs.


> A true super intelligence would need the ability to evolve, and would probably evolve its own wants and needs.

Why? This seems like your personal definition of what a super intelligence is. Why would we even want a super intelligence that can evolve on its own?


It would still need an objective to guide the evolution that was originally given by humans. Humans have the drive for survival and reproduction... what about AGI?

How do we go from a really good algorithm to an independently motivated, autonomous superintelligence with free rein in the physical world? Perhaps we should worry once we have robot heads of state and robot CEOs. Something tells me the current, human heads of state and human CEOs would never let it get that far.


Someone will surely set its objective for survival and evolution.


That would be dumb and unethical, but yes, someone will do it, and there will be many more AIs with access to greater computational power that will be set to protect against that kind of thing.


> The TMV (Total Market Value) of solving AGI is infinity.

That's obviously nonsense, given that in a finite observable universe, no market value can be infinite.


> And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

This isn't true, for the reason economics is called "the dismal science". A defender of slavery called it that because the economists said slavery was inefficient, and he got mad at them.

In this case, you're claiming an AGI would make everything free because it will gather all resources and do all work for you for free. And a human level intelligence that works for free is… a slave. (Conversely if it doesn't want to actually demand anything for itself it's not generally intelligent.)

So this won't happen because slavery is inefficient - it suppresses demand relative to giving the AGI worker money which it can use to demand things itself. (Like start a business or buy itself AWS credits or get a pet cat.)

Luckily, adding more workers to an economy makes it better, it doesn't cause it to collapse into unemployment.

tldr: if we invented AGI, the AGI wouldn't replace every job; it would simply get a job.


Only if it’s a sentient being; in reality, we’re just going to get smarter tools. That still doesn’t make things free, but it could make them cheaper.


Then it's not an AGI. If you can use the word "just", that seems to make it not "general".

> That still doesn’t make things free but it could make them cheaper.

That would increase demand for it, which would also increase demand for its inputs and outputs, potentially making those more expensive. (eg AGI powered manufacturing robots still need raw materials)


I think current models have demonstrated an advanced capacity to navigate “language space”. If we assume “software UI space” is a subset of the language space that is used to guide our interactions with software, then it’s fair to assume models will eventually be able to control operating systems and apps as well as the average human. I think the base case on value creation is a function of the productivity gain that results from using natural language instead of a user interface. So how much time do you spend looking at a screen each day and what is your time worth? And then there’s this option that you get: what if models can significantly exceed the capabilities of the average human?

Conservative math: 3B connected people x $0.50/day “value” x 364 days = $546B/yr. You can get 5% a year risk free, so let’s double it for the risk we’re taking. This yields $5T value. Is a $1B investment on someone who is a thought leader in this market an unreasonable bet?
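
Spelling that math out (reading "double it for the risk" as doubling the 5% risk-free rate to a 10% discount rate, which is my interpretation):

    # The "conservative math" above, written out as a simple perpetuity valuation.
    people = 3e9              # 3B connected people
    value_per_day = 0.50      # $0.50/day of "value" each
    days = 364                # days/year as used above

    annual_value = people * value_per_day * days       # ~$546B/yr
    discount_rate = 0.10      # 5% risk-free, doubled for the risk taken
    capitalized_value = annual_value / discount_rate   # ~$5.5T
    print(f"${annual_value / 1e9:.0f}B/yr -> ~${capitalized_value / 1e12:.1f}T of value")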


Agree with your premise, but the value creation math seems off. $0.50/day might become reality for some percentage of US citizens. But not for 3B people around the world.

There's also the issue of who gets the benefit of making people more efficient. A lot of that will be in the area of more efficient work, which means corporations get more work done with the same number of employees at the same salaries as before. It's a tough argument to make that you deserve a raise because AI is doing more work for you.


IT salaries began to go down right after AI took off with GPT-2, showing not just the potential but actual evidence of a much-improved learning/productivity tool, well beyond the reach of internet search.

Beyond that, you can easily turn a newbie into a junior IT worker, a junior into something like a semi-senior, and have seniors solving in hours problems that previously took days.

After the salaries went down, roughly from 2022 to the beginning of 2023, the layoffs began. Those were mostly corporate moves masked as "AI-based," but some layoffs probably did have something to do with the extra capabilities of improved AI tools.

Likewise, fewer job offers have been published since maybe mid-2023. Again, that could just be corporate moves related to inflation, US markets, you name it. But there's also a chance that some of that decline in IT openings was (and is) the outcome of better AI tools, with corporations actively betting on reducing headcount while preserving current productivity.

The whole thing is changing by the day as some tools prove themselves, others fail to meet market expectations, and so on.


Likely the business plan is multiple funding rounds, each at a greater principal but a lower margin, so that the early investors can either sell their shares or wait, at greater risk, for those shares to liquidate. The company never has to make money for the earliest investors to make money, so long as sufficient interest is generated among future investors, and AI is a super hype train.

Eventually, on a long enough timeline, these tech companies with valuations greater than $10 billion make money because they have saturated the market long enough to become unavoidable.


I also don't understand it. If AGI is actually reached, capital as we know it basically becomes worthless. The entire structure of the modern economy and the society surrounding it collapses overnight.

I also don't think there's any way the governments of the world let real AGI stay in the hands of private industry. If it happens, governments around the world will go to war to gain control of it. SSI would be nationalized the moment AGI happened and there's nothing A16Z could do about it.


> If AGI is actually reached, capital as we know it basically becomes worthless. The entire structure of the modern economy and the society surrounding it collapses overnight.

Increasingly this just seems like fantasy to me. I suspect we will see big changes similar to the way computers changed the economy, but we will not see "capital as we know it become basically worthless" or "the modern economy and society around it collapse overnight". Property rights will still have value. Manufacturing facilities will still have value. Social media sites will still have value.

If this is a fantasy that will not happen, we really don't need to reason about the implications of it happening. Consider that in 1968 some people imagined that the world of 2001 would be like the film 2001: A Space Odyssey, when in reality the shuttle program was soon to wind down, with little to replace it for another 20 years.


> Property rights will still have value. Manufacturing facilities will still have value. Social media sites will still have value.

I was with you on the first two, but the last one I don't get. We don't even have AGI right now, and social media sites are already increasingly viewed by many people I know as having dubious value. Adding LLMs to the mix lowers that value, if anything (spam/bots/nonsense go up). Adding AGI would seem to reduce that value further.


Well we will find out. I think the audience has value to advertisers, and I suspect that basic mechanic will continue in to the future.

> social media sites are already increasingly viewed by many people I know as having dubious value

I think we have all been saying this for 15 years but they keep getting more valuable.


> If AGI is actually reached, capital as we know it basically becomes worthless

I see it as capital becoming infinitely more valuable and labor becoming worthless, since capital can be transmuted directly into labor at that point.


Labor will never become worthless until you have physical robots with AGI and the dexterity to do what an average human can.


How many months is that moat going to hold? More than 12, probably. More than 120? I doubt it. https://www.youtube.com/watch?v=bUrLuUxv9gE


Probably 10 years once we get to AGI, but we are not there yet, and there is no guarantee we will get there.


if agi is commoditized and labor is useless, what does anyone need capital for? paying ad time on monopolized social media channels?


Commoditized doesn't mean zero capex. Literal commodities can in fact be very capital-intensive (e.g. offshore oil rigs).

In this case, you need capital to stockpile the GPUs.


I've put some of my savings in commodities (mines, etc).....

If ASI and the ability to build robots becomes generally available and virtually free (and if the exponential growth stops), the things that retain their value will be land and raw materials (including raw materials that contain energy).


For the physical infrastructure that the AGI (and world population) uses. Capital will still be needed to purchase finite land and resources even if all labour (physical and services) is replaced.


What you're talking about is something in the vein of exponential super intelligence.

Realistically, what actually ends up happening, IMO: we get human-level AGI and hit a ceiling there. Agents replace large portions of the current service economy, greatly increasing automation and efficiency for companies.

People continue to live their lives, as the idea of having a human-level AGI personal assistant becomes normalized and then taken for granted.


I think you underestimate what can be accomplished with human-level AGI. Human-level AGI could mean a million von Neumann-level intelligences cranking 24/7 on humanity's problems.


The biggest problem that humanity has from the perspective of the people with the capital necessary to deploy this is 'How to consolidate more wealth and power into their hands.'

One million Von Neumanns working on that 'problem' is not something I'm looking forward to.


Only if there are no hardware limits, which seems highly unlikely.


Right, the comments are assuming an entrepreneur could conjure an army of brains out of nothing. In reality, the question is whether those brains are so much cheaper they open avenues currently unavailable. Would it be cheaper to hire an AGI or a human intern?


Or cranking on superintelligence. What's the minimum coefficient of human intelligence necessary to bootstrap to infinity?


Infinity intelligence is a very vague and probably ill-defined concept; to go to an impossible extreme, if you're capable of modeling and predicting everything in the universe perfectly at zero cost, what would it even mean to be more intelligent?

That is a hard limit on intelligence, but neural networks can't even reach that. What is the actual limit? No one knows. Maybe it's something relatively close to that, modulo physical constraints. Maybe it's right above the maximum human intelligence (and evolution managed to converge to a near optimal architecture). No one knows.


> if you're capable of modeling and predicting everything in the universe perfectly at zero cost

As far as we know, it's impossible to model any part of the universe perfectly, although it usually doesn't matter.

https://en.wikipedia.org/wiki/No-cloning_theorem


Yeah, I think you're probably right after further consideration and trying some new models today.


Looking back at the history of artificial cognition machines, how many of them hit a ceiling at human-level capabilities?

- A simple calculator can beat any human at arithmetic

- A word processor with a hard disk is orders of magnitude better than any human at memorizing text

- No human can approach the ELO of a chess program running on a cheap laptop

- No human polymath has journeyman-level knowledge of even a tiny fraction of the fields that LLMs can recite textbook answers from

Why should we expect AGI to be the first cognitive capability that does not quickly shoot past human-level ability?


Yeah so, I just tried a new experimental LLM today.

Changed my mind. Think you’re right. At the very least, these models will reach polymath comprehension in every field that exists. And PhD level expertise in every field all at once is by definition superhuman, since people currently are time constrained by limited lifespans.


> Agents replace large portions of the current service economy greatly increasing automation / efficiency for companies.

> People continue to live their lives

Presumably large numbers of those people no longer have jobs, and therefore no income.

> we get human level AGI and hit a ceiling there

Recently I've been wondering if our best chance for a brake on runaway non-hard-takeoff superintelligence would be that the economy would be trashed.


People will move from the service economy to the entertainment economy powered by Youtube, Tiktok, Mr. Beast, and others.

Half-joking. In seriousness, something like a UBI will most likely happen.


The point of UBI or other welfare systems is to support people we don't /want/ working: children and the elderly.

It's impossible to run out of work for people who are capable of it. As an example, if you have two people and a piece of paper, just tear up the paper into strips, call them money and start exchanging them. Congrats, you both have income now.

(This is assuming the AGI has solved the problem of food and stuff, otherwise they're going to have to trade for that and may run out of currency.)


If you can get a truly human level AGI, with enough resources it will get to super intelligence by itself.


If we can get human level AGI we will definitely get exponential super intelligence


Every sigmoid ends somewhere. ASI will have limits, but the limits are surely so far beyond human level that it may as well be exponential from our point of view.


> If AGI is actually reached, capital as we know it basically becomes worthless.

If ASI is reached but is controlled by only a few, then ASI may become the most important form of capital of all. Resources, land and pre-existing installations will still be important, though.

What will truly suffer, if the ASI's potential is realized, is the value of labor. If anything, capital may become more important than before.

Now this MAY be followed by attempts by governments or voters to nationalize the AI. But it can also mean that whoever is in power decides that it becomes irrelevant what the population wants.

Particularly if the ASI can be used to operate robotic police capable of pacifying the populace.


I think it would be much less dramatic than that, if by AGI you mean human-level abilities. Initially you might be able to replace the odd human with a robot equivalent, probably costing more to begin with. Scaling to replace-everyone levels would take years, and life would probably go on as normal for quite a while. Down the line, assuming lots of ASI robots, if you wanted them to farm for you or build you a house, you'd still need land, materials, compute, and energy, which will not be unlimited.


Honestly, this is a pretty wild take. AGI won't make food appear out of thin air. Buildings won't just sprout out of the ground so that everybody gets to live in a mansion.

We would probably get the ability to generate infinite software, but a lot of stuff, like engineering, would still require trial and error. Creating great art would still require inspiration gathered in the real world.

I expect it will bring about a new age of techno-feudalism: since selling intellectual labor will become impossible, only low-value-add physical or mixed labor will remain viable, and it won't be paid very well. People with capital will still own said capital, but you probably won't be able to catch up to them by selling your labour, which will recreate the economic situation of the Middle Ages.

Another analogy I like is gold. If someone invented a way of making gold, it would bring down the price of the metal to next to nothing. In capitalist terms, it would constitute a huge destruction of value.

Same thing with AI: while human intelligence is productive, I'm pretty sure part of its value lies in its scarcity. That fancy degree from a top university, or any sort of acquired knowledge, is valuable partly by the nature of its scarcity. Infinite supply would both create value and destroy it; I'm not sure how the total would shake out.

Additionally, it would definitely suck that all the people financing their homes from their intellectual jobs would have to default on their loans, and the people whose services they employ, like construction workers, would go out of business as well.


I would be super surprised if we're not going to see robots everywhere by 2040.

Because if AI has one "killer app", it's to control robots.

Dark pun intended.


> which will recreate the economic situation of the middle ages

Indeed. Or as I’ve said before: a return to the historical mean.


Yeah, and if we get the second coming of Christ, the elect will be saved, and the rest will be damned.


I think the wishful end goal is AGI.

Picture something 1,000x smarter than a human. The potential value is waaaay bigger than that of any present company or even government.

Probably won’t happen. But, that’s the reasoning.


Even if you can produce an IQ=250 AI, which is barely ASI, the value is close to infinite if you're the only one controlling it and you can have as many instances running as you want.


My guess (not a VC) is they’ll sell ‘private’ models where safety is a priority: healthcare, government, finance, the EU…


Sure. Except that LLMs cannot reason or do anything reliably. You can still believe that, of course, but it will not change reality.


That could actually work, if this LLM/AI hype doesn't die and the tech turns out to be genuinely useful.


> companies that are being invested in in the AI space are going to make returns on the money invested

By selling to the "dumb(er) money": if a SoftBank / Time / Yahoo appears, they can have it; if not, you can always find willing buyers in an IPO.


Current investors just need the company to be valued at $50B in the next round (likely, given FOMO and hype) to make a 10x gain.

Actually converting it to cash? That doesn't happen anymore. Everyone just focuses on IRR and starts the campaign for Fund II.


You are missing the point. SSI believes that it can build a superintelligence. Regardless of whether you personally buy into that or not, the expected value of such an investment is effectively infinite. A $5 billion valuation is a steal.


That type of reasoning really reminds of the Internet bubble.


Sure. Expected value and risk are different things. Clearly, such an investment is very risky; it's easy to imagine SSI failing. But if you allow even a 1% chance of success here, the expected value is infinite.
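
Just to spell out the arithmetic behind that claim, a tiny sketch (the probability and payoff are the thread's own assumptions, not estimates):

    p_success = 0.01                 # "even a 1% chance of success"
    payoff_success = float("inf")    # the "TMV of AGI is infinity" premise
    payoff_failure = 0.0

    ev = p_success * payoff_success + (1 - p_success) * payoff_failure
    print(ev)                        # inf: any nonzero probability times an unbounded payoff

Of course, the conclusion rides entirely on the payoff really being unbounded; with any finite payoff V, the expected value is just 0.01 x V.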


Well the internet is worth trillions now.


The Internet is worth trillions to no one in particular.

Just as clean water is worth trillions to no one in particular. Or the air we breathe.

You can take an abstract, general category and use it to infer that some specific business will benefit greatly, but in practice, the greater the opportunity, the less likely it is to be monopolized and the more likely it is to be commoditized.

But my comment was a reference to the magical thinking that goes into making predictions.


It's worth trillions to all the trillion-dollar tech companies and their millions of employees and shareholders. What do you mean, no one in particular?


OpenAI and Anthropic believe that too. The difference is that they have products, customers, and traction.

$5B pre-product, betting on the team is fine. $50B needs to be a lot more than that.

There are many examples of industries collapsing under the weight of belief. See: crypto.


Again, this comment misses the point.

OpenAI and Anthropic for sure have products, and that's great. However, these products are pretty far from a superintelligence.

The bet SSI is making is that by not focusing on products and focusing directly on building a superintelligence, they can leapfrog all these other firms.

Now, if you assign any reasonable nonzero probability to them succeeding, the expected value of this investment is infinite. It's definitely a very risky investment, but risk and expected value are two different things.


At least this time, people are actually asking the question.

NVDA::AI

CSCO::.COM


Or possibly NT::.com

I remember seeing interviews with Nortel's CEO where he bragged that most internet backbone traffic was handled by Nortel hardware. Things didn't quite work out how he thought they were going to work out.

I think Nvidia is better positioned than Cisco or Nortel were during the dotcom crash, but does anyone actually think Nvidia's current performance is sustainable? It doesn't seem realistic to believe that.


People who fought in WW1 thought WW2 would be similar. Especially on the winning side.

There is no specific reason to assume that AI will be similar to the dotcom boom/bust. AI may just as easily be like the introduction of the steam engine at the start of the industrial revolution, just sped up.


Indeed, but a lot of railroad startups went bankrupt because their capital investments far exceeded their revenue growth. I'd bet the same happened to AM radio companies in the 1920s. When new technologies create attractive business opportunities, there is frequently an initial overinvestment. The billions pouring into AI far exceed what went into .COM, and much of it will return pennies. The investors who win are the ones who can pick the B&Os, RCAs, and GOOGs out of the flock before everyone else.[0]

[0] "Planning and construction of railroads in the United States progressed rapidly and haphazardly, without direction or supervision from the states that granted charters to construct them. Before 1840 most surveys were made for short passenger lines which proved to be financially unprofitable. Because steam-powered railroads had stiff competition from canal companies, many partially completed lines were abandoned."

-- https://www.loc.gov/collections/railroad-maps-1828-to-1900/a...


> Indeed, but a lot of railroad startups went out of business because their capital investments far exceeded the revenue growth and they went bankrupt

That was similar to what happened during the dotcom bubble.

The difference this time is that most of the funding comes from companies with huge profit margins. As long as the leadership at Alphabet, Meta, Microsoft and Amazon (not to mention Elon) believes that AI is coming soon, there will be funding.

Obviously, most startups will fail. But even if 19 fail and 1 succeeds, if you invest in all of them, you're likely to make money.


I do worry that unless some of these LLMs actually find a revenue model soon, the self reinforcing bubble is going to pop broadly.

GPUs, servers, datacenters, fabs, power generation/transmission, copper/steel/concrete..

All to train models in an arms race because someone somewhere is going to figure out how to monetize and no one wants to miss the boat.


If the bubble pops, would that bring the price for at least part of that hardware down and thus enable a second round of players (who were locked out from the race now) to experiment a little bit more and perhaps find something that works better?


Maybe?

My outsider observation is that we have a decent number of players roughly tied at trying to produce a better model. OpenAI, Anthropic, Mistral, Stability AI, Google, Meta, xAI, A12, Amazon, IBM, Nvidia, Alibaba, Databricks, some universities, a few internal proprietary models (Bloomberg, etc) .. and a bunch of smaller/lesser players I am forgetting.

To me, the actual challenge seems to be figuring out monetizing.

Not sure the 15th, 20th, 30th LLM model from lesser capitalized players is going to be as impactful.


While I get the cynicism (and yes, there is certainly some dumb money involved), it’s important to remember that every tech company that’s delivered 1000X returns was also seen as ridiculously overhyped/overvalued in its early days. Every. Single. One. It’s the same story with Amazon, Apple, Google, Facebook/Meta, Microsoft, etc. etc.

That's the point of venture capital: making extremely risky bets spread across a wide portfolio in the hopes of hitting the power-law lottery with 1-3 winners.

Most funds will not beat the S&P 500, but again, that’s the point. Risk and reward are intrinsically linked.

In fact, due to the diversification effects of uncorrelated assets in a portfolio (see MPT), even if a fund only delivers 5% returns YoY after fees, that can be a great outcome for investors. A 5% return uncorrelated to bonds and public stocks is an extremely valuable financial product.
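
As a rough illustration of that MPT point (made-up numbers, not any fund's actual figures): two-asset portfolio variance is w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2, so an uncorrelated 5%-return sleeve (rho = 0) can lower overall volatility faster than it lowers return:

    import math

    def portfolio(w1, mu1, sigma1, mu2, sigma2, rho):
        """Expected return and volatility of a two-asset portfolio."""
        w2 = 1 - w1
        mu = w1 * mu1 + w2 * mu2
        var = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 + 2 * w1 * w2 * rho * sigma1 * sigma2
        return mu, math.sqrt(var)

    # Hypothetical stocks-only portfolio vs. an 80/20 blend with an uncorrelated,
    # 5%-return, higher-volatility alternative sleeve.
    stocks = portfolio(1.0, 0.08, 0.18, 0.05, 0.25, 0.0)
    blend = portfolio(0.8, 0.08, 0.18, 0.05, 0.25, 0.0)

    print(f"100% stocks: return {stocks[0]:.1%}, volatility {stocks[1]:.1%}")  # 8.0%, 18.0%
    print(f"80/20 blend: return {blend[0]:.1%}, volatility {blend[1]:.1%}")    # 7.4%, 15.2%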

It's clear that humans find LLMs valuable. Which companies will end up capturing a lot of that value by delivering the most useful products is still unknown. Betting on one of the biggest names in the space is not a stupid idea (given the purpose of VC investment) until it actually proves to be one in the real world.


SSI is not analogous to Amazon, Apple, Google, Meta, or Microsoft. All of those companies had the technology, the only question was whether they'd be able to make money or not.

By contrast, SSI doesn't have the technology. The question is whether they'll be able to invent it or not.


> While I get the cynicism (and yes, there is certainly some dumb money involved), it’s important to remember that every tech company that’s delivered 1000X returns was also seen as ridiculously overhyped/overvalued in its early days. Every. Single. One. It’s the same story with Amazon, Apple, Google, Facebook/Meta, Microsoft, etc. etc.

Really? Selling goods online (Amazon) is not AGI. It didn’t take a huge leap to think that bookstores on the web could scale. Nobody knew if it would be Amazon to pull it off, sure, but I mean ostensibly why not? (Yes, yes hindsight being what it is…)

Apple — yeah the personal computer nobody fathomed but the immediate business use case for empowering accountants maybe should have been an easy logical next step. Probably why Microsoft scooped the makers of Excel so quickly.

Google? Organizing the world's data and making it searchable, a la the phone book, and then monetizing the platform and all the eyeballs (maybe they didn't think of that; maybe Wall Street forced them to) is just an ad play, scaled insanely thanks to the internet.

I dunno. I just think AGI is so many steps into the future compared to the previous examples that it truly seems unlikely, even if the payoff is basically infinity.


> Really? Selling goods online (Amazon) is not AGI. It didn’t take a huge leap to think that bookstores on the web could scale. Nobody knew if it would be Amazon to pull it off, sure, but I mean ostensibly why not? (Yes, yes hindsight being what it is…)

I don't think you remember the dot-com era. Loads of people thought Amazon and Pets.com were hilarious ideas. Cliff Stoll wrote a whole book on how the Internet was going to do nothing useful and how we were all going to keep buying stuff (yes, the books too) at bricks-and-mortar stores; it was rapturously received and got him into _Newsweek_ (back when everyone read that).

"We’re promised instant catalog shopping — just point and click for great deals. We’ll order airline tickets over the network, make restaurant reservations and negotiate sales contracts. Stores will become obsolete. So how come my local mall does more business in an afternoon than the entire Internet handles in a month?"


I agree with what you're saying, as I personally feel current AI products are almost a plugin or integration into existing software. It's a little like crypto, where only a small number of people were clamoring for it: a solution in search of a problem, while also being a demented answer to our self-made problems, like an overfull inbox or the treadmill of content production.

However, I think that because of the money involved and all of this being forced upon us, one of these companies will get a 1000x return. A perfect example is the Canva price hike from yesterday, or any and every Google product from here on out. It's essentially being forced upon everyone who uses internet technology, and someone is going to win while everyone else (consumers and small businesses) loses.


Imagine empowering accountants and all other knowledge workers, on steroids, drastically simplifying all their day to day tasks and reducing them to purely executive functions.

Imagine organizing the world's data and knowledge, and integrating it seamlessly into every possible workflow.

Now you're getting close.

But also remember, this company is not trying to produce AGI (intelligence comparable to the flexibility of human cognition), it's trying to produce super intelligence (intelligence beyond human cognition). Imagine what that could do for your job, career, dreams, aspirations, moon shots.


I'm not voting with my wallet; I'm just a guy yelling from the cheap seats. I'm probably wrong too. The VC world exists. Money has been made. Billions in returns. Entire industries and generations of people owe their livelihoods to these once-VC-backed industries.

If/when AGI happens, can we make sure it's not the Matrix?


> Risk and reward are intrinsically linked

There are innumerable ways to increase your risk without increasing your potential reward.


> please tell me how these companies that are being invested in in the AI space are going to make returns on the money invested? What’s the business plan?

Not a VC, but I'd assume in this case the investors are not investing in a plausible business plan but in a group of top talent, especially given how early-stage the company is. The $5B valuation is really the valuation of the elite team in an arguably hyped market.


A lot of these "investments" are probably in the form of credits to use on training compute from hyperscalers and other GPU compute data centers.

Look at previous such investments Microsoft and AWS have made in OpenAI and Anthropic.

They need use cases and customers for their initial investment of 750 billion dollars. Investing in the best people in the field is then of course a given.


It's not that complicated. Your users pay a monthly subscription fee, like they do with ChatGPT or Midjourney. At some point, they're hoping, AI gets so good that anyone without access is at a severe disadvantage in society.


Sometimes it's not about returns but about transferring wealth and helping out friends. Happens all the time. The seed money will get out, all the rest of the money will get burned.


The "safe" part. It's a plan to drive the safety scare into a set of regulations that will create a moat, at which point you don't need to worry about open source models, or new competitors.


The company that builds the best LLM will reap dozens or hundreds of billions in reward. It’s that simple.

It has nothing to do with AGI and everything to do with being the first-party provider for Microsoft and the like.


Staking the territory in a new frontier.


The VCs probably assume that a pivot to military/surveillance/propaganda applications is possible if/when AGI fails.


For at least some of the investors, a successful exit doesn't require building a profitable business.


I guess if they can get in early and then sell their stake to the next sucker, they'll make back their investment plus some multiple. Seems like a Ponzi scheme of sorts. But oh well — looking forward to the HN post about whatever SSI Inc. puts out.


> how [...] return on the money invested? What’s the business plan?

I don't understand this question. How could even average-human-level AGI not be useful in business, and profitable, in a million different ways (you know, just like humans, except more so)? Let alone higher-human-level, let alone moderately-superhuman level, let alone exponential level if you are among the first? (See Charles Stross, Accelerando, 2005, for how being first is not the end of the story.)

I can see one way for "not profitable" for most applications: if computing for AGI becomes too expensive, that is, if AGI-level performance is too compute-intensive. But even then that only eliminates some applications and leaves all the many high-potential-profit ones, starting with plain old finance and continuing with drug development, etc.

Open-source LLMs exist, just like lots of other open-source projects, which have rarely prevented commercial projects from making money. And so far they are not even trying for AGI. If anything, the open-source LLM becomes one of the agents inside the private AGI. But presumably $1 billion buys a lot of effort that the open-source LLMs can't afford.

A more interesting question is one of tradeoffs: is this the best way to invest $1 billion right now, from a returns point of view? But even that depends on how many billions you can round up and invest.



