As always with Zitron, grab a beverage before settling in.
Like Zitron says in the article, we’re 3 years into the AI era and there is not a single actually profitable company. For comparison, the dot-com bubble was about 5-6 years from start to bust. It’s all smoke and mirrors and sketchy accounting.
Even if/when the AI hype settles and perhaps the tech finds its true (profitable) calling, the tech itself is still insanely expensive to run and train. It’s going to boil down to Microsoft and/or X owning nuclear power plants, and everyone else renting usage from them.
People are making money in AI, but like always, it’s the founders and C-suite, while the staff are kicked to the curb. It’s all a shell game, and everyone who has integrated AI into their lives and company workflows is gonna get the rug pulled out from under them.
and there is not a single actually profitable company
This is a little misleading, because obviously FAANG (and others) are all building AI systems, and are all profitable. There are also tons of companies applying machine learning to various areas that are doing well from a profitability standpoint (mostly B2B SaaS that are enhancing extant tools). This statement is really only true for the glut of “AI companies” that do nothing but produce LLMs to plug into stuff.
My personal take is that this is just revealing how disconnected from the tech industry VCs are, who are the ones buying into this hype and burning billions of dollars on (as you said) smoke and mirrors companies like Anthropic and OpenAI.
The thing is, companies like Google, Facebook, Amazon and Microsoft are already profitable, so AI could lose them huge amounts of money, with no real meaningful benefit to user retention or B2B sales, and the companies as a whole would still be profitable. It could be a huge money black hole, but they continue to chase it out of unjustified FOMO and in an attempt to keep share prices high through misplaced investor confidence.
Apple’s share price has taken a pretty big hit from the perception that they’re “falling behind” on AI, even if they’ve mostly just backed away from it because users didn’t like it when it was shoved in their face. Other companies are probably looking at that and saying “hey, we’d rather keep the stock market happy and our share prices high rather than stop wasting money on this”.
The FAANG companies that are in on the LLM hype are still lighting money on fire in their LLM endeavors, so I fail to see how the point that they may be otherwise profitable is relevant.
I should reframe what I said: there is not a single profitable AI-focused company. There are tons of already profitable companies that are now deeply embedding AI into everything they do.
This is an interesting take, in that only doing one thing but doing it well has historically been how businesses thrived. This vertical integration thing and startups looking to be bought out instead of trying to make it on their own (obviously, VCs play a role in this) have led to jacks of all trades.
I don’t think it’s going to come down to these absurd datacentres. We’re only a few years off from platform-agnostic local inference at mass-market prices. Could I get a 5090? Yes. Legally? No.
We’re only a few years off from platform-agnostic local inference at mass-market prices.
What makes you confident in that? What will change?
There are already large local models. It’s a question of having the hardware, which has historically gotten more powerful with each generation. I don’t think it’s going to be phones for quite some time, but on desktop, absolutely.
For business use, laptops without powerful graphics cards have been the norm for quite some time. Do you see businesses deciding to change to desktops to accommodate the power for local models? It seems pretty optimistic to think that laptops are going to be that powerful in the next 5 years. The advancement in chip capability has dramatically slowed, and to put these models in laptops, the chips would need to be far more power efficient as well.
Keywords: NPU, unified RAM
Apple is doing it, AMD is doing it, phones are doing it.
GPUs with dedicated VRAM are an inefficient way of doing inference. They’ve been great for research into what type of NPU might work best, but for LLMs that question has already been answered. The current step is achieving mass production.
5 years sounds realistic, unless WW3.
For the security tradeoff of sensitive data not heading to the cloud for processing? Not all businesses, but many would definitely see value in it. We’re also discussing this as though the options are binary … models could also be hosted on company servers that employees VPN into.
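To make that non-binary point concrete: servers like llama.cpp’s llama-server or vLLM already expose an OpenAI-compatible HTTP API, so from the employee’s side, a company-hosted model could look roughly like this. The hostname, port, and model name below are made-up placeholders, not anything real:

```python
# Hypothetical sketch: querying a company-hosted, OpenAI-compatible
# inference server (e.g. llama.cpp's llama-server or vLLM) over the
# internal network / VPN. Hostname and model name are placeholders.
import requests

resp = requests.post(
    "http://llm.internal.example.com:8000/v1/chat/completions",
    json={
        "model": "company-llama-8b",  # whatever the server was launched with
        "messages": [
            {"role": "user", "content": "Summarize this contract clause: ..."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

The sensitive data never leaves the corporate network, which is the whole point.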
I have to think that most people won’t want to do local training.
It’s like Gentoo Linux. Yeah, you can compile everything with the exact optimal set of options for your kit, but at huge inefficiency, when most use cases might be mostly served by two or three pre-built options.
If you’re just running pre-made models, plenty of them will run on a 6900XT or whatever.
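And “running pre-made models” really is only a few lines these days. Here’s a rough sketch with llama-cpp-python, assuming you’ve already downloaded a quantized GGUF build that fits in VRAM (the file path below is just a placeholder):

```python
# Minimal sketch of local inference with llama-cpp-python.
# The model path is a placeholder; any quantized GGUF file works the same way.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU if they fit
    n_ctx=4096,       # context window size
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this meeting note: ..."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

An 8B model at 4-bit quantization is roughly 5 GB on disk, which fits comfortably in a 6900XT’s 16 GB of VRAM. Training or fine-tuning locally is a different story, which is kind of the point above.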
I don’t expect anyone other than … I don’t even know what the current term is … geeks? batshit billionaires? to be doing training.
I’m very much of the belief that our next big leap in LLMs is local processing. Once my interactions stay on my device, I’ll jump in.
Oh god, another AI hot take 🙄
Yes, OpenAI and Cursor both are waaaaayyyy overhyped & overvalued.
So were pets.com and yahoo.com back in 1999. But that didn’t stop FAANG from reaching honestly trillion-dollar valuations, because while there was breathless Internet hype, the Internet was about to completely change the way the world works.
AI today is like the Internet in 1999.
I’ve seen this argument way too often, and it is completely pointless. The argument that this will succeed because something in the past succeeded is exactly the same as arguing it will fail because something in the past failed.
If you want to draw the conclusion that they’re similar enough to use history in prediction, you’ll have to show that they’re similar and make a case for why those similarities are relevant.
I haven’t seen anyone making this argument bother with this exercise, but I have seen people that actually look at the economics discuss why they’re different animals.
There is also the tech itself.
- internet - connect everything together across vast distances. Obvious limitless possibilities.
- smart phones (you didn’t mention them here, but they’re the other one people use for this argument most frequently) - anything a computer can do in the palm of your hand.
- LLMs - can do some powerful stuff like rifle through and summarize text, or generate text, or generate code… except you can’t really trust them to do any of these things accurately, and that is a fundamental aspect of how the technology works rather than something that can be fixed, so they can’t be used responsibly for anything critical.
People immediately knew how the internet could help us, even during the dot-com bubble. Anyone who had used Google (or before that, Yahoo) would immediately fall in love with how it helped their life. AI (LLMs)? Not so much.
The Internet boom didn’t have the weird you’re-holding-it-wrong vibe either. Legit “it doesn’t help with my use case” concerns seem to all too often get answered with choruses of “but have you tried this week’s model? Have you spent enough time trying to play with it and tweak it to get something more like you want?” Don’t admit limits to the tech, just keep hitting the gacha.
I’ve had people say I’m not approaching AI in “good faith”. I say that you didn’t need “good faith” to see that Lotus 1-2-3 was more flexible and faster than tallying up inventory on paper, or that AltaVista was faster than browsing a card catalog.
Perhaps you are unaware that AI has solved the proteome. This was expected to be a 100-year project.
I’m aware of machine learning being used in all kinds of science, but it is not LLMs and therefore not the topic of discussion here.
Au contraire. The proteome was solved by LLM transformers trained on genetic strings.
A transformer model isn’t always an LLM, nor does a type of algorithm/data model/whatever being useful for one purpose mean it is equally useful for all other purposes.
I was at a startup in 1999 … in Seattle. I actually ducked out because it was clear that about all they could do was arrange outings for the staff.
Ah, yes, Yahoo!, the elephant graveyard of good ideas.