In the last week, we’ve had no fewer than three different pieces asking whether the massive proliferation of data centers is a bubble, and though they at times seem to take AI’s inevitable value as the default position, they’ve begun to sour on it.
People immediately knew how the internet could help us, even during the dot-com bubble. Anyone who had used Google (or before that, Yahoo) would immediately fall in love with how it helped their lives. AI (LLMs)? Not so much.
The internet boom didn’t have the weird you’re-holding-it-wrong vibe either. Legitimate “it doesn’t help with my use case” concerns all too often get answered with choruses of “but have you tried this week’s model? Have you spent enough time playing with it and tweaking it to get something closer to what you want?” Don’t admit limits to the tech, just keep hitting the gacha.
I’ve had people say I’m not approaching AI in “good faith”. I say that you didn’t need “good faith” to see that Lotus 1-2-3 was more flexible and faster than tallying up inventory on paper, or that AltaVista was faster than browsing a card catalog.
Perhaps you are unaware that AI has solved the proteome. This was expected to be a 100-year project.
I’m aware of machine learning being used in all kinds of science, but those systems are not LLMs and therefore not the topic of discussion here.
Au contraire. The proteome was solved by transformers trained on genetic strings.
https://en.wikipedia.org/wiki/AlphaFold
A transformer model isn’t always an LLM, nor does a type of algorithm/data model/whatever being useful for one purpose mean it is equally useful for all other purposes.
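The distinction that last comment draws can be made concrete. A minimal sketch, assuming only numpy: the same single-head self-attention block becomes “LLM-style” when a causal mask hides future positions (autoregressive text generation) and “encoder-style” when every position attends to every other (the bidirectional setup used in structure-prediction models like AlphaFold). The function names and identity projections here are illustrative toys, not any real library’s API.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, causal=False):
    # Toy single-head self-attention with identity Q/K/V projections.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    if causal:
        # A language model masks out future positions (autoregressive).
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    return softmax(scores) @ X

X = np.random.default_rng(0).normal(size=(4, 8))
lm_style = self_attention(X, causal=True)        # decoder-style, as in LLMs
encoder_style = self_attention(X, causal=False)  # bidirectional, non-LLM uses
```

With the causal mask, position 0 can only attend to itself, so its output equals its input; without the mask it mixes in every other position. Same building block, different model class.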