Language is descriptive, not prescriptive.
If people use the term “AI” to refer to LLMs, then it’s correct by definition.
Not really, since “AI” is a pre-existing and MUCH more general term which has been intentionally commandeered by bad actors to mean a particular type of AI.
AI remains a broader field of study.
I completely agree. Using AI to refer specifically to LLMs does reflect the influence of marketing from companies that may not fully represent the broader field of artificial intelligence. It’s ironic that those who oppose LLM usage might end up sounding like the very bad actors they criticize if they also use the same misleading terms.
I don’t get to decide if the marketing terms used by the companies I hate end up becoming the common terms.
If I stubbornly refuse to use the common terms and instead only use the technical terms, then I’m only limiting the reach of my message.
OpenAI marketing has successfully made LLM one of the definitions of the term AI, and the most common term used to refer to the tech, in public spaces.
That’s where your role comes in, as someone who knows the correct term. I myself often teach the people close to me about tech and its terminology in my field. I don’t want to normalize using wrong terms in a technical discussion. It’s up to us either to teach what’s right, or to be comfortable with what’s already wrong and do nothing about it. Activists are educators as much as they are advocates.
This hype cycle is insane, and the gross psychology of the hype obscures the real usefulness of LLMs.
As a non-native English speaker, DeepL is useful for my local community (and for me). It all depends on how it’s implemented. Keeping an open mind, though, yeah, the extensive resource usage is bad for the earth, and I wish there were more optimization.
It doesn’t matter what you want, I’m just describing how language works.
If everyone says a word means a thing, then it means that thing. Words can have multiple meanings.
AI remains a broader field of study, an active field of study which tons of people are invested in, and they use AI to refer to the broader field of study in which they’re professionally invested.
No you’re not. And you’re not as smart as you think you are.
It’s not literally everybody, and you know it, and you also know that LLMs are not the entire actual category of AI.
That is beyond pedantry.
That is how language works. Word definitions are literally just informal consensus agreement. Dictionaries are just descriptions of observed usage. Not literally everyone needs to agree on it.
This isn’t some kind of independent conclusion I came to on my own; I used to think like you appear to, but then I watched some explanations from authors and from professional linguists, and they changed my mind about language prescriptivism.
If you say “AI” in most contexts, more people will know what you mean than if you say “LLM”. If your goal is communication, then by that measure “AI” is “more correct” (but again, correctness isn’t even applicable here).
People still know what LLMs are, and they know that it’s a subset of AI. If the internet is swamped with bots actively trying to set linguistic habits for marketing reasons, you’re not required to perpetuate and validate that.
Shills and goons are trying to make “AI” refer to LLMs specifically. It’s an ad campaign. You’re not getting paid to perpetuate this stupidity.
If people use [slur] to refer to [demographic] that does not make it correct by definition.
Linguistically correct, and morally correct, are not the same thing.
So are you saying that a slur (for Black people, for example) is linguistically “correct by definition” ? And it actually describes members of the demographic?
A slur is still a word.
I know you’re trying to trap me in some stupid gotcha, but idk what you think that’d prove.
What would you consider “linguistically correct” if not “follows grammar rules and conveys the intended meaning”?
If I say something absolutely heinous about your mother, does it stop being valid English just because it is morally reprehensible and fallacious? Of course not.
It’s partially correct, but AI doesn’t always mean LLM. Etymology is important here. Don’t normalize illiteracy.
This is how etymology works.
Do you think all the words we use today meant exactly the same thing 300 years ago?
No, people used them “incorrectly”, that usage gained popularity, and that made it correct.
What you call illiteracy is literally how etymology works.
Just to clarify, do you personally agree that LLMs are a subset of AI, with AI being the broader category that includes other technologies beyond LLMs?
I come from a technical background and have worked in AI to help people and small businesses, whether for farming, business decisions, or other needs. I can’t agree with the view that AI is inherently bad; it’s a valuable tool for many. What’s causing confusion is that ‘AI’ is often used to mean LLMs, which is inaccurate from a technical perspective. My goal is simply to encourage precise language to avoid misunderstandings. People often misuse words in ways that stray far from their original etymology. For example, in Indonesia we use the word ‘literally’ as it’s meant: in a literal sense, not figuratively, as it’s often misused in English nowadays. The word ‘literally’ in Indonesian would be translated as ‘secara harfiah,’ and when used, it means exactly as stated. Just like ‘literally,’ words should stay connected to their roots, whether Latin, Greek, or otherwise, because their original meanings give them their true value and purpose.
Depending on context, jargon and terminology change.
In this context, I’d agree that LLMs are a subset of technologies under the umbrella term “AI”. But in common English discourse, LLM and AI are often used interchangeably. That’s not wrong, because correctness is defined by the actual real-world usage of native speakers of the language.
I also come from a tech background. I’m a developer with 15 years’ experience, I work for a large company, and my job currently involves integrating LLMs and more traditional ML models into our products, because our shareholders think we need to.
Specificity is useful in technical contexts, but in these public contexts, almost everyone knows what we’re talking about, so the way we’re using language is fine.
You know it’s bad when someone with my username thinks you’re being too pedantic lol. Don’t be a language prescriptivist.
You said it yourself: almost. Not everyone knows or understands, so wouldn’t it be better to use the correct term instead of the wrong one? You say almost because we’re on Lemmy, and yes, most Fediverse software users are techies. But I have friends who talk about “AI,” saying things like ‘AI is going to take our jobs, it can do copywriting for me,’ and when I ask further, they’re actually talking about LLMs, which is not the same thing. You yourself know that’s wrong, since you work in the related field. When I hear that, I just tell them, “It’s an LLM, and LLMs are bla bla bla.” Whether they nod or not is on them, but at least they’ve been told the correct thing.
I accept being called a language prescriptivist in this case, because we’re here on Lemmy, most people are techies or nerds, and we’re discussing technology. In everyday conversation I’m not pedantic, but in technical contexts, precision matters.
This isn’t ‘whataboutism.’ I’m not opposing the substance of what’s being said; I’m pointing out how it’s being said. If we already know the correct term, why not use it? That’s not gatekeeping; that’s making the discussion clearer for everyone. As I already said in my previous comment, as an activist, your role is also to be an educator. Without education, activism turns into noise.
I think this is a good place to end. I agree with the substance of what’s being said, and you’ve already acknowledged my earlier point about where LLMs fit within the AI field. Since saying “AI is bad” as activism should also involve educating people with the correct terms, I see this as a technical context rather than a public one. I respect your view, since you’ve backed it up with arguments. Thanks.