The Verge article is missing a setting under Apps > Mail > Summarize to disable.
My worry with the current generation of LLMs branded as "AI" is that at some point in the future, long after LLMs become fully obsolete, a new kind of AI tech will come along and it will be difficult to brand, because "AI" was already used (see LLMs) and has negative snake-oil connotations.
It's like humans calling the period they live in "contemporary". It makes as little sense for stone age people to do it as it did for medieval people and as it does for us. We need to start thinking more about the future, and not the 10-years-from-now future but the 300-years-from-now future.
AI has survived as a name for a few decades now.
My prediction, based on no evidence or theory other than "the future will be like today, only in the future" is that AI will be called AI 300 years from now.
Do you feel better now?
“AI” is a vague non-term. In a way, it’s not unlike “DEI”. For some it denotes a useful idea and a mark of progress, for others it is a manifestation of doom, and in practical reality it is a trigger word to obtain investment, serve corporate optics and increase shareholder value. No one agrees on what “AI” means and most people do not have a good working definition, other than “hype alias for ML”.
ML is not a perfect term for marketing use, either. It is obscure, and to many people “learning” hints at some sort of human-like education process (though it is an industry term that does not imply anything of the sort).
I believe the most correct term to use in marketing and branding is… well, none. It is an implementation detail that presumably makes the product better at addressing certain problems or otherwise improves the customer's life. Talk about that instead.
The semantic debate over the label "AI" continues. While concerns about LLMs or other learning algorithms being haphazardly inserted into a product might have some merit, I think they ultimately fail to recognize that the source of the problem isn't bad use of labels; it is fundamental to label usage itself. The Ship of Theseus conundrum has some lessons applicable here.
Really, it's easy to point a label at tangible, visible stuff, but pointing a label at more ephemeral stuff creates all manner of issues.
First, I'd be willing to say the following statement is true: humans invented object identities and invented the labels, both the labeling machinery and the labels pointing at the identities they want to label. (Well, I guess Darwin's evolution invented the machinery, but humans willfully made the first use of it for that purpose.) If humans choose to make labels and define where labels are applicable, there is never going to be an objectively true, valid, standard sense of a word, except by convention, and even that rests on 'objective' quicksand, unfortunately.
Beyond an argument about linguistics and the philosophy of language, that still leaves the substance of intelligence itself. While LLMs have clear, huge gaps and inadequacies, I suspect they are not far off from human capabilities via algorithm adjustments, adding architecture modularly (like reasoning), and ultimately through distillation and transfer learning, plus new techniques. Because of that, even dumb, haphazardly shoehorned LLMs might surprisingly be closer to intelligence (or a core component thereof) than even the most ardent semantic purists would grant (say, the archnemesis of LLMs, Gary Marcus, PhD, or the famous naysayer and contrarian Yann LeCun).
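Since distillation gets name-dropped as one of those techniques, here's a minimal illustrative sketch of what it actually optimizes (function names and values are mine, not from any particular library; real training would use a framework like PyTorch): a small student model is trained to match a larger teacher's temperature-softened output distribution, typically by minimizing the KL divergence between the two.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; higher T produces a softer distribution
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions;
    # the student minimizes this to mimic the teacher's "dark knowledge"
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
# A student that matches the teacher exactly has zero loss
assert distillation_loss(teacher, teacher) < 1e-12
# A mismatched student incurs a positive loss
assert distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0
```

The point of the temperature is that near-zero teacher probabilities still carry information about how the teacher generalizes, which a plain hard-label objective would throw away.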
300 years on the exponential singularity timescale might actually be 5 to 10 years.
But maybe it will be akin to TV standards for black before TVs became capable of true black: grey was black until OLEDs, and now the "black" of previous TV formats (really grey) can retroactively be reclassified as grey. Or how about the use of the word "unlimited" when it comes to cellular data? Those naughty marketers and their definition-bending usage of words.
No need to worry; we've gone through this several times before:
https://en.m.wikipedia.org/wiki/AI_winter
I think that's the least of the problems; if there's anything we're good at, it's adding superlatives to product names. You could call it Hyperintelligence or Quantum Intelligence. Although you have a point with the snake-oil connotations, which is hilarious because they exist only because of overpromising. The tech that LLMs are today is absolutely mind-blowing and not something I could have predicted 5 years ago, in the days of GPT-2.
Apple is exceptionally good at this; they have their own names for everything, from Retina display to spatial computing, including calling AI "Apple Intelligence". But this is popular outside of Apple too, e.g. some years ago we decided to call servers "the cloud".
That new AI tech might be branded as something else but I doubt it would be a problem.
My concern with Apple Intelligence is Apple devs being forced to shoehorn AI into the UX of a wide variety of apps, I’m not looking forward to the fallout from the added complexity.
I suspect disabling this is absolutely the right thing to do.
But I'm curious, has anyone really made any effort to test this so-called AI first to see if it's at all useful, or lives up to any level of expectation?
Or is there some a priori evil element to justify this (Tim Cook slurping up all your data and using it for training and advertising without consent or opt out) that I don't know about?
Unfortunately it seems pretty half-baked and rushed out just so they can check the box without being as useful as it could be.
Here is the best commentary that I've read on the subject from someone who works regularly in generative AI: https://xeiaso.net/blog/2025/squandered-holy-grail/
This is such an infuriating problem with these companies. Generative AIs hold promise, but they all rushed to inject these features, which are in their infancy, directly interfering with the user experience.
In iOS, if you search in Settings for something in the realm of Apple Intelligence, the results will be dominated by Apple Intelligence entries, and there's a good chance that's not what you want. On Amazon.com, they threw their entire review history into Rufus, and the results are often just plain outdated and wrong. It's still garbage in, garbage out with generative AI, and these companies have completely ignored that.
Well, you're used to a mature stable UI, but this is a fairly new feature. It might be the next big thing, or it might be the next Clippy/Bob.
Hold on, because there's a ton of money riding on massive adoption, whether you want to adopt it or not.
It’s currently not all that useful, but also never gets in my way when I don’t need it, so I don’t really see the point in disabling it.
You also don’t need to entirely disable it. For example you can just turn off notification summaries, the only benefit of which (it seems) is that sometimes they’re so wrong that it’s a little bit funny.
You forget that Apple Maps was crap when it first came out, up to the point where people were questioning why Apple was bothering in the first place.
Early adopters should recognize first versions of forward-looking tech are going to be less-than-optimal, and that the warts will eventually fall off.
Y’all around here advocate for shipping whatever you have, but when a larger org applies your principles, everyone is in a tizzy.
I don't think anyone here advocates for "just ship it" when you're talking about breaking updates to a system used by a significant chunk of the world's population.
What is a breaking change with the update that introduced Apple Intelligence?
How iPhone notifications work. These are relied upon by humans, and they started displaying incorrect facts for news apps.
Incorrect facts in the news? Hasn't this been happening for years?
It's mostly useful as a hands-free shortcut to get replies from ChatGPT. Beyond that, it's useless.
All the promises of integrating deeply into the OS and having APIs directly into apps, so you can use natural language to get any app to do stuff for you, are vaporware.
So much potential, but none of it delivered yet. I hope that will change soon.
Interesting. Thanks for the information.
How to turn on Apple Intelligence on EU iPhone?
tl;dr - Settings > Apple Intelligence & Siri > Disable top toggle labeled "Apple Intelligence"
I would be surprised if most iPhone users couldn't figure this out themselves so my personal hunch is this article is mostly clickbait for anti-AI folks who already have these features disabled. In particular, because the only way to disable it is to have opted in to it in the first place and that flow is very similar to the disable flow.
I would be surprised if more than 1% of iPhone users go through all of the settings.
You and I might be knowledgeable enough to realize that we can turn such options off, but the vast, vast, vast majority of people are not.
I'll just reiterate: you had to find the setting at least once (because these features are opt-IN) for the need to disable it to ever occur.
Sure. Lots of people are too tech-unsavvy to figure that out. But those same people won't be googling how to do it. They'll just give up and ask their kid to do it for them or something.
So many opt-ins are deceptively worded (e.g. "would you like this personalized just for you?") or deceptively bundled ("if you want feature Y, you need to enable X") that I could see people opting in without fully knowing what they're doing.
The deliberate grouping of Apple Intelligence & Siri together in one menu makes me feel like if I disable AI with that main switch at the top, then I will also disable Siri, and I am an IT professional who understands these things.
They don't need to group the titles together like that; it feels deceptive. I feel like the average user will be put off disabling AI if they still want to use Siri the way they always have.
Yeah, but cynically, in today's age people put questions into a search engine or an LLM instead of reading docs/manuals, let alone reading what's on their screen… so this article will be useful for them until it gets out-SEO'd by link farms.
Let alone reading what’s on their screen when they enable features, promptly forgetting they enabled it, thus asking how to disable them…
Ok enough. Have a good day :)