happy_dog1 6 hours ago

This April 3rd Pew Research study is some fairly interesting reading:

https://www.pewresearch.org/internet/2025/04/03/how-the-us-p...

They find that the general public is overall much more skeptical that AI will benefit anyone, much more likely to view it as harmful, and much less excited about its potential than "AI experts". A majority of Americans are more concerned than excited. Interestingly, there is also a large gender gap: women are much less likely than men to view AI favorably, to use it frequently, or to be excited about its potential.

There is some research to suggest that consumers are less likely to buy a product and less likely to trust it (less "emotional trust") when AI is used prominently to market it:

https://www.tandfonline.com/doi/full/10.1080/19368623.2024.2...

So I think the data suggests that while there is excitement around AI, overall consumers are much less excited about it than people in the industry think, and that it may actually impact their buying decisions negatively. Will this gap go away over time? I don't know. For any of you working in tech at the time, was there a similar gap in perceptions around the Internet back in the days of the dot-com bubble?

The other problem as pointed out is that MANY things are labeled as AI, ranging from logistic regression to chatbots, and probably there is more enthusiasm around some of these things than others.

  • swatcoder 6 hours ago

    Prior to the dot-com bubble itself, hype for the growing potential of the internet was modest and mostly in line with organic adoption and exploration. People at large weren't anticipating a revolution. They were just enjoying the growing array of new products and opportunities that were appearing.

    During the dot-com bubble, inasmuch as it represented a turning tide, this trickle had reached a tipping point and we witnessed a tsunami of innovative products that consumers were genuinely fascinated by. There were just too many of them for the market to sustain them all, and a correction followed, as you would expect.

    This AI story is basically the opposite, much like the blockchain story. Many investors and some consumers who have living or borrowed memory of the dot-com bubble or the smartphone explosion really really want another opportunity to cash in on an exponentially expanding market and/or live through a new technological revolution, and are basically trying to will the next one into existence as soon as possible, independent of any organicity or practicality.

    In contrast to blockchain hype, maybe it'll work here. Maybe it won't. But it's fundamentally a different scenario from the dot-com bubble either way.

    • JohnFen 5 hours ago

      > are basically trying to will the next one into existence

      I think this hits the nail on the head. At least, it's the only explanation I've heard that makes any sense.

      • saltcured 5 hours ago

        What is the venture capitalist rhyme for "fake it until you make it" but starts with "hype it..."?

        • NoFunPedant 4 hours ago

          "Market the hype till the market is ripe"?

  • ARandumGuy 6 hours ago

    I find it interesting that a lot of other comments are saying "HN users are a bubble, the public is actually really excited about AI", when the research indicates that the general public is even less interested in AI than HN is.

    It's fine to express your opinion on AI, whether positive or negative. It's even fine to share anecdotes about how other people feel. Just don't say that's how "most people" feel without providing some actual evidence.

    • UtopiaPunk 2 hours ago

      I think HN is a space where practically everyone has a grasp of what AI is and is not capable of, and of what tools could theoretically exist in the near future. I also think that HN is a space where there is not a consensus on whether AI is "good" or "bad," and there is a lot of discourse on the subject.

      In my experience, this makes HN probably the most pro-AI space around. Most people in my life feel more negatively about AI, without a lot of defense for it (even if they do use it). The only space in my life that is more pro-AI than HN is when people from the C-suite are speaking about it at work meetings :/

    • spacemadness 4 hours ago

      How is HN an anti-AI bubble? People post their AI projects here all the time. That's not something the general public is doing.

    • anthonypasq 5 hours ago

      ChatGPT has 400 million weekly users and you're under the impression most people don't want AI?

      • ARandumGuy 5 hours ago

        I believe most people don't want AI because I read the Pew Research report linked by the parent comment, which indicated most non-experts don't want AI. That report has a pretty large sample size, the methodology seems sound, and Pew is an organization that's historically pretty good at studying this sort of thing.

        Obviously one report is not the end of the discussion. And if more research is done that indicates that most people really are interested in AI, I'll shift my beliefs on the matter.

        I was interested in that 400 million weekly user number you posted, so I did a little digging and found this source [1] (I also looked through their linked sources and double checked elsewhere, and this info seems reasonably accurate). It seems like that 400 million figure is what OpenAI is self-reporting, with no indication how that number is being calculated. Weekly user count is a figure that's fairly easy to manipulate or over-count, which makes me skeptical of the data. For example, is this figure just counting users that are directly interacting with ChatGPT, or is it counting users of services that utilize the ChatGPT API?

        In addition, someone can use ChatGPT while having a neutral or negative opinion on it. My linked source [1] indicates that around 10 million people are actively paying for a ChatGPT subscription, which is a much more modest number than 400 million weekly users. There clearly are a lot of people who use and like AI, but that doesn't mean the majority of the population feels positively about it.

        [1]: https://backlinko.com/chatgpt-stats

        • hollerith 4 hours ago

          I use an AI chat service, but would prefer that research and investment that might yield more powerful AIs be banned. Maybe that is what the survey respondents meant when they said that they don't want AI.

      • no_wizard 5 hours ago

        How many are paying?

        That's the ultimate test: how many users will pay for something like this?

        I also have usage pattern questions but I don’t think OpenAI publishes much data as to how their platform is most commonly used

      • habinero 5 hours ago

        Absolutely. People using it to write shitposts and spam and a first draft of something is one thing, but "fun toy" is not the same thing as "sea change"

  • eszed 5 hours ago

    > For any of you working in tech at the time, was there a similar gap in perceptions around the Internet back in the days of the dot com bubble?

    I wasn't drawing a paycheck from tech at the time, but I was a massive nerd, and from my recollection: Yes, absolutely. Dialup modems were slow, and you only had The Internet on a desktop computer. Websites were ugly (yes, the remaining 1.0 sites are charming, but that's mainly our nostalgia speaking), and frequently broke. It was (or could be) expensive: you had to pay for a second phone (land!) line (or else deal with the hassle of coordinating phone calls), and probably an "internet package" from your phone company, or else pay by the minute to connect; and, of course, rural phone providers were slow to offer any of those options. Commerce, pre-PayPal, was difficult - I remember ordering things online and then mailing a paper check to the address on the invoice!

    Above all, we underestimate (especially in fora like this) how few people actually were online. I don't remember exact numbers at any particular time, but I remember being astonished a few times - the 'net was so ubiquitous in my and my friends' lives that my reaction was "What do you mean, only a minority of people have ever used the internet?" For people who weren't interested in tech (the vast majority), seeing web addresses and "e[Whatever]" all over the place was mainly irritating.

    Those elements and attitudes are certainly analogous to AI Hype today. Whether everything else along that path will turn out roughly the same remains to be seen. From my point of view, looking back, the most-hyped (or maybe just most-memorable) 1.0 failures were fantastic ideas that just arrived ahead of their time. For instance, Webvan = InstaCart; Pets.com = Chewy; Netbank = any virtual bank you care to name; Broadcast.com = any streaming video company you care to name; honorable mention: Beenz (though this might be controversial) was the closest we ever came to a viable micro-payments model.

    The necessary infrastructure for (love it or hate it) a commercialized web was the smart-phone, and 'always on' portable connectivity. By analogy, the necessary infrastructure for widespread, democratized AI (whether for good or for ill) may not yet exist.

  • psadauskas 3 hours ago

    I'm not skeptical about AI. I'm skeptical that the companies behind AI will deliver a product that makes my life better. Anything genuinely useful will be a toy or get bought and shut down, while the ones that survive will steal my personal data and serve me ads.

  • lukev 5 hours ago

    I think the concept that AI can be used to sell itself -- that a product is more valuable simply because it incorporates AI -- has to end, and soon.

    If you can actually use it to build a better product on its own terms, great. But as has ALWAYS been true, a product has to actually be good.

    • the_snooze 5 hours ago

      >I think the concept that AI can be used to sell itself -- that a product is more valuable simply because it incorporates AI -- has to end, and soon.

      I can't help but think of the iPhone 16 series's top-line marketing: "Built for Apple Intelligence." In practice, the use cases have been lackluster at best (e.g., Genmoji), if not outright garbage (e.g., misleading notification summaries).

      I feel like a lot of AI use cases are solutions looking for a problem, and really sucking at solving those problems where the rubber meets the road. I can't even get something as low-stakes and well-bounded as accurate sports trivia and stats out of these systems reliably, and there's a plethora of good data on that out there.

  • more-nitor 6 hours ago

    idk have you ever been stuck on some AI chatbot?

    some credit card companies have a botched chatbot process: "lost/forged credit card report" and "talk to a person" are essential support flows, but they require you to enter your PIN to get through to them

    (if the request for a new credit-card is faked, you're out of luck)

  • kranke155 6 hours ago

    Incredible. Thanks for sharing.

azan_ 7 hours ago

In articles like this I'm always surprised that the author did not take 5 minutes to think that, well, maybe I'm living in a bubble and there are lots of people who are actually excited about AI.

  • nritchie 6 hours ago

    I wonder the complete opposite. On Hacker News, people are excited about AI. Outside this bubble, in the real world, less so.

    • uh_uh 6 hours ago

      I read more sceptical takes about AI on Hacker News than anywhere else (since I stopped following Gary Marcus, at least). My hunch is that some people here might feel professionally threatened by it, so they want to diminish it. This is less of an issue with some of the 'normies' that I know. For them AI is not professionally threatening; they use it to translate stuff, ideate about cupcake recipes, use it as a psychologist (please don't shoot the messenger), or help them plan lessons for teaching kids.

      • JohnFen 5 hours ago

        > My hunch is that some people here might feel professionally threatened about it so they want to diminish it.

        I don't think it's this. At least, I don't see a lot of that. What I do see a lot of is people realizing that AI is massively overhyped, and a lot of companies are capitalizing on that.

        Until/unless it moves on from the hype cycle, it's hard to take it that seriously.

      • habinero 5 hours ago

        Speaking as a software engineer, I'm not at all threatened by it. I like Copilot as fancy autocomplete when I'm bashing out code, but that's the easy part of my job. The hard part is understanding problems and deciding what to build, and LLMs can't do that and will never be able to do that.

        What I am annoyed by is having to tell users and management "no, LLMs can't do that" over and over and over and over and over. There's so much overhype and just flat out lying about capabilities and people buy into it and want to give decision making power to the statistics model that's only right by accident. Which: No.

        It's a fun toy to play with and it has some limited uses, but fundamentally it's basically another blockchain: a solution in search of a problem. The set of real world problems where you want a lot of human-like writing but don't need it to be accurate is basically just "autocomplete" and "spam".

        • uh_uh 5 hours ago

          I disagree with the characterisation of AI as "another blockchain: a solution in search of a problem". The two industries have opposite problems: crypto people are struggling to create demand, AI people are struggling to keep up with demand.

    • johnfn 6 hours ago

      HN is a highly technical audience, and AI is showing the most benefit on highly technical tasks, so it seems logical to me that HN would be more excited than "the real world". (What is the real world, btw? Do people on HN not exist in the real world?)

      • bigstrat2003 6 hours ago

        > AI is showing the most benefit on highly technical tasks

        It must be truly abysmal everywhere else then, because it doesn't show much value on highly technical tasks when I try.

      • Conscat 6 hours ago

        My sister, who is a pretty technical kinesiology PhD student, does not know how to input Alt+F4 and insists that it's esoteric knowledge. That's a litmus test for how out of touch HN users may be with the way normal people use computers.

      • paulkrush 6 hours ago

        I met a non-technical woman who uses it all day long to help manage a landscaping business. That was a data point for me.

    • oytis 6 hours ago

      Anecdotally, all of my non-tech friends seem to be using ChatGPT much more than I do.

    • jhickok 6 hours ago

      Is that true? I have three kids now, two of them in high school, who are perhaps more AI-savvy than me (for both good and bad). I think the article, like my limited professional view, is informed by software dev, IT infrastructure, and enterprise technology. I think a lot of younger people are happily plugging AI into their lives.

    • coder543 6 hours ago

      ChatGPT is the number one free iPhone app on the US App Store, and I'm pretty sure it has been the number one app for a long time. I googled to see if I could find an App Store ranking chart over time... this one[0] shows that it has been in the top 2 on the US iPhone App Store every month for the past year, and it has been number one for 10 of the past 12 months. I also checked, and ChatGPT is still the number one app on the Google Play Store too.

      Unless both the App Store and Google Play Store rankings are somehow determined primarily by HN users, then it seems like AI isn't only a thing on HN.

      [0]: https://app.sensortower.com/overview/6448311069?tab=category...

    • brandall10 6 hours ago

      It's because we're excited about the possibilities. It's potentially revolutionary tech from a product perspective. Some claim that it increases their speed of development by a not insignificant amount.

      The average consumer does not appear to be particularly excited about products w/ AI features though. A big example that comes to mind is Apple Intelligence. It's not like the second coming of the iPhone, which it should be, given the insane amount of investment capital and press in the tech sphere.

    • eru 6 hours ago

      I don't know, I know many people (including non-technical people) that use a lot of the chatbots. (And I even heard some parents at the playground talk about it to each other. Parents that I didn't know, it was a random public playground.)

      Not sure if they are 'excited', but they are definitely using it.

      Lots of interns and students also use the bots.

    • firstplacelast 6 hours ago

      I was at a get-together last weekend with mostly non-tech friends and the subject was brought up briefly. Seemed to be a fair amount of excitement and use by everyone in the conversation, minus one guy who thought it was the "devil"...only slightly joking.

      • falcor84 6 hours ago

        If I were to write a "hard" sci-fi story of how the devil might take over the world in the near future, AI would be my top choice, and it would definitely fit with The Usual Suspects' "The greatest trick the devil ever pulled was convincing the world he didn't exist".

    • scotty79 5 hours ago

      The real world is made of bubbles.

  • ksec 6 hours ago

    Exactly. People are actually paying to use ChatGPT: 10 million subscribers, plus 1 million on Business and Enterprise plans. It's number 1 in Productivity downloads on the App Store. My 10-year-old nephew uses ChatGPT for all sorts of things and told me the whole class is using it. I have heard a few real-life conversations about ChatGPT being what Google (as in the search engine) should have been all along.

    And these people don't know a thing about C, Java, CPUs or RAM. They are not tech people.

    Over the decades, the moment I hear non-tech people in public having real-life conversations about a piece of tech and being somewhat enthusiastic about it is the moment that piece of tech has reached escape velocity. And it will go mainstream. And somewhat strangely, I only started using ChatGPT more because every non-tech person around me was starting to use it. And they use it much more than I do.

    Just like people laughed about the "smartphone" in the early iPhone era. Lots of tech people, including (I believe) MKBHD, only got their first smartphone with the iPhone 4, and most consumers were even later. Meanwhile, I had watched the iPhone introduction keynote a dozen times before the thing even shipped. The adoption curve of any tech will never be linear.

    The example pointed out at the start of the article is somewhat bizarre. AWS is only pausing colo. Apple Intelligence is more Apple's fault than AI's. Intel (or PC makers) not selling AI-enhanced chips is because consumers don't buy AI hardware, they buy AI functions. And so far nothing on Windows seems to be AI-enhanced in a way that specifically requires an AI-capable Intel CPU.

    And I don't even have to be pro-AI or an AI optimist to see all that.

  • SrslyJosh 6 hours ago

    Amazing. Presented with a study that found that the general public isn't excited about AI-enshittifying everything around them, you ask "What if people who aren't excited about AI are in a bubble?"

    • azan_ 5 hours ago

      I think you should read the study before jumping to this conclusion. The fact that people are not excited about incorporating AI in a shitty way into some apps does not imply that people are not excited about AI.

  • awkward 6 hours ago

    AI excitement is all supply side. Lots of people are excited to automate their own labor and smooth out production. Very few people want to accept raw AI generated slop.

    That isn't pure doomerism - there's plenty of room for AI assist, and people like using AI experiences themselves. AI as a product is here to stay, but the second order of products openly using AI is showing its limits.

  • EA-3167 6 hours ago

    They probably looked at the biggest player in the game, OpenAI, losing money hand-over-fist and concluded that a lack of demand must play a role.

    And they're right. They cited consumer research to show the ambivalence of consumers towards these products as well.

  • tayo42 6 hours ago

    What are people excited about right now?

    • saidinesh5 6 hours ago

      Google Gemini for what used to be Amazon Alexa tasks, ChatGPT image filters (that Studio Ghibli one?), YouTube AI channels putting out content like "if Danny DeVito was the Little Mermaid" or "if Linkin Park sang song X", etc.

      • spacemadness 4 hours ago

        The novelty wears off quick, though. I think it's really technically interesting and fascinating that these models can produce what they produce. I love learning about them. But as far as the videos themselves and such, even with that interest, I know that whoever produced the video didn't do much. That doesn't make me feel really engaged with it. It feels discardable. I was watching a claymation dark fantasy piece someone put together recently, shot on their iPhone, which required a lot of work. They are an amateur, but did a good job. And I felt a lot more engaged with it in its jankiness than any of the AI produced videos I've seen. I still think about it from time to time. All the AI entertainment is momentarily interesting at best but I don't think about it much afterward.

      • cruzcampo 6 hours ago

        All of those feel like gimmicks imo. Fun, sure. But exciting? Revolutionary?

        • saidinesh5 5 hours ago

          I mean those were the items my non technical friends shared with me so far...

          Personally, I was able to get mockups of interior designs based on the photo of a building under construction using ChatGPT - this would've cost me both time and money if I had gone to a real designer.

          Gemini has been summarising meetings I could not attend (scheduling conflicts, etc.) and saving me from hours of watching meeting recordings.

          I was really skeptical because of the horrible results I had 2 years ago with Copilot and ChatGPT, but things have improved drastically. To the point that it's already empowering certain people/jobs while having the opposite effect on others.

          Is it perfect? Nope. The mockups did have weird glitches. But they were 75% there and good enough for the task I wanted. The meeting notes were as good as a real human.

          So it's definitely eroding more of these kinds of jobs and so we are

        • justlikereddit 5 hours ago

          I like to use LLMs for rude poetry.

          No one is going to get a billion dollar investment due to that. It's why all the corporate speak and marketing is harping on about productivity, robot takeover and deus ex machina in corporate language.

          Normal people will use it for creative writing aid, scrapbooking and other extremely unprofitable non-technical stuff.

      • SrslyJosh 6 hours ago

        None of those things seem useful, and I'm including the Alexa tasks.

    • johnfn 6 hours ago

      I'm pretty excited about Cursor.

    • vntok 6 hours ago

      Having a smart-but-no-smartass intern by my side, that is always eager to help, that is at the very least superficially knowledgeable about most things, that autonomously gets better at absolutely everything non-physical (yet), that is loyal yet immediately replaceable should I get bored with it or should a better intern pop up overnight (literally happened yesterday with qwen3), that never tires or gets annoyed at me.... well that's pretty exciting.

      • SrslyJosh 6 hours ago

        In other words, a slave. =)

        • vntok 5 hours ago

          What a weird and dehumanizing thing to say. Slaves are definitely people; a machine definitely isn't.

          Moreover, slaves aren't productive over the medium/long term. You don't get useful and lengthy performance from raw coercion, the same way you don't get truthful and actionable intelligence from torturing prisoners.

          So no, an LLM has very little in common with a slave.

kstrauser 6 hours ago

This is called "begging the question".

Obviously lots of people like it, especially when they don't specifically think of it as AI.

"Do you want an AI to filter all your information?" "No way!"

"Do you want Google to summarize your search results?" "Yes please!"

  • CobrastanJorji 6 hours ago

    I doubt many people besides Google executives really want Google to summarize your search results.

    • jankeymeulen 6 hours ago

      I was thinking the same - who on earth would want that? - and so did my technical colleagues, but since the AI summaries rolled out over here, the non-technical folks I've asked about it actually seem to like it.

      • falcor84 6 hours ago

        I'm also a big fan of Perplexity, using it for about a quarter of my searches.

    • kstrauser 5 hours ago

      Seriously? I think that's what most people want from a search engine, even if they wouldn't phrase it that way.

      Today I heard my wife ask our HomePod which US state was most similar in size to Germany. First, I was absolutely shocked that it gave a useful and correct answer. Well done, little dingus, and sorry to have doubted you. But more relevant, her goal wasn't to do a search. Her goal was to get an answer.

      For the most part, people want an answer from Google, not a list of pages that might potentially answer them if they're lucky. Sometimes I do want to see a long list of results I can skim through for the most likely answer, especially if I'm looking for technical details on something. But if I ask "how long do I bake a frozen 20 lb turkey?", I really just want a correct answer.

      So maybe people wouldn't actually say they want Google to summarize their search results if you phrased it exactly like that. But I bet most people, most of the time, would say that they wish the thing would just look at the 437 pages of results and tell them the answer.

      • aaronbaugher 5 hours ago

        True. We've gotten used to searching the web being like this: Type in "how long to cook a frozen 20-pound turkey," scroll past a few ads, and then either spot the direct answer in the blurb of one of the results, or see a promising-looking one and click through.

        There's a lot of skimming and scanning that we've come to expect as part of the process in locating a piece of information, and we do it quickly with practice, but that doesn't mean it has to be that way.

  • EA-3167 6 hours ago

    The article itself is a lot more robust than the headline, which is ideally only designed to create engagement rather than stand on its own.

sb8244 6 hours ago

Maybe my position on this is obvious; I honestly don't know much about how others see it.

Preface: I'm generally an AI skeptic.

There are a LOT of people who are doing "business work" for a living, which is significantly different than hands-on coding. AI gives these people a way to just automate all of the (maybe necessary) work that they don't want to do.

The final product being 80% good enough is fine. It is done and doesn't require them to spend time on something they don't want to do.

More often than not, it is at 80% today.

  • gjsman-1000 6 hours ago

    I think it's interesting how many people (and artists) are yelling about how AI can't provide a service as good as a human's.

    That's true, but quality was never the problem. Business leaders typically don't bat an eye about outsourcing to the second world or third world, even if the quality might be subpar.

    Being a business leader is not letting "perfect" be the enemy of "good enough;" and there's apparently a mountain of fields where AI is "good enough." Or, at least, good enough to replace where the third world would have been doing the work.

    • eru 6 hours ago

      That's an interesting perspective.

      Btw, it's not even that workers in developing countries are intrinsically worse---it depends on the task and the people. But no matter how good they are, communicating half a world away and across cultures definitely makes turning your business requirements into good work harder.

    • nothercastle 6 hours ago

      Yeah, there are a lot of tasks where it's just cost and no value, so quality does not matter. Take customer service at Comcast: it's already terrible, and nobody would protest if they made it worse.

zitterbewegung 6 hours ago

I think that incrementally adding features (as OpenAI and their competitors do) is a much better release strategy than the monolithic releases of Windows 11 and macOS, and that's a big part of the negative feedback. Also, Microsoft and Google never should have kept renaming their products multiple times, such as ditching Cortana, which had the largest brand recognition of any AI system. (Saying it was "Cortana plus", like Alexa is now, would have been best.)

I've used many features of Apple Intelligence and Google Gemini and they have made me more productive after I learned how to use them. Generally you get more complainers about a new product than people who use it. Being in the HN bubble doesn't help either, IMHO.

throwanem 6 hours ago

Because that hero image is exactly the graphite sketch (touched with white and gold paint markers) of a PCB the writer wanted, and it took five seconds and five cents to produce, and it is worth more than that to no one including the human artist who did not draw it, because it is only here so people don't assume this stock website layout is broken for its absence.

That the human artist still deserves and requires paying I enthusiastically agree. I would rather that happen in a way which will demean their skills less than having to subsist on makework like this garbage would.

  • psunavy03 6 hours ago

    Throughout history, that's what artists DID. The great works only exist because of the Church or because some rich patron paid for it. And everyone else just ended up copying those for middle-class people's houses and such, because that was the Renaissance version of "Live, Laugh, Love" wall art.

    Art classes go gaga today over the Clothed and Nude Majas, when the whole reason they existed was so some rich noble could have a "respectable" decoration that he could then hoist away when the party got raunchy enough and go "heh, look, she's nekkid now."

    • throwanem 6 hours ago

      Yes. The loudly embittered ones now will be making sandwiches or something in a decade, around when today's quietly thoughtful ones really begin making themselves known to history as the precursor generation of the first great - diffusion artists? Collageurs d'inference? Oh, I'm sure it won't be anything I would come up with, anyway. But what does life look like when keeping the deep-pocketed philistines satisfied takes only two hours a week, plus the usual coffee dates or "coffee" dates or "coffee dates?"

Ekaros 6 hours ago

I think it is in the "Oh well, that is neat" "Anyways..." category.

So generating some text, some image, something else occasionally is pretty cool. Maybe asking some questions and getting something explained. Or search when needed. And of course, chatting when bored.

But for the general person, I do not really think there is that frequent use. And I really doubt that this can be sold as a service to the vast majority of the population. The same way that, say, search can not.

ttul 7 hours ago

Lots of hyperbolic language in this post, but little substance.

  • crowcroft 6 hours ago

    Endless tautology with very little argument.

beloch 6 hours ago

> "Secondly, there is an insane amount of money tied up in AI."

Money attracts attention, both passively and actively. People see OpenAI, etc. spending billions on training and figure there must be something to it. OpenAI and others also probably spend quite a bit on marketing and social media bombing too, and a lot of that will likely be done by humans. If you can spend billions on training, what's a few million more on social media?

It doesn't matter what AI can actually do now. If companies like OpenAI can attract enough investment and customers to stay afloat long enough, then they may yet become indispensable in the future.

The line between bubble and self-fulfilling prophecy is thin.

thdxr 7 hours ago

i think the idea that no one wants it is off

even if the models do not get better there is so much demand even just b2b

we have all these problems we can fix but we are totally limited by availability

cloud providers are overwhelmed by the demand - you can see this in how stingy they are with rate limits and how they don't even talk to you unless you've already spent a lot with them

  • nyarlathotep_ 38 minutes ago

    > we have all these problems we can fix but we are totally limited by availability

    Outside of generating React codebases, what are problems that, thus far, as of today, have been fixed by LLMs?

  • disgruntledphd2 6 hours ago

    If what you're saying is true, how come both Microsoft and Amazon are slowing down capital expenditure?

  • alabastervlog 6 hours ago

    > there is so much demand even just b2b

    Yes... there are a lot of expensive efforts of dubious actual value under way. Basically every company is trying to see if they can replace workers with AI, driving this demand. I've had insight into several of these, and from what I've seen, not a lot of people need to worry about their jobs in the next few years.

  • tonyedgecombe 6 hours ago

    Yet the AI companies are still spending far more money than they earn. Something doesn't add up here.

    • oytis 6 hours ago

      Growth phase, aka market capture via price dumping. People wouldn't be using AI that much if it cost what it actually takes to build and run one.

tim333 3 hours ago

>So why is AI getting so much attention when it seems that there is limited actual demand for it?

He skips the main reason, which is that it will get better in the future. Today chatbots, tomorrow I, Robot/Terminator/Her/AI/The Matrix, etc. Or better, as the movies tend to be biased towards disaster.

Incidentally there aren't really movies about future crypto/NFT/dotcom bubbles. AI and robots are different.

jzellis 6 hours ago

Me: *sees headline* Golly, I wonder if anyone in the HN comments is going to be angrily pro-AI

Jtsummers 6 hours ago

> But everything is getting labelled as AI

Welcome to ~15 years ago when everything was labelled data science. Rebranding statistics as data science was hot because it got investor dollars, and it could get you hired if you went to a bootcamp. Companies everywhere were hiring data "scientists" that barely knew how to program because that's what someone on their board or their investors wanted to see (or they thought they wanted to see it). Today it's AI (machine learning) which is an extension of that earlier data science phase, which itself was an extension of applied statistics branded with a trendier name.

And LLMs (generative AI) fall under the same trend as crypto systems a decade or so ago. If you toss it into your product (actually or just claimed) you get investor money. Because it's a fad. There may be some value from it, but the majority is not valuable; it's just trend following.

tompark 6 hours ago

"Why is AI so popular when nobody wants it?"

This sounds like that Yogi Berra-ism, "No one comes here anymore, it's too crowded."

I suppose well over half of the people who read HN are too young to remember the dotcom craze. Everyone had every scrap of money tied up in tech stocks. IMO, the hype over AI is relatively small compared to other hype cycles. The goofiest part was the endless predictions about the singularity, and "you don't know what exponential growth looks like, man!". I mean, it can still happen, but for a while that's what AI was all about.

  • bluefirebrand 6 hours ago

    > IMO, the hype over AI is relatively small compared to other hype cycles

    I think this is a really interesting observation actually

    The dotcom boom describes a period of time where money was flying around like crazy, the economy went wild

    During this "AI boom", investment and the economy is cratering. It's not even remotely comparable

fionic 5 hours ago

I know I’ll get downvoted hard for this but I can’t tell if these posts are supposed to be satire.

Literally every person I talk to in every single industry uses AI daily: community managers sending different email content; sales managers emailing marketing content, researching prospects, and actively researching agents to help communicate with people automatically; government workers generating RFPs; the defense industry; coders; band members researching audio engineering; real estate agents writing house descriptions for marketing; just to name a few. Everyone also says they love it and that it makes their job so much easier. Not a single person has ever said what these article headlines try to claim: "man, this is awful, seriously AI is such a dumb concept and it's making life worse, no one asked for all this AI to get in my way all the time."

Obviously my experience is anecdotal, but it makes it very hard for me to understand this kind of negative content: who it's for and who it's serving. I think people are aware of auto-generated content, and the words here ring so empty to me that I feel it has to be the case for others as well.

Pessimists will get left in the dust of these machines whose shoulders the optimists ride on.

nancyminusone 6 hours ago

> extra large high res image of generated useless fake unrealistic circuit board

> "Why don't people like AI?"

> many such cases

It's a damn cliche at this point, why does everyone still do it?

oytis 7 hours ago

A lot of people seem to want it. At least they want it more than dealing with a human to cover their needs.

oldjim69 6 hours ago

Because no one wants to pay anyone for work

  • gjsman-1000 6 hours ago

    Before AI, it was called "outsourced to Philippines, India, or China."

    AI (as Indian companies are currently panicking about) is good enough to mostly replace their role as the lower-tier budget option.

AIPedant 6 hours ago

Beyond codegen, it seems to me that the resolution to this paradox is that ChatGPT is a very popular toy in peoples' home life, but a lot of those very same people are wise enough to not use it for enterprise applications. Or, if they foolishly trusted Satya Nadella, LLM-assisted work eventually blew up in their face and they stopped using it. So gen AI is quite popular, but badly falls short of tech's aspirations.

I hate generative AI and refuse to use it, but I hear of people using it all the time in low-stakes contexts:

1) recipes (the cookies might suck but they won't be poisonous)

2) low-quality infotainment (NotebookLM)

3) OpenAI proudly celebrating that horrible Studio Ghibli crap - unlike dishonest math benchmark scores, garish slop on demand actually brings in customers!

4) ChatGPT boyfriend scams :( https://news.ycombinator.com/item?id=42710976

And I've also heard of people using it at work and being severely criticized:

1) ChatGPT-drafted license agreements that the executives would never agree to

2) summarizing documents you were too lazy to read and missing crucial context

3) coworkers being personally offended (or superiors being angry) about a ChatGPT email

Programmers and bottom-barrel creatives have the only reliable success with LLMs if there's real money at stake. Then there are notable but low-margin use cases like dyslexia assistance, Be My Eyes, etc. For everyone else, it's just a nifty doo-dad.

dustingetz 6 hours ago

it’s a tech culture war, devs are gaslighting the business sucking down incredible salaries while reporting that the jira ticket is delayed by tech debt, the business is gaslighting the devs with the inept technology leadership, stupid HR games (my wife’s workplace dragged everybody in for fucking friendship bracelet day), secret DEI policies while paying high performers the same as negative performers because they can’t tell the difference.

These two groups hate each other and AI promises holy grail to both parties - devs can more easily learn the 1000th tech stack in between yoga classes while pretending to work, the business dreams of finally firing all the devs and keeping all that money for themselves. And neither group has a clue how this dream will be fulfilled, but they want to believe because computers can now talk, so enter the entrepreneurs and VCs to gaslight everyone involved with fake stories of how 35% of LOC at Google was coded by AI (erm, accepted IDE autocomplete), laughing all the way to the bank while they vacuum up all that dumb money, poor befuddled executives that roleplayed their way into an 8 figure budget responsibility by being tall white men with blue eyes

BuckRogers 6 hours ago

I'm excited about AI after using Grok. It quickly replaced Google for me. I'm not sure I'd actually pay for it, I'd just go use another free AI, but it's very, very good. I definitely think a locally hosted AI running on my GPU that can use the internet, something like Nvidia ChatRTX will be everywhere soon. I know it'll replace most knowledge and arts jobs within 1, maybe 5, maybe 10 years. But nothing is going to stop that. It definitely helps me get through some real drudgery as a developer as well. I definitely focus on the bigger picture now more, and less on digging into mundane drudgery like a syntax issue. I think anyone that doesn't fear losing their job as an artist or as a doctor or developer wants it. It's going to do a better job in many things. Improving lives, quality of life, and lifespans. No doubt in my mind.

rvz 6 hours ago

Only ChatGPT and Deepseek are seen as new "AI" in the mainstream.

The rest? 95% of people have not heard of them.

coretx 6 hours ago

Shareholders are not nobodies.

gjsman-1000 7 hours ago

Quite the opposite: It would appear, especially with the rise of ChatGPT, that the vocal "I don't want it" people are the extreme minority.

ChatGPT alone has over 100 million active users per month. Not necessarily paying users; and not necessarily a number that's going to double overnight again requiring another server blitzscale; but it's comfortably cemented.

https://techcrunch.com/2025/03/06/chatgpt-doubled-its-weekly...

  • nancyminusone 6 hours ago

    Going to ChatGPT to ask for a quick python script isn't quite the same as "AI Overview" being inserted into every search result page, a fake generated image at the top of every post, entire fake websites taking up SEO with whole articles devoid of information, people "writing" and "illustrating" generated books and trying to sell them for no effort put in...

tcbawo 7 hours ago

I think people want to use it for their own benefit. Years of invasive advertisements have many people convinced that AI integrated into consumer products is more for corporate benefit than for the user's benefit.

  • lupusreal 6 hours ago

    I agree. I use AI stuff, but on my own terms. I don't want it integrated into anything which I'm not buying specifically for the AI.