TrackerFF 4 hours ago

I'm guilty here. For years I've been meaning to learn modern webdev, but every time I've sat down to read the docs, tutorials, books, and what have you - I just give up after a couple of hours. Getting seemingly easy stuff done is just a drag.

The other day I decided to try ChatGPT 4o with canvas. For a solid year, I've planned to create a simple membership registration and booking system for this small club I'm part of - just simple stuff to book rooms in a building.

Well, to my absolute amazement - I had a working product up and running after 4 hours of working with ChatGPT. One block at a time, one function at a time. After a day I had built out a bunch of additional functionality.

So while I'm not completely clueless about back-end programming, my front-end skills are solidly beginner. But working with ChatGPT felt like a breeze. I think I manually modified at most 10 lines during all this; everything else was just copy/pasting and uploading source files to ChatGPT.

Any errors I'd get, I'd either copy/paste, or provide a screenshot.

I actually tried doing something similar when GPT-3.5 came out almost two years ago, but it was just too cumbersome then. What I experienced the other day felt light years beyond that.

So, did I learn anything? No - not really. But did it solve a problem for me? Yes.

EDIT: But I will add, it did provide solid explanations for any questions I had. Dunno how well it would have worked if my 70-year-old mom had tried the same thing, but it's a game changer for people like me.

  • swatcoder 3 hours ago

    > So, did I learn anything? No - not really. But did it solve a problem for me? Yes.

    And this is exactly the concern.

    The tools are genuinely useful for some tasks. But unlike club organizers getting to DIY some hobby project for their club, students aren't yet being tasked to produce useful things in the best way possible. They're being tasked to do fairly rudimentary things so that they can learn some fundamentals by way of practice.

    And likewise, in trades like ours, juniors are tasked to do useful things, but they're given affordance to deliver those things in ways that help them learn some fundamentals by way of practice.

    Students and juniors who skip the practice are basically trading away their future expertise and readiness in order to accomplish trivial things that barely matter, if at all. Some of them may become the first generation of expert prompt engineers, accomplishing things in totally new ways in what amounts to a novel trade, but many of them are just going to be shooting themselves in the foot.

    • echelon an hour ago

      This is experience though, and people do learn this way.

      This is the exact same exercise as my first time slapping Dynamic Drive scripts together to customize EzBoard back in the 2000s. I didn't understand any of it at the time.

      This style of learning is hands on. You learn a little bit about the shape of the problem before you sit down and learn the theory.

      Not everyone learns by opening the book first. Some people like to get their hands dirty. Introduction through practical osmosis can lead to a fertile appreciation for the theory.

      • swatcoder 35 minutes ago

        > This style of learning is hands on. You learn a little bit about the shape of the problem before you sit down and learn the theory.

        We're talking about getting some project or task done. It's a practical exercise by definition. Any learning experience to be had is going to be hands on, and for student/junior-level tasks, it's not going to be some product of knowing deep theory in the first place.

        But the process of identifying the boilerplate that needs to be written, the process of manually entering it, the process of debugging your own code that you wrote, the process of scouring for examples and explanations, the process of being held accountable in a teacher or colleague's review, the process of discussing your experience of the task with someone who already understands it well... these all provide extra opportunities for hands-on learning that are short-circuited when having an AI put it together for you.

        Yes, script kiddies and VBA/Excel junkies in the sales department have been slapping together programs they didn't understand for decades, and many people have now joined the industry thinking that they might secure a career as an "engineer" by following tutorials well and pasting StackOverflow snippets efficiently. And while some people who found themselves starting on that path have eventually come to transcend it and learn fundamentals more deeply, the "slap it together" mentality, the "find a tutorial" mentality, and now the "have a chatbot do it" mentality easily become quiet traps for people who don't realize that they need to actively transcend them at some point.

        You can genuinely learn a lot about football by playing Madden on your couch, but if you don't get out on the field and actually play some games, your dreams of making it into the NFL are probably not going to pan out.

      • fao_ an hour ago

        It's not learning if the AI is feeding the person an incorrect model of the world, though. There's less likelihood of that if someone is reading information curated by a human that understands the shape of the problem and the domain. The AI doesn't "understand" any of that and just spits out words in an order that "seems correct" — that's precisely the problem.

        • dr_dshiv 21 minutes ago

          The idea that AI doesn’t “understand” seems implausible with current models. We can say “machine understanding” if normal understanding requires felt experience. Otherwise, for all intents and purposes, the power of these tools rests in their understanding.

          • swatcoder 7 minutes ago

            The power of these tools rests in how common certain patterns of text are, in both immediate and superstructural ways.

            They force us to admit that with 8B people in the world, many of the questions we have and tasks we pursue have already been approximated countless times. They reveal that much of what we do is not so original.

            Understanding -- human or machine -- is something different, and enables invention/originality/reflection in a way that recent innovations are not yet able to achieve on their own.

            Importantly, though, students and juniors are specifically being assigned challenges that are already known not to be novel or inventive, which is why these tools can so easily do the work for them. But when they let the tool do so, they sidestep the unique growth opportunity they were given in the first place.

        • echelon 30 minutes ago

          Unless the learner is being purposely misled, it's still learning no matter what the entry point is.

          Even if the learner is climbing a suboptimal hill, they're still learning the subject landscape and getting a sense of it. It's still a gradient.

          The entire subject of chemistry is like this. They feed you lies and half truths for the first few years of your undergraduate career so that you develop a sense for things. The real model is far too complicated and scary to introduce.

          • anon22981 7 minutes ago

            It's possible to learn literally nothing when using gen AI. You can copy/paste stuff without even glancing at your code. For student-work-sized projects, I'm sure it's very doable to have a working product without knowing anything about how it works.

            Today I wanted to try to create a tool for a game: I snapshot a picture, and a program recognizes the clipboard event, does image recognition things, and gives me data. I had a working PoC in 3 hours and learned nothing. (Tbf I knew what I wanted and how to do it in general terms, so the process might be different for a beginner.)

    • valval 29 minutes ago

      I'm at least twice the programmer I was before LLMs, and I spend maybe a tenth of the time reading docs that I did then.

  • aimazon 4 hours ago

    The point missed here is that you didn't need to write any code at all, with or without ChatGPT. ChatGPT helped you with busy work: you reinvented something that already exists. Instead of using a mature and established membership platform, you built your own. The reason this has parallels to education is because that's what education traditionally is: busy work.

    You did learn something, by the way: you learned how to use modern tools. You didn’t do things most efficiently but it was more efficient than writing code without the help of ChatGPT.

    • stackghost 3 hours ago

      >The reason this has parallels to education is because that’s what education traditionally is: busy work.

      Busy work is work that is assigned merely for the purpose of occupying one's time.

      That's not the same thing as practice. We drill children in arithmetic not to keep them busy but because it turns out repeatedly solving multiplication problems is an effective way to teach children their times tables.

      • hbosch 3 hours ago

        >That's not the same thing as practice.

        Exactly right. In terms of education, there generally seems to be a blurry line between what is considered learning and what is considered memorization. If you memorize your times tables, it doesn't mean you've learned multiplication, for example... oftentimes the ability to memorize and recall things is opposed to learning, which means leveraging previous knowledge to solve something new.

        In the case of AI, it usually presents facts and opinions simultaneously (something a calculator famously does not do, for example). Facts are memorized, opinions are learned. In all core studies it's always been more important to understand what you're solving for, and why, rather than "how" to solve it. The continued dissolution of the "how" barrier is a net benefit for all of civilization, and when experts of "why" are valued more than experts of "how" the world will be a much better place.

        • nunez 2 hours ago

          This is one reason why many educators are phasing out homework. It's great practice but can easily lead students to regurgitating information instead of understanding and retaining knowledge. This is also why quizzes and tests are vital in a well-designed curriculum: they test understanding (or at least are supposed to).

      • ghostpepper 3 hours ago

        This may be why the practice was invented, but I bet there are plenty of teachers who see it more as a way to keep students busy.

    • pclmulqdq 3 hours ago

      When you are learning something, that busy work helps. What you think of as busy work when you are a professional is actually often sort of novel to learners and is a simple example of how to do stuff.

    • whimsicalism 3 hours ago

      I love that AI negativity on HN is so strong that we reclassify whatever work AI can do into “busy work” as soon as it is possible.

      • aimazon 3 hours ago

        That’s not my point. I’m not disparaging AI. I described AI as modern tooling that is beneficial to learn. I’m sure there are many professional developers saving time using AI to generate code they would have otherwise written. My point is that in this specific case, AI didn’t enable anything useful. I would have said exactly the same if the OP had written the code without AI. If a problem is long solved, reinventing it is busy work. Busy work can be fun, I reinvent things all the time, but that doesn’t change the nature of it. If the project they had built had been something novel (that does not exist) then it would have not been busy work.

        I was shit talking education if anything :)

        • valval 11 minutes ago

          So your definition of busy work isn’t “work AI can do”. It’s “work that accomplishes things that have already been accomplished before”.

          The latter is even sillier. Your position might be indefensible.

    • JTyQZSnP3cQGa8B 3 hours ago

      > you reinvented something that already exists

      Since AI can’t invent new stuff, who will do that? Juniors who haven’t learned anything because of those tools? Or seniors who will disappear one day because they are retiring or are being replaced by AIs?

      I already work with juniors who use ChatGPT and cannot explain what they wrote. They have a fucking engineering degree and don't know anything. It's catastrophic and may get worse in the future. What will happen if it continues like this?

      • aimazon 3 hours ago

        Code is an input, not an output. People don't care about code, they care about products. You can build something new using code that already exists: every product we use today is built on a lot of what came before.

        My point wasn’t that writing code with AI is bad, my point was that writing code for the sake of writing code is bad. If something already exists, use it. If something doesn’t exist, build it, bring something new to the world — whether that’s with hand-typed code or ChatGPT assisted code, I don’t care.

        I think we should write less code.

        • jpc0 2 hours ago

          > You can build something new using code that already exists: every product we use today is built on a lot of what came before.

          I don't disagree with this from a business perspective, but from an engineer's perspective I find it severely limiting.

          Even very basic things should probably stay fresh for you. If you cannot implement a basic parser (recursive descent / Pratt, etc.), you will very likely reach for regex when there is a better solution that isn't a lot of code.
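
          For illustration, a toy recursive descent parser for + and * over single digits might look like the sketch below (grammar and names are purely illustrative, not production code):

              // Toy grammar: expr   := term ('+' term)*
              //              term   := factor ('*' factor)*
              //              factor := digit
              function parse(src) {
                let pos = 0;
                const peek = () => src[pos];

                function expr() {
                  let value = term();
                  while (peek() === '+') { pos++; value += term(); }
                  return value;
                }
                function term() {
                  let value = factor();
                  while (peek() === '*') { pos++; value *= factor(); }
                  return value;
                }
                function factor() {
                  const ch = src[pos++];
                  if (!ch || ch < '0' || ch > '9') throw new Error(`unexpected '${ch}'`);
                  return ch.charCodeAt(0) - 48;
                }

                return expr();
              }

              console.log(parse('2+3*4')); // 14 - precedence falls out of the grammar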

          You should probably know how to write leftpad... or how to strip ASCII whitespace using an ArrayBuffer and a for loop in JS. These are things that are extremely easy but a little tedious to do, and they are fundamental skills for building up more complex solutions later.
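
          For example, quick sketches of both exercises (the function names are just mine):

              // leftpad: pad `str` on the left with `ch` until it is `len` chars long.
              function leftPad(str, len, ch = ' ') {
                let out = String(str);
                while (out.length < len) out = ch + out;
                return out;
              }

              // Strip ASCII whitespace from a byte buffer using a typed array view
              // over an ArrayBuffer and a plain for loop.
              function stripAsciiWhitespace(buffer) {
                const src = new Uint8Array(buffer);
                const dst = new Uint8Array(src.length);
                let n = 0;
                for (let i = 0; i < src.length; i++) {
                  const b = src[i];
                  // skip space (0x20) and tab/LF/VT/FF/CR (0x09-0x0d)
                  if (b !== 0x20 && (b < 0x09 || b > 0x0d)) dst[n++] = b;
                }
                return dst.slice(0, n).buffer;
              }

              const bytes = new TextEncoder().encode(' a b\tc\n').buffer;
              console.log(leftPad('42', 5, '0'));                                 // "00042"
              console.log(new TextDecoder().decode(stripAsciiWhitespace(bytes))); // "abc"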

          You should probably know how to build and reason about some more advanced data structures in your language: basic trees, directed graphs, tries. If these are second nature for you to implement, you can come up with novel solutions to actually novel problems when they come up.
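
          As one concrete example, a bare-bones trie is only a couple dozen lines (a rough sketch, not production code):

              // Bare-bones trie: insert words, then test membership by walking
              // the tree character by character.
              class TrieNode {
                constructor() { this.children = new Map(); this.isWord = false; }
              }

              class Trie {
                constructor() { this.root = new TrieNode(); }
                insert(word) {
                  let node = this.root;
                  for (const ch of word) {
                    if (!node.children.has(ch)) node.children.set(ch, new TrieNode());
                    node = node.children.get(ch);
                  }
                  node.isWord = true;
                }
                has(word) {
                  let node = this.root;
                  for (const ch of word) {
                    node = node.children.get(ch);
                    if (!node) return false;
                  }
                  return node.isWord;
                }
              }

              const t = new Trie();
              t.insert('tree');
              t.insert('trie');
              console.log(t.has('trie'), t.has('tri')); // true false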

          You also get an innate understanding of where the performance characteristics of certain algorithms and data structures actually lie, because big O doesn't always tell the full story...

          • nprateem 2 hours ago

            And yet in 20 years of coding I've never needed to write any of these. Even implementing a graph is something I've only needed once or twice.

            It's far more important to know what you want to do rather than how to do it.

            • jpc0 an hour ago

              As the parent comment said: even if you never need to do it, being quite familiar with these topics can help you select the correct solution.

              Sometimes you don't need a binary tree, you just need an O(n) linear search, but someone who has never played with the actual low-level data structures has no idea when that matters. In their mind a hashmap makes a ton of sense, because searching is between O(1) and O(log n) depending on implementation; yet in many cases a flat array will be significantly more performant and is a simpler implementation.
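
              A toy illustration of that trade-off (made-up data and illustrative names; measure before trusting any of it):

                  // Same lookup two ways: a flat array scan vs. a Map. For a small,
                  // hot working set the O(n) scan is often the faster, simpler option.
                  const entries = [
                    { key: 'red', value: 0xff0000 },
                    { key: 'green', value: 0x00ff00 },
                    { key: 'blue', value: 0x0000ff },
                  ];
                  const table = new Map(entries.map((e) => [e.key, e.value]));

                  function linearFind(key) {
                    for (let i = 0; i < entries.length; i++) {
                      // O(n), but contiguous and branch-predictable
                      if (entries[i].key === key) return entries[i].value;
                    }
                    return undefined;
                  }

                  console.log(linearFind('green') === table.get('green')); // true
                  // Where the crossover sits depends on n, key size, and allocation
                  // behaviour - the "big O doesn't tell the full story" point.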

              Now it probably doesn't matter, but when it matters it's better to know the answer.

              That for me is the big distinction between software engineering and software development.

              Plumbers don't need to be engineers, but there are times when you really need an engineer to design the plumbing system.

              Strive to be the engineer, purely because you will enjoy the craft a lot more, and people recognise drive and ambition.

              It doesn't matter if you are in the right place at the right time if you don't have the skills to back it up.

              Granted, if all you want to be is a plumber that pipes APIs together and lives a different life, by all means, I encourage you to enjoy life. But don't make students believe that's all there is to the industry.

            • nunez 2 hours ago

              It's not about implementing a graph or a trie. It's about knowing when and why these data structures matter.

              Sure, you (or an LLM) can probably find a package that can quickly search for a file in an extremely large filesystem.

              I'm guessing that the authors of S3 didn't have that luxury when they were building out this service years ago, though. There are very few people on Earth that deal with exabytes of data, and prior art only gets you so far in this scenario.

              The only way something like that can be built is by truly understanding CS fundamentals. Most people study CS to become a SWE. If programming gets reduced to maintaining prompts and optimizing here and there, then there is a real risk of this discipline eroding over time.

        • nprateem 2 hours ago

          Yes, but there's a balance. The HN purists who are wedded to their knowledge struggle with this, but someone does need debugging skills to go in and fix things when one of the stupid things AI does breaks something.

          Also I've found telling it specifically where it's messed up is way more effective than just shouting at it to fix it after it's failed a second time. And sometimes you just need to manually fix it.

          I wrote an entire library last weekend, then rewrote it on Monday when I realised I'd messed up. Two things I wouldn't have bothered to do without AI doing the coding.

          I know how the important stuff works and I could pick my way through the JS, but glad I didn't have to write it. I mean, I just wouldn't have.

      • valval 27 minutes ago

        Were juniors able to explain what they wrote when stack overflow was the source?

      • weard_beard 2 hours ago

        I feel in some regards this worry is akin to not knowing assembly. When/if it becomes good enough that the entirety of coding is abstracted away we won't care that new entrants don't understand it.

        Let's just not lose the documentation on how to modify/improve the AI when needed...

        Maybe that can be the job of a very select few. Fixing AI the way we fix robots for manufacturing.

  • tempodox 3 hours ago

    You happened to use an LLM for something that is most prominent in its training data. Do something off its beaten path and correcting all the hallucinations will be more work than just plain old learning it.

    • ants_everywhere 2 hours ago

      This is my experience too. Even with pretty common problems in popular languages like Python. The code generated by ChatGPT 4o is full of bugs. If you give it feedback it just thrashes instead of trying to locate and correct the underlying problems.

      Even if you ask it to think about and correct the underlying problems, it still generates buggy code, often with the same problems it was pretty decent at reasoning about.

    • whimsicalism 3 hours ago

    Not true in my experience - of course you have to work within its capabilities, but I find it to be a capable partner across large segments of tasks.

  • simonw 3 hours ago

    I've long believed that the best way to learn anything in tech is to try to build something with it.

    My hunch is that people who use the process you are describing will still get a massive leg-up in learning skills like web development.

    Often it isn't a choice between using AI assistance to get something working vs spending 20 hours figuring it out from scratch: it's a choice between getting somewhere with AI or not doing the project at all, because life is full of things to do that are more rewarding than those 20 hours of frustration.

    Anecdotally, I’ve heard from a bunch of people who always wanted to learn software development skills but were put off by the steep initial learning curve before you see any concrete progress… and who are now building useful things and getting curious about learning more.

    • lovethevoid 2 hours ago

      I agree on building to learn, but disagree on the massive leg-up. It's like believing that using a template off GitHub is going to teach you much (outside of specific use cases). This is also why a lot of GitHub issues submitted by users now don't even follow the very basic steps you may have outlined for narrowing down problems: here's a long blob of code copied from the terminal, fix it.

      What makes the most difference in building to learn is the tiny steps you take to build. Printing hello world for the first time, changing it and seeing something else, using inputs for the first time to print hello, [variable], getting that image to animate across the screen. Each step becoming a great foundation for further curiosity, rather than turning your project into a black box.

      In contrast, I've heard from a bunch of people who wanted to learn software development, but now don't see a point since AI can do it. Same with drawing. There's a large growing apathy towards learning skills I've noticed.

      This is why most advocates for it don't do it from the perspective of learning. They do it from the perspective of building fast in the hands of those who already grasp the foundations.

  • treflop 3 hours ago

    If it’s not your daily job, I don’t see the harm. Sometimes I use ChatGPT for something I absolutely don’t care about.

    But if it is, then I think you’re trading it for a career of trial and error.

    Regularly I watch people at work spend a week trying to solve a problem, but because I learned the fundamentals at some point in my past life, I am able to break down the problem, identify the root cause, and solve it quickly.

  • analog31 2 hours ago

    >>> So, did I learn anything? No - not really. But did it solve a problem for me? Yes.

    In my old age (60), I've gotten a little bit philosophical about this issue. I'm old enough to have pored through entire textbooks and manuals, e.g., BASIC, HyperCard, Turbo Pascal, MS-DOS (to name a few). But I can still ask myself at the end of the day:

    So, did I learn anything?

    Those things are all flawed, temporary creations of some individual, and are no longer useful. On the other hand, there are certain things that I've learned, and consider to be "fundamental," such as math, physics, and admittedly, music. Now a philosopher might correct me and point out that my choice of "fundamental" is arbitrary, but if nothing else, those things are long-lasting. The laws of physics that I'm capable of grasping haven't changed in my lifetime, nor has the technique of playing the double bass without injury.

    Perhaps a thing you could do is sit down and decide what things you consider to be fundamental enough (relative to your interests) to learn on a deep level, and what things you can interact with on a superficial basis by letting AI take care of them for you.

  • hammock 3 hours ago

    The whole thing reminds me of how we used to check out BASIC books from the library and manually type in complete programs, games etc from the book and run them. I wouldn’t say I learned NOTHING, far from it, but it definitely wasn’t a path to become fluent in BASIC

  • illwrks 2 hours ago

    Not to be a downer, but have you also perhaps created a liability for your club? If you don't fully grasp what is going on with the code, have you potentially left the door open to exploits?

    • dmurray an hour ago

      A reasonable thing to be concerned about, but he described having a solid understanding of the backend, which is the more likely place to introduce a security hole.

  • sibeliuss 3 hours ago

    As someone who already has skills in backend / frontend, AI tooling has made me fearless in terms of new material. I couldn't type it out by hand, but by getting something working through a (much faster) trial and error process, I'm learning so much! I suspect this is your case as well. There's a lot of learning going on underneath, which will only improve your abilities in ways that will come back as astonishingly beneficial if you keep working on your project.

  • furyofantares 3 hours ago

    You also didn't learn modern webdev for the years you'd been meaning to.

    You're actually better poised to learn it now if you care to, now that you have a component you care about that already works that you can work from. Of course maybe you won't, maybe having GPT there will indeed prevent you from ever learning it, I don't know.

  • bdlowery 3 hours ago

    You skipped all the hard parts, all the struggling, and now you have a working product without a mental model and can't level up to doing harder things on your own. Struggling IS learning. You didn't try different paths, piece different info together, and then eventually create a mental model. You just used ChatGPT to skip to the end result.

    It's like enrolling in Calc 2, cheating on all the homework to get an A, and saying "did I learn anything? No, but it solved all of these annoying homework problems for me!" Now when you have to take the first exam you're screwed, because you didn't learn anything.

  • drdeca 2 hours ago

    “In the Phaedrus, writing is the pharmakon that the trickster god Theuth offers, the toxin and remedy in one. With writing, man will no longer forget; but he will also no longer think.”

    When I have to navigate to somewhere I haven’t been before, I generally do not read a map, but follow instructions from some navigation software. As a consequence, I often don’t really know where places are, just the route I take to get to a destination. With GPS navigation, I do not get lost, but neither do I have much awareness of how locations are spatially arranged.

    Such technologies seem to always be like this.

    A potion which removes a difficult task, but also dulls the ability to do such tasks oneself.

    It is like that one SMBC comic https://www.smbc-comics.com/comic/identity "Humans offloaded memory to books, then thought to computers. Now, we're offloading our desires to the network. All that remains are basic bodily functions, which we'll offload in another generation or two. At that point, we'll just merge into one united entity, so it all works out."

aithrowawaycomm 3 hours ago

This condescension is very common and very irritating:

> Q. You say that the best experts of the future will be those who make the most use of AI. Are people who are waiting to use AI making a mistake?

> A. I get it, it’s an unnerving technology. People are freaking out. They’re getting a sense of three sleepless nights and running away screaming. It feels like an essential threat to a lot of careers. I think if you’re a good journalist, the first time you think, “oh no.” But then you start to see how this could help you do things better than before.

There are a lot of white-collar jobs where LLMs do more harm than good because a 1/4 hallucination rate means you waste too much time on wild goose chases. I briefly thought GPT-4 was useful for finding papers given a description of the results - I “kicked the tires” with some AI research and was very impressed. But when I tried to find papers on animal cognition, about 75% of the results were fictional, though supposedly authored by real animal cognition experts. And GPT-4o is even worse! The tools are just not good enough for my use case; Google Scholar is far more reliable.

I just don’t understand the childish motivated reasoning behind assuming the skeptics are scared. Maybe if I spent “three sleepless nights” talking to ChatGPT I would be more enlightened.

  • simonw 3 hours ago

    > I briefly thought GPT-4 was useful for finding papers given a description of the results.

    That’s one of the many poorly documented traps of LLMs: trying to use them to find papers like that is a fast-track to worthless hallucinations. If that was one of your first experiments I can’t blame you for thinking this tech is “more harm than good”.

    LLMs are terrible search engines… except for the times when they are great search engines!

    Learning when and what to use them for continues to be a significantly under-appreciated challenge.

    • aithrowawaycomm an hour ago

      No, it was not my first experiment - again with this unbelievable condescension! I have been playing around with this stuff since GPT-3. That was my first practical use case where GPT wasn't a totally useless waste of money. It works very well with AI-related papers, maybe a 1% hallucination rate. I got the idea from an AI researcher, and I was intrigued that ChatGPT might actually be useful for me. But it was not. The problem, as always with ANNs, is that I went slightly off the happy path. Even a skeptic like me assumed the success with AI papers was evidence that GPT-4 was better at remembering its pretraining data; instead I think it's evidence that a data contractor RLHFed the answers and GPT had them memorized.

      > LLMs are terrible search engines… except for the times when they are great search engines!

      But note that what you said about finding papers was wrong, it works extremely well for AI research. The reason LLMs are useless to me across the board is that these unpredictable and arbitrary limitations apply to everything, not just search. "Learning when and what to use them for" is pure trial-and-error because it seems to amount to guessing what tasks the 3rd-party data contractors trained the LLM to solve.

      I am not a Python or JavaScript developer, nor do I write code for extremely well-known libraries. I use F# for oddball projects (often analytics), and GPT-4 was utterly useless for F# codegen. My first experiments with GPT-3.5 showed that it would plagiarize hundreds of lines of public F# projects, including from my own GitHub, without any prompt engineering or trial-and-error - it was just blind plagiarism. GPT-4 isn't quite that bad, but it's still not even close to being good enough to help me - in particular it has no understanding of high-performance F#. I would be spending far more time auditing and optimizing its crappy code. And time spent writing code has never been the limiting factor in my F# development.

      I also do some recreational mathematics on finite geometry and combinatorial group theory; GPT-4 was utterly useless here, even with CoT prompting, and even though it solved more complex graduate-level algebra problems without any difficulty. Of course, those problems were repeated and solved in dozens of graduate textbooks. My cute little groups, not so much. I believe CoT prompting is theoretically incapable of helping GPT here since the computational complexity is too high. What CoT prompting gives you is a bunch of insidious errors that take time and effort to unravel.

      Otherwise there's nothing I do that would even conceivably benefit from an LLM: it can't play guitar, it can't play with my cats, and I would never use it to communicate with friends or family. I guess I could fill my brain with shallow subject knowledge about something, a few choice sentences. But I'd much rather understand something in depth by reading a book. I'm not too busy to read a book. Otherwise... maybe I could use LLMs to write polite no-thank-yous to unsolicited recruiters.

      This tech truly has nothing to offer me. I think you are failing to understand that, because you are a Python developer who maintains one of the biggest Python web frameworks and writes a popular blog for general tech audiences, LLMs are unusually well-suited to your use cases, for reasons that will not extend to people working in more isolated corners of the world.

  • lolinder 2 hours ago

    > There are a lot of white-collar jobs where LLMs do more harm than good because a 1/4 hallucination rate means you waste too much time on wild goose chases. I briefly thought GPT-4 was useful for finding papers given a description of the results - I “kicked the tires” with some AI research and was very impressed. But when I tried to find papers on animal cognition...

    This is less a question of which jobs benefit from AI in general and which don't than it is a question of tasks and specific tools.

    ChatGPT is not a search engine, so if you're looking for existing documents it's a very bad choice. But I've found myself using Perplexity—an LLM-powered search engine—more and more often because it reliably turns up results that Google fails to turn up.

    I suspect Perplexity is still also the wrong tool for scholarly articles, but that's not a fundamental limitation of the tech, it's just a question of the focus of the tools so far.

  • dyauspitr an hour ago

    What are you talking about? I haven't had a hallucinated link to paper sources since 3.5. They must have some sort of post-processing in place to remove made-up links.

karaterobot 3 hours ago

> Q. Isn’t it inevitable that AI will make us lazier?

> A. Calculators also made us lazier. Why aren’t we doing math by hand anymore? You should be taking notes by hand now instead of recording me. We use technology to take shortcuts, but we have to be strategic in how we take those shortcuts.

An unpopular opinion I have is that most of the doomsaying about technology making us dumber is true. Yes, even back to Socrates. I won't say all, but I'd safely say a lot. What happened was that we developed tools, lost certain capacities without necessarily losing the capabilities that came with them, and redefined the level a normal human should function at. My only point is that people don't like to think that maybe they themselves are less intelligent—in many ways—than people who urinated outside and didn't know what the sky was. But I don't see how it could be any other way. When we say things like "I don't need to remember, I can write it down", and "I don't need to do arithmetic in my head, I'll let a calculator do it", or "I don't need to read the article, someone will explain it in the comments" we are accepting the consequences of that, good and bad.

  • master_crab 3 hours ago

    People aren’t dumber, or smarter. They just focus on what’s the next important thing to tackle.

    For example, I doubt any website programmer knows the circuitry, assembly code, OS level calls, networking, etc, that make any webpage element do anything. Let alone can sit there and calculate any of the mathematical requirements needed to do any of that. But they know how to use an IDE and a framework like React.

    All this is a long way of saying:

    …on the shoulders of giants. AI is just the new tool needed for the next step up.

    • mckravchyk 2 hours ago

      It's a completely different thing. You are talking about civilisation-level constructs; the parent is talking about things like mental fitness and the ability to perform tasks in real time by yourself.

      For example, card payments are a crutch. If you pay by card / phone everywhere and then all of a sudden you have to pay in cash, it becomes mildly challenging, whereas if you are used to paying in cash you don't think about it. The brain is capable of a great deal of automation; performing learned actions is effortless. Unlike with a calculator or a spreadsheet, the buyer is not doing anything, just buying. It's not a bicycle, it's a crutch. It simply atrophies the mental bandwidth. The mind becomes more lax, less sharp, when it does not engage.

      Now imagine what it will do to people's brains when, instead of thinking about solutions themselves, they ask the AI for everything. Those neurons will atrophy, and the person will be even less skilled at asking the AI the right questions than if they had not used the AI in the first place. I think the key will be a balance between doing the work yourself and delegating to AI, but it will be difficult to find that balance. Just like smartphones can be very useful but in the end are a net negative to society.

    • aliasxneo 2 hours ago

      Yeah, I'm not sure I understand how moving up an abstraction layer necessarily makes you less intelligent. I also don't think it makes you smarter. For the average individual, I feel like it simply moves your "cursor" up the stack, but you're not necessarily increasing your context window.

      Perhaps the confusion comes from the fact that we often produce more complex things as we move up layers. It's then assumed that the people who made them must be more intelligent, but as I said, I don't think that's a fair assessment.

      I would say the real measurement for intelligence here is how much of the abstraction layers you actually understand. In other words, can you move your cursor back down the stack and operate just as well as in the higher layers? Can you do this while unifying the complex interactions between each layer into a cohesive model? I've noticed that even AI tends to be pretty bad at this last step. It often takes prodding to get it to see the subtle errors often introduced when working with complex systems.

  • mamcx 25 minutes ago

    I agree, and maybe a better frame is 'capable'.

    I can do math with a calculator, but if it is taken away?

    I can feed myself with doordash, but if it is taken away?

    I can program a complex web-scale app, but if all those tools are taken away?

    What is left?

    Somebody who will die fast.

    Reliance on all of this is removing agency and resiliency. By the law of numbers, the planet still has people who know some of the fundamentals that make the existence of the rest viable.

    But if it is taken away?

  • aniviacat 3 hours ago

    > redefined the level a normal human should function at

    To a level much higher.

    We stopped doing many repetitive, tedious things, but in return moved to things that are way more abstract and complex.

    And that's happening everywhere. Even farmers are getting ever closer to being full-on system architects.

    Oh, you didn't learn to quickly calculate square roots in your head? Instead you spent that time learning about relativity in high school physics class.

    By calling the people of the past smarter, you are really underselling the amount and depth of abstract thought happening everywhere today.

vunderba 2 hours ago

The danger in the eventual ubiquitous availability of large language models (LLMs) isn't necessarily that they can seemingly answer any question.

The real issue arises when it becomes far too tempting to immediately turn to an LLM for an answer, rather than taking a few moments to quietly ponder the problem on your own: engaging with it, manipulating it, exploring different angles, etc. This kind of abstract thinking is a craft that only improves with consistent practice and deliberate effort.

_tk_ 4 hours ago

Very misleading title. From the article:

„The crutch is a dangerous approach because if we use a crutch, we stop thinking. Students who use AI as a crutch don’t learn anything. It prevents them from thinking. Instead, using AI as co-intelligence is important because it increases your capabilities and also keeps you in the loop.“

  • ksd482 4 hours ago

    I feel like this is exactly what the title is conveying. What’s misleading about it?

  • fgbnfghf 4 hours ago

    I use AI to ask questions when I am not totally sure what the question is, and it is very helpful for narrowing that down. It can be powerful as a tool to help get your foot in the door on new knowledge. Just like google search there is a correct way to use it and an incorrect way.

    Another thing to consider is the motivation of companies like OpenAI. Their products are designed to be used as a crutch. Their money is in total reliance on the product.

lolinder 2 hours ago

The headline implies something other than what the interviewee is saying:

> Q. You don’t like to call AI a crutch.

> A. The crutch is a dangerous approach because if we use a crutch, we stop thinking. Students who use AI as a crutch don’t learn anything. It prevents them from thinking. Instead, using AI as co-intelligence is important because it increases your capabilities and also keeps you in the loop.

> Q. Isn’t it inevitable that AI will make us lazier?

> A. Calculators also made us lazier. Why aren’t we doing math by hand anymore? You should be taking notes by hand now instead of recording me. We use technology to take shortcuts, but we have to be strategic in how we take those shortcuts.

ethn 2 hours ago

ZIZEK: that AI will be the death of learning & so on; to this, I say NO! My student brings me their essay, which has been written by AI, & I plug it into my grading AI, & we are free! While the 'learning' happens, our superego satisfied, we are free now to learn whatever we want

  • nunez 2 hours ago

    Yes, but therein lies the rub. Those that know how to learn will benefit. Those that don't will regress, possibly for life depending on when AI is introduced.

    This is especially demonstrated in essay writing.

    Many students associate essays with busy work because the topics they're asked to write about are boring. When the typical assignment that's given is "read this boring ass book from the 40s that's been in the curriculum for decades without revisiting its application in today's world, then write a 1000-word essay on a topic that's been discussed to death that you couldn't give less of a shit about; points will be deducted for views that stray too far from the norm," then it's absolutely unsurprising that most students will shove this into ChatGPT and call it a day.

    On the flip side, when English or composition teachers are forced to assign these assignments knowing full well that it's a crock of shit, it is equally unsurprising that they will feed GPT into GPT and call it a day.

    Students that know how to learn and are actually interested in becoming better writers will find ways around this. Teachers who have the freedom to design their own curriculums will be more creative about the types of prompts they assign and the books they have their students read.

    The common link between the two? Money, of course!

htk 2 hours ago

Terrible agenda-driven title, implying that the interviewee's main message was about the dangers of relying on AI, where in fact it was the opposite, how AI can be a great tool to elevate human capacity if used well.

sirsuki 2 hours ago

I've struggled with this trend even before the AI hype. I love learning and have focused my work towards always learning. But it takes energy and lots of work; some days are better than others. My love of learning keeps me motivated over time. However, I've noticed a downward trend of juniors who are focused on being spoon-fed the answers and avoiding any learning. It drives me nuts and I don't know how to reconcile this kind of mental model.

Smithalicious 3 hours ago

Damn kids, back in my day we had to copy-paste our homework from stackoverflow uphill both ways

Asraelite 2 hours ago

"Free browsing by accepting cookies" or "subscribe and decline".

Is this legal? It might be, but I've never seen other sites do it so it seems dubious.

pluc 4 hours ago

Technology that renders effort and research pointless makes people lazy and stupid, story at 11

  • throwaway918299 3 hours ago

    I’m sure there were people that said the same thing about Google. I’m pretty sure they even said the same thing about the written word.

    • yoyoyo1122 3 hours ago

      It's such a "Back in my day..." mentality.

      "Farmers today are much less skilled and knowledge than farmers 50 years ago!"

  • gomerspiles 4 hours ago

    I suppose a headline that doesn't have much to do with the content is also such a technology?

pier25 3 hours ago

The less we learn the more stupid we'll be. The brain, like any muscle, atrophies if you don't use it.

This is already happening with Gen Z in college.

https://www.theatlantic.com/magazine/archive/2024/11/the-eli...

  • pavel_lishin 3 hours ago

    I'd wager you can find a copy of an article with that premise about every generation going back to a generation after the invention of writing.

    • pier25 2 hours ago

      What about IQ declining after many generations of growth?

      Or what about the attention span crisis?

      Or the lack of technical skills in Gen Z?

      I seriously doubt this is a generational thing as you seem to be arguing.

      Smartphones and social media didn't exist until 15-20 years ago and we're now seeing the consequences.

bachmeier 2 hours ago

This type of article is so frustrating. "You need to use AI to make yourself more productive." Followed by zero explanations of how I can do that. In addition, no mention of the implications of sending all your personal information to an entity that is waiting for an opportunity to use it against you.

fnordpiglet 3 hours ago

"If you use a crutch you won't learn anything" is an age-old truism. I don't know why we would think a new tool somehow changes that dynamic.

My daughter is 10 and she is learning factoring, long division, and other things that a calculator does very well with. But she’s not allowed to use it at this stage because she can’t learn while using a crutch.

She's also learning to write essays. She writes her essays, then puts them into ChatGPT and asks for analysis, feedback, and explanatory revisions. Then she revises the essay on her own, without being able to refer back to the advice. This is using AI as a complement to learning, and it's been remarkably powerful. She can get feedback immediately, it's high quality and impartial, and she can do it as many times as she finds useful. So, the fundamentals of learning don't change no matter how powerful or different the tools become. But ignoring the tools just because they can be used in place of learning is dumb.

  • hyperG 2 hours ago

    It wouldn't be shocking to me that in 30 years, your daughter would be tasked with writing a whole book and not just an essay. Part of the learning process is learning to leverage technology. Something I think we do a really bad job at teaching kids.

    Of course, leveraging technology to do the exact same thing you would do without the technology is a terrible lesson on many levels.

  • ryandrake 3 hours ago

    > My daughter is 10 and she is learning factoring, long division, and other things that a calculator does very well with. But she’s not allowed to use it at this stage because she can’t learn while using a crutch.

    Which seems silly to me, but what do I know, I'm not a teacher. Nobody does long division in real life after K-12 school. It is not a useful skill to have, and it is not a useful concept to know. If I have to divide two numbers I just use a calculator like 99% of the humans on the planet.

    Knowing what division is, and what it means to divide one number by another is valuable, but can you just teach that without teaching the mechanics of "divide the partial dividend by the divisor, then multiply the partial quotient by the divisor, and subtract from the partial dividend, extending to the next blah blah blah blah"? Are we really training the next generation for a world without electricity?

    • ben_w 3 hours ago

      > Nobody does long division in real life after K-12 school. It is not a useful skill to have, and it is not a useful concept to know.

      To my surprise, when I did pure maths at A level I found the same ideas applied to dividing one polynomial by another.

      Of course as a mere memorisable algorithm a computer can also do this, so I'm not sure how useful it is even to pure mathematics, but there is (or was, 20 years ago) some use to the idea.

    • nasmorn 3 hours ago

      Maybe learning to follow a simple algorithm helps some children to structure their thinking. Divisions are not hard to do. Doing a lot of them has little benefit though.

EVa5I7bHFq9mnYK 2 hours ago

Students who use assembly language as a crutch don't learn machine code properly.

brookst 2 hours ago

I'm old enough to remember when the same claims were made about graphing calculators and high-level languages like BASIC.

A better formulation is perhaps “students who use AI to reduce work learn different things”. It’s easy for purists to say there’s no value whatsoever in learning to use a tool rather than learning to do the work.

But that's a judgment about the value of what is learned, and it's a kind of dishonest sleight of hand to substitute that opinion.

renewiltord 2 hours ago

Highly useful tools are always described like this. Amusingly, search engines and then, to a lesser extent, Stack Overflow were both described like this. I can't say it's a very interesting statement in its nth incarnation.

adamnemecek 2 hours ago

Do students who don’t use AI learn anything though?

StarterPro 2 hours ago

No shit. Just like people who "create AI art" aren't actually creating anything.

We've let these tech companies distill learning and creating down to a mouse click.

nonrandomstring 3 hours ago

> don't learn anything

I don't think this is true. We learn a lot: Deference. Dependency. Entitlement. Impatience. Conformity. Distraction. Overconfidence. Intemperance...

If something "does the thinking for you" it has a much deeper effect than simply being a "crutch for the mind". It changes our relation to the world, to knowledge, motives, ambition, self-control...

"AI" is going to change our minds, but from what I've seen so far the outcome is a really quite awful kind of person, a net burden to society rather than a creative and productive asset.

  • visarga 2 hours ago

    Maybe we need AIs that take care of our growth as well. Let's say a school or company mandates one hour of chatting with an AI that will ask you questions and probe your knowledge, and give explanations for things you don't know. The more time you spend with it, the better; it acts like a tutor, not like a subordinate that hides complexity away from you.

    There is no reason AI can't work like a tutor; the current crop is just a first take on the human-AI interaction problem. The motivation part can be solved by gamification and constraints - you need to earn a number of points by chatting with the AI, and those points are reported to your teacher/manager. So a triad of student+AI+human coach would solve the motivation part.

iamleppert 4 hours ago

<Shrugs in passive aggressive>

That’s what they said when the calculator was invented. Out with the old, in with the new! Sorry but not sorry life was so hard before but we got AI to do the work for us now.

  • giantg2 4 hours ago

    Not the same at all. The results of calculators are verifiable. The results of AI can be dangerously wrong without easy validation. Calculators don't eliminate the application of concepts from learning, but AI does.

    • ben_w 3 hours ago

      The results of a calculator are only easy to verify with either someone who can do the same arithmetic or who has another calculator.

      I had this happen to me once while shopping, where I could immediately tell that three items costing less than £1 each should not come to a total of more than £3, but the cashier needed that explained to them.

      (And that's aside from anything about asking LLMs to output in a format suitable for mechanical validation, which they can generally do).

    • parpfish 4 hours ago

      Verifiability is part of it, but the other part is that calculators don’t provide a full end-to-end solution (unless you’re doing worksheets for homework).

      Each calculation is just one step; it's up to the user to figure out which steps to take and how to chain them together. They might even learn that the whole thing would be faster if they could do some of those calculations in their head.

      Like if you're trying to figure out how much wood to buy for a deck, you'd still need to break the big problem down into those individual computations to do in the calculator - unlike an LLM, where you could just ask it and it'd jump straight to a final answer.

    • rthrth45y 4 hours ago

      Good point, but also not a new problem. Humans use the same mechanics, assigning variable weights and biases in order to validate information. If you look at a banana, you can determine what it is based on the knowledge you contain, and that process triggers a similar cascade of weights. The difference is that brains are much more capable of this, but we still misremember things and recite wrong information. The game of telephone is a great example.

      I don't think AI eliminated the application of concepts from learning. I think that has been eliminated enough due to the erosion of our public education systems. If we are not capable of critical thought with the information our own peers present us, why would it be any different when we seek it from AI?

  • givemeethekeys 4 hours ago

    My friends in a top tier high school had no intuition about numbers because they used calculators for everything, including basic addition and subtraction.

    Some of them had customer service jobs where they became utterly confused if the change was 99 cents and the customer gave them an additional penny.

    • pclmulqdq 3 hours ago

      Everyone crying "calculator" about ChatGPT has me convinced that ChatGPT is bad for your education. Learning to do mental math as a child sucked, but now that I can do it, my brain is so much more free to think about stuff that matters. There's no intervening step of "let me pull out a calculator to see what that is," I just know the answer. The thoughts can just flow freely.

      • SoftTalker 2 hours ago

        This is true of all memorized facts. It enables thinking at a higher level. “Why should I learn multiplication when I have a calculator on my phone” ignores this.

        The more you have memorized the more nimble your thinking is. If you have a large vocabulary you can effortlessly express yourself with precision while others are thumbing through a thesaurus (or these days asking an AI to “rewrite”).

        If you know the history of something you can have more interesting perspective and conversations about it.

        There is almost no situation where the person with a lot of memorized knowledge is at a disadvantage to the person who needs to look everything up or rely on tools to do the work.

        • skydhash 2 hours ago

          True. I have friends who refused to learn algorithms and instead insisted that they only needed to master $FRAMEWORK. Then they got stumped by any problem that cannot be solved by $LIBRARY, spending days on it with no result.

          Yes, it takes time, but learning is exponential, and over time the pace will increase greatly.

      • ben_w 3 hours ago

        That's true for every skill that comes fluently. We've only got limited time, which skills really matter?

        I'd say yes to basic arithmetic; but I can't really use my own experience as a software developer who started off in video games to justify why a normal person needs to understand trigonometry and triangle formulas, any more than I can justify why they need to study Shakespeare and Alfred Tennyson over e.g. Terry Pratchett and Leonard Cohen — "I find it intellectually stimulating" is perhaps necessary, but certainly not sufficient, given there's more to learn than we can fit in a lifetime.

        • skydhash 2 hours ago

          Because they give you flexibility. One does not need to master everything; if you have a wide and stable foundation, your options become more numerous later in life. And expertise is a pyramid, so the more diverse your basic skills are, the farther you can reach.

    • mistrial9 4 hours ago

      Except that the typical case is... the charge is $1.01, I give you a $5 and a penny. A penny from the client to the house on top of a charge of $0.99 by the house does... (edit) as pointed out below, a penny plus a $0.99 charge means that the cashier can return a whole number of bills, avoiding any coins.

      • abanana 3 hours ago

        The change, not the charge. If the change is 99 cents, the customer gives an extra penny, the change is now 1 dollar, avoiding a handful of coins.

        It seems to have become the norm for young cashiers to be unable to understand. And if you try to explain, they'll insist "I can't change it now I've rung it through". Some seem to think the system keeps an exact record of the quantity of each individual coin (or they just don't even know where to begin to think about it).

        • bogdan 2 hours ago

            This totally happened to me as well, but I don't necessarily see the connection with calculators. This is all anecdotal IMO.

        • Dalewyn 3 hours ago

          Another possibility: "The register says 99 cents change and I am not paid enough to give any more of a damn than that."

          • pessimizer 2 hours ago

            If you have basic comfort in arithmetic, this is not a calculation that involves giving a damn. Being confused about why someone would give you an extra penny and having a discussion about it with a stranger burns 100x more calories than knowing it. If basic arithmetic involves taking a deep breath and closing your eyes for half a minute, or looking around the room for a calculator, that's a different cost/benefit analysis.

            It's like the difference between a language you are fluent in and a language you are tentative in. If you're fluent, you have to make an effort not to listen to somebody's loud conversation, or not to pay attention to a billboard. They intrude into your consciousness. There's never a situation when I don't do simple arithmetic when exposed to it. I don't have to consciously figure out what 4 times 9 is. Subjectively, the number just pops into my head when I see the question.

            edit: If you can't do this with explanations of identities or related rates, etc., it's hard or impossible to follow any quantitative or especially probabilistic argument. Even the simplest ones. I think this results in people for whom arithmetic is difficult faking it by trying to memorize the words used during quantitative arguments without having any real understanding. Just sort of memorizing a lot of slogans and repeating them during any argument that shares similar words. I think discomfort with arithmetic ruins people politically (as citizens), so I really do think calculators are a problem.

  • s0ss 3 hours ago

    Lots of nuance that you’re not addressing, IMO. Here’s some more nuance:

    Steroids. It's not a perfect metaphor, but I think it's useful. Two people are trying to gain muscle mass. They both have an ideal starting point. The first person has a healthy diet, lots of exercise, and sleep. The second person has all of the same things as the first person; however, they are also taking growth hormones.

    Lots of folks look at the two results and will see lots of different things. Beauty is in the eye of the beholder, I suppose. If you think the end result of the work should be a sculpted body with larger-than-normal muscles… you might opt to use hormones. However, if you think sculpted bodies with larger-than-normal muscles look unrealistic, or just aren't your style/goal… you would probably opt for a more natural approach.

    Both have their merits and could be described as "fit" despite their differences. Folks may value one over the other. People might fantasize about looking like Thor, but if everyone actually looked like Thor, things would be weird. My two cents: Thor is fiction, and while we need fiction, I'm not going to pretend that anyone should look like Thor in order to be in shape or to be described as fit. If we allow ourselves to be fooled into thinking that it can be normal to look like Thor, then we are doing something wrong. Fiction should not become reality.

  • StefanBatory 4 hours ago

    No, I can't agree with you here.

    I'm a software engineering student. I had a phase a year ago where I was using ChatGPT a lot, a lot more than I ever should have.

    And it messed with my brain a lot. I felt I became utterly lazy, to the point where quick fixes that should have taken me 10-15 seconds I was instead doing with AI, which often took a very long time.

    And the point of studying is to learn. You won't learn anything if you have someone else write your software for you.

  • wredue 3 hours ago

    A new study also shows people using AI produce code with 41% more bugs. And that’s just what the users missed!

    Calculators arent giving you “kind of correct” answers.

    • aspenmayer 21 minutes ago

      > A new study also shows people using AI produce code with 41% more bugs. And that’s just what the users missed!

      Do you have a link or more info? Without further context, the 41% doesn't tell us the whole story; all we have is a numerator lacking a denominator. Did bugs per line of code go up, or down? Did # LOC produced after using AI go up/down? For all we know, the increase in productivity caused average bugs per line to go down, rather than up, which is contrary to the argument you're making.

  • blibble 4 hours ago

    the difference is I still understand everything the calculator can do, and can do it by hand on paper

    the AI generation is not going to know how to do anything other than type into chatgpt

    at which point human progress ends and we start going backwards

    • SoftTalker 2 hours ago

      And worse, the people who control the AIs now have ultimate power to rewrite history and mold opinion. If they want everyone to think the earth is flat they can do it.

  • lawn 3 hours ago

    > That’s what they said when the calculator was invented.

    And it's beneficial to ban calculators for learning, which is the point of the article?

  • nonrandomstring 3 hours ago

    > <Shrugs in passive aggressive>
    > life was so hard before but we got AI to do the work for us now.

    Aggression against what? Yourself?

    I think you show a tragic misunderstanding of technology and what it is doing in the world. It's not the work that it's doing for you. It's the living. Is it really your life you want a machine to take?

    Nobody wants to "work". Henry David Thoreau said, "There is no more fatal blunderer than he who consumes the greater part of his life getting his living." All good, no? But that's not what "AI", in the hands of exploiters (or even yourself, as a self-exploiter), is going to do to you. Technology is more "productive" but creates more, not less, labour.

    Better to heed Max Frisch who said, "Technology is the knack of so arranging the world that we don't have to experience it." Would you employ a machine to enjoy a music concert for you? To have sex for you or play games for you so you're not troubled by the effort?

    • Dalewyn 3 hours ago

      >To have sex for you or play games for you so you're not troubled by the effort?

      Two of the three games I play on a daily basis largely play themselves, so... yes, actually. I still have plenty of fun watching them.

      • nonrandomstring 3 hours ago

        That is very interesting. Are you talking about something like city simulation games? I get the entertainment of quite passively tweaking things and watching them unfold. But at what point would you say "hey, I'm not really a participating player any more, this is just watching TV"? Is it still a game at that point?