• Daedskin@lemm.ee · 1 month ago

      I like the sentiment of the article; however, this quote really rubs me the wrong way:

      I’m not suggesting we abandon AI tools—that ship has sailed.

      Why would that ship have sailed? No one is forcing you to use an LLM. If, as the article supposes, using an LLM is detrimental, and it’s possible to start having days where you don’t use an LLM, then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?

      I personally don’t interact with any LLMs, neither at work nor at home, and I don’t have any issue getting work done. Yeah, there was a decently long ramp-up period, maybe about six months, when I started on my current project at work where it was more learning than doing; but now I feel like I know the codebase well enough to approach any problem I come up against. I’ve even debugged USB driver stuff, and, while it took a lot of research and reading USB specs, I was able to figure it out without any input from an LLM.

      Maybe it’s just because I’ve never bought into the hype, but I don’t see why people hold LLMs in such high regard. I’m of the opinion that an LLM has potential only as a truly last resort, and even then it will likely not be useful.

      • gamermanh@lemmy.dbzer0.com · 1 month ago

        Why would that ship have sailed?

        Because the tools are here and they’re not going anywhere.

        then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?

        The actually useful shit LLMs can do. Their point is that relying on an LLM for the majority of your work hurts you; that doesn’t make it an invalid tool in moderation.

        You seem to think of an LLM only as something you can ask questions; that’s one of the things they’re worst at, and far from the only thing they can do.

          • merc@sh.itjust.works · 1 month ago

          Because the tools are here and they’re not going anywhere

          Swiss army knives have had awls for ages. I’ve never used one. The fact that the tool exists doesn’t mean that anybody has to use it.

          The actually useful shit LLMs can do

          Which is?

            • MrLLM@ani.social · 1 month ago

            The actually useful shit LLMs can do

            Which is?

            Waste energy and pollute the environment? I can relate… not useful, tho

    • Guttural@jlai.lu · 1 month ago

      This guy’s solution to becoming crappier over time is “I’ll drink every day, but abstain one day a week”.

      I’m not convinced that “that ship has sailed” as he puts it.

    • Mnemnosyne@sh.itjust.works · 1 month ago

      “Every time we use a lever to lift a stone, we’re trading long term strength for short term productivity. We’re optimizing for today’s pyramid at the cost of tomorrow’s ability.”

  • SkunkWorkz@lemmy.world · 1 month ago

    Yeah, fake. There’s no way you can get 90%+ using ChatGPT without understanding code. LLMs barf out so much nonsense when it comes to code; you have to correct them frequently to make them spit out working code.

    • AeonFelis@lemmy.world · 1 month ago

      1. Ask ChatGPT for a solution.
      2. Try to run the solution. It doesn’t work.
      3. Post the solution online as something you wrote all on your own, and ask people what’s wrong with it.
      4. Copy-paste the fixed-by-actual-human solution from the replies.
    • Artyom@lemm.ee · 1 month ago

      If we’re talking about freshman CS 101, where every assignment is the same year over year and it’s all machine-graded, then yes, 90% is definitely possible, because an LLM can essentially act as a database of all problems and all solutions. A grad student TA could probably see through his “explanations”, but they’re tired from their endless stack of work, so why bother?

      If we’re talking about a 400-level CS class, this kid’s screwed; even someone who’s mastered the fundamentals will struggle with advanced algorithms and with reconciling math concepts with hands-on-keyboard software.

  • kabi@lemm.ee · 1 month ago

    If it’s the first course where they use Java, then one could easily learn it in 21 hours, with time for a full night’s sleep. Unless there’s no code completion and you have to write imports by hand. Then, you’re fucked.

    • rockerface 🇺🇦@lemm.ee · 1 month ago

      If there’s no code completion, I can tell you that even people who’ve been coding as a job for years aren’t going to write it correctly from memory. We’re not being paid to memorize this shit; we’re being paid to solve problems optimally.
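
      To illustrate (a made-up homework-sized snippet, not from the article or this thread): even a trivial Java exercise opens with a wall of fully qualified imports that code completion normally writes for you:

      ```java
      // Hypothetical example: everything above the class body is import
      // boilerplate that an IDE auto-inserts and nobody types from memory.
      import java.util.ArrayList;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;
      import java.util.Scanner;

      public class WordCount {
          public static void main(String[] args) {
              // Read whitespace-separated words from stdin.
              Scanner in = new Scanner(System.in);
              List<String> words = new ArrayList<>();
              while (in.hasNext()) {
                  words.add(in.next());
              }

              // Count occurrences of each word.
              Map<String, Integer> counts = new HashMap<>();
              for (String w : words) {
                  counts.merge(w, 1, Integer::sum);
              }
              System.out.println(counts);
          }
      }
      ```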