• SaharaMaleikuhm@feddit.org · +57/−3 · 7 days ago

    Had it write a simple shader yesterday because I have no idea how those work. It explained how to use the mix and step functions to optimize for GPUs, then promptly added some errors I had to find myself. Actually not that bad, because after fixing them I now understand the code. Very educational.
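
    (For the curious: the mix/step trick the model described is about replacing branches with arithmetic, since GPU shader lanes run in lockstep and branches are costly. A minimal sketch in C of that idiom, using hand-rolled analogues of the GLSL built-ins; the names and values are illustrative, not the commenter's actual shader:)

    ```c
    #include <stdio.h>

    /* GLSL-style step(edge, x): 0.0 below the edge, 1.0 at or above it */
    static float step_f(float edge, float x) { return x < edge ? 0.0f : 1.0f; }

    /* GLSL-style mix(a, b, t): linear blend between a and b by factor t */
    static float mix_f(float a, float b, float t) { return a + t * (b - a); }

    int main(void) {
        /* Branchless select: instead of `if (x >= 0.5f) c = b; else c = a;`,
           shaders combine step() and mix() so every lane executes the same code. */
        float x = 0.7f;
        float c = mix_f(10.0f, 20.0f, step_f(0.5f, x));
        printf("%.1f\n", c); /* prints 20.0 since x is at or above the edge */
        return 0;
    }
    ```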

    • TheOakTree@lemm.ee · +29 · 7 days ago

      This is my experience using it for electrical engineering and programming. It will give me 80% of the answer, and the remaining 20% is hidden errors. Turns out the best way to learn math from GPT is to ask it a question you know the answer to (but not the process). Then, reverse-engineer the process and determine what mistakes were made and why they impact the result.

      Alternatively, just refer to existing materials in the textbook and online. Then you learn it right the first time.

      • Cataphract@lemmy.ml · +5/−1 · 6 days ago

        Thank you for that last sentence, because I thought I was going crazy reading through these responses.

          • Cataphract@lemmy.ml · +1 · 4 days ago

            OK, I think I've finally figured out my view on this. I was worried I was being a grumpy old man just yelling at the AI (I still probably am, but at least I can articulate why I feel this is a negative reply to my concerns).

            It’s not reproducible.

            I personally don’t believe that prompting an AI and then “troubleshooting” its output is the best educational tool for the masses to promote to each other. It works for some individuals, but as you can see, the results will always vary with time.

            There are so many great educational tools that emphasize the “doing” part instead of reading. You don’t need to prompt an AI and then try to fix all the horrible output when there is always a real chance that it gave you an impossible answer you will never be able to fix.

            I get that some people do it, some people succeed, and some people are maybe so lonely that this interaction is actually preferable, since it seems like some weird sort of collaboration. The reality is that the AI was trained unethically and has so many moral and ethical repercussions that just finding a decent educator or forum/Discord to actually engage with is orders of magnitude better for society and for your own mental processes.

    • Klear@sh.itjust.works · +16 · 7 days ago

      Shaders are black magic so understandable. However, they’re worth learning precisely because they are black magic. Makes you feel incredibly powerful once you start understanding them.

    • irelephant [he/him]🍭@lemm.ee · +4 · 6 days ago

      I used it yesterday because I couldn’t get Mastodon’s version of HTTP signing working. It spat out a shell script that worked, which is more than my own attempts did.

  • Aggravationstation@feddit.uk · +49 · 7 days ago

    I don’t need ChatGPT to fuck my wife, but if I had one, and she and ChatGPT were into it, then I would like to watch ChatGPT fuck my wife.

    • Dragonstaff@leminal.space · +19 · 7 days ago

      It’s such a weird question. Why would I need ChatGPT to fuck my wife when we have the Dildoninator 9000 with Vac-u-loc attachments and Kung Fu grip?

  • angrystego@lemmy.world · +22/−11 · 6 days ago

    Oh poor baby, do you need the dishwasher to wash your dishes? Do you need the washing machine to wash your clothes? You can’t do it?

  • max@lemmy.blahaj.zone · +19 · 7 days ago

    To me the worst thing is, my college uses AI to make the tests. I can tell they’re AI-made because of the multiple correct options, and in a group the teacher said something like “why lose an hour making it when AI can make it in seconds?”

    I like to use AI to “convert” citations, like APA to ABNT; I’m too lazy to do it myself, and it’s just moving the position of the words, so yeah.

  • Lucky_777@lemmy.world · +7/−1 · 6 days ago

    ChatGPT is learning from my fucking. All males will be amazing at oral sex and learning to last “almost too long”.

    Too bad everyone will be fucking robots by then

    • chetradley@lemm.ee · +9/−1 · 6 days ago

      The ownership, energy cost, reliability of responses, and the ethics of scraping and selling other people’s work, but yeah.

  • Vanilla_PuddinFudge@infosec.pub · +6/−3 · 6 days ago

    ITT

    I used AI recently and it got some details wrong, so it is entirely useless for anyone, anywhere, under any circumstances, even though it’s less than six years old as a technology!

    crosses arms

    You guys are like newspaper men in the 1940s raging about TV being an experimental failure.

  • sheetzoos@lemmy.world · +32/−28 · edited · 7 days ago

    People are constantly getting upset about new technologies. It’s a good thing they’re too inept to stop these technologies.

    • Wren@lemmy.world · +32/−5 · 7 days ago

      People are also always using one example to stand in for another, also known as a false equivalence.

      There is no rule that states all technology must be considered safe.

      • sheetzoos@lemmy.world · +15/−6 · 7 days ago

        Every technology is a tool - both safe and unsafe depending on the user.

        Nuclear technology can be used to kill every human on earth. It can also be used to provide power and warmth for every human.

        AI is no different. It can be used for good or evil. It all depends on the people. Vilifying the tool itself is a fool’s argument that has been used since the days of the printing press.

        • FearMeAndDecay@literature.cafe · +7 · 7 days ago

          My big problems with AI are the climate cost and the unethical way that a lot of these models have been trained. If they can fix those, then yeah I don’t have an issue with people using it when it’s appropriate but currently lots of people are using it out of sheer laziness. If corpos are just using it to badly replace workers and kids are using it instead of learning how to write a fucking paragraph properly, then yeah, I’ll hate on AI

        • FrChazzz@lemm.ee · +7/−1 · 7 days ago

          Been this way since the harnessing of fire or the building of the wheel.

        • wolframhydroxide@sh.itjust.works · +11/−6 · edited · 7 days ago

          While this may be true for technologies, tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb. In the rifle, the technology of the mechanisms in the gun is the same precision-milled clockwork engineering that is used for worldwide production automation. The technology of the harnessing of a nuclear chain reaction is the same, whether enriching uranium for a bomb or a power plant.

          HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose. In these cases, that SOLE purpose is to, in an incredibly short period of time, with little effort or skill, enable the user to end the lives of as many people as possible. You can never use a bomb as a power plant, nor a rifle to alleviate supply shortages (except, perhaps, by a very direct reduction in demand). Here, our problem has never been with the technology of Artificial Neural Nets, which have been around for decades. It isn’t even with “AI” (note that no extant “AI” is actually “intelligent”)! No, our problem is with the tools. These tools are made with purpose and intent. Intent to defraud, intent to steal credit for the works of others, and the purpose of allowing corporations to save money on coding, staffing, and accountability for their actions, the purpose of having a black box a CEO can point to, shrug their shoulders, and say “what am I supposed to do? The AI agent told me to fire all of these people! Is it my fault that they were all <insert targetable group here>?!”

          These tools cannot be used to know things. They are probabilistic models. These tools cannot be used to think for you. They are Chinese Rooms. For you to imply that the designers of these models are blameless — when their AI agents misidentify black men as criminals in facial recognition software; when their training data breaks every copyright law on the fucking planet, only to allow corporations to deepfake away any actual human talent in existence; when the language models spew vitriol and raging misinformation with the slightest accidental prompting, and can be hard-limited to only allow propagandized slop to be produced, or tailored to the whims of whatever despot directs the trolls today; when everyone now has to question whether they are even talking to a real person, or just a dim reflection, echoing and aping humanity like some unseen monster in the woods — is irreconcilable with even an iota of critical thought. Consider more carefully when next you speak, for your corporate-apologist principles will only help you long enough for someone to train your beloved “tool” on you. May you be replaced quickly.

          • sheetzoos@lemmy.world · +5/−3 · 7 days ago

            You’ve made many incorrect assumptions and set up several strawman fallacies. Rather than try to converse with someone who is only looking to feed their confirmation bias, I’ll suggest you continue your learning by looking up the Dunning–Kruger effect.

            • erin (she/her)@lemmy.blahaj.zone · +1/−1 · 6 days ago

              Can you point out and explain each strawman in detail? It sounds more like someone made good analogies that counter your point and you buzzword vomited in response.

              • sheetzoos@lemmy.world · +5 · edited · 6 days ago

                Dissecting his wall of text would take longer than I’d like, but I would be happy to provide a few examples:

                1. I have “…corporate-apologist principles”.

                — Though wolfram claims to have read my post history, he seems to have completely missed my many posts hating on TSLA, robber barons, Reddit execs, etc. I completely agree with him that AI will be used for evil by corporate assholes, but I also believe it will be used for good (just like any other technology).

                2. “…tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb” “HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose”

                — Tools are neutral. They have more than one purpose. A nuclear bomb could be used to warm the atmosphere of another planet to make it habitable. Not to mention any weapon can be used to defend humanity, or to attack it. Tools might be designed with a specific purpose in mind, but they can always be used for multiple purposes.

                There are a ton of invalid assumptions about machine learning as well, but I’m not interested in wasting time on someone who believes they know everything.

                • erin (she/her)@lemmy.blahaj.zone · +1 · 6 days ago

                  I understand that you disagree with their points, but I’m more interested in where the strawman arguments are. I don’t see any, and I’d like to understand if I’m missing a clear fallacy due to my own biases or not.

            • wolframhydroxide@sh.itjust.works · +4/−4 · edited · 7 days ago

              EDIT: now I understand. After going through your comments, I can see that you just claim confirmation bias rather than actually having to support your own arguments. Ironic that you seem to show all of this erudition in your comments, but as soon as anyone questions your beliefs, you just resort to logical buzzwords. The literal definition of the bias you claim to find. Tragic. Blocked.

              • 𝓔𝓶𝓶𝓲𝓮@lemm.ee · +5/−1 · edited · 6 days ago

                Blocking an individual on Lemmy is actually quite pointless, as they can still reply to your comments and posts; you just will not know about it, while there can be whole pages of slander about you right under your nose.

                I’d say it’s by design to spread tankie propaganda unabated

                • wolframhydroxide@sh.itjust.works · +1/−4 · edited · 6 days ago

                  You know what? They can go ahead and slander me. Fine. Good for them. They’ve shown they aren’t interested in actual argument. I agree with your point about the whole slander thing, and maybe there is some sad little invective, “full of sound and fury, signifying nothing”, further belittling my intelligence to try to console themself. If other people read it and think “yeah that dude’s right”, then that’s their prerogative. I’ve made my case, and it seems the best they can come up with is projection and baseless accusation by buzzword. I need no further proof of their disingenuity.

                • blind3rdeye@lemm.ee · +3/−3 · 6 days ago

                  Blocking means that you don’t have to devote your time and thoughts to that person. That’s pretty valuable. And even if they decide they are going to attack you, not-responding is often a good strategy vs that kind of crap anyway - to avoid getting pulled into an endless bad-faith argument. (I’d still suggest not announcing that you’ve blocked them though. Just block and forget about it.)

        • blind3rdeye@lemm.ee · +7/−4 · 7 days ago

          Every tech can be safe and unsafe? I think you’ve oversimplified to the point of meaninglessness. Obviously some technologies are safer than others, and some are more useful than others, and some have overwhelming negative effects. Different tech can and should be discussed and considered on a case by case basis - not just some “every tech is good and bad” nonsense.

    • trashgirlfriend@lemmy.world · +22/−5 · 7 days ago

      Those fools do not realize that creating the torment nexus is just the same as inventing the wheel!

      I am very smart!

    • HalfSalesman@lemm.ee · +1 · edited · 6 days ago

      I would if I got to join. I’ve always wanted to do an Eiffel Tower.

      I’m bi and poly though.

      Though also I’m probably never going to get married.