• bia@lemmy.world · 14 days ago

    Not sure how to interpret this. The use of any tool can be for good or bad.

    If the quality of the game is increased by the use of AI, I’m all for it. If it’s used to generate a generic mess, it’s probably not going to be interesting enough for me to notice its existence.

    If they mean that they don’t use AI to generate art and voice-over, I guess it can be good for a medium-to-large game. But if using AI means the game gets made at all, that’s better, no?

    • deur@feddit.nl · 14 days ago

      People want pieces of art made by actual humans. Not garbage from the confident statistics black box.

      • RampantParanoia2365@lemmy.world · 13 days ago

        Honest question: are things like trees, rocks, logs in a huge world like a modern RPG all placed by hand, or does it use AI to fill it out?

        • finitebanjo@lemmy.world · 13 days ago

          Not AI, but certainly a semirandom function. Then they go through and clean it up by hand.
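
          A minimal sketch of what such a seeded scatter pass can look like (all names hypothetical; real pipelines add rules for slope, biome, and exclusion zones):

          ```python
          import random

          # Hypothetical scatter pass: deterministic placement from a seed,
          # with a minimum-spacing rule standing in for real placement logic.
          def scatter_props(seed, width, height, count, min_spacing=2.0):
              rng = random.Random(seed)  # same seed -> same forest every run
              placed = []
              attempts = 0
              while len(placed) < count and attempts < count * 100:
                  attempts += 1
                  x, y = rng.uniform(0, width), rng.uniform(0, height)
                  # reject candidates that crowd an already-placed prop
                  if all((x - px) ** 2 + (y - py) ** 2 >= min_spacing ** 2
                         for px, py in placed):
                      placed.append((x, y))
              return placed  # artists then hand-fix spots that look wrong

          trees = scatter_props(seed=42, width=100.0, height=100.0, count=50)
          ```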

          • SchmidtGenetics@lemmy.world · 13 days ago

            Ah, so this kind of tool is allowable, but not another? Pretty hypocritical thinking there.

            A tool is a tool; any tool can be abused.

      • Lumiluz@slrpnk.net · 13 days ago

        What if they use it as part of the art tho?

        Like a horror game that uses an AI to just slightly tweak an image of the paintings in a haunted building, continuously, every time you look past them, to look just 1% creepier?
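
        A rough sketch of just that mechanic, i.e. noticing the player looked past a painting and escalating to a creepier variant; how the variants are produced (pre-baked art or a generative model) is left open, and every name here is hypothetical:

        ```python
        import numpy as np

        # Hypothetical "looked past it" mechanic: when a painting leaves the
        # player's view cone, bump its creep level so a slightly warped
        # variant is shown the next time it is seen.
        def in_view(cam_pos, cam_forward, obj_pos, fov_cos=0.7):
            to_obj = np.asarray(obj_pos, float) - np.asarray(cam_pos, float)
            to_obj /= np.linalg.norm(to_obj)
            return float(np.dot(cam_forward, to_obj)) > fov_cos  # in cone?

        class Painting:
            def __init__(self, pos, variants):
                self.pos = pos            # world position
                self.variants = variants  # textures, least -> most creepy
                self.level = 0
                self.seen = False

            def update(self, cam_pos, cam_forward):
                visible = in_view(cam_pos, cam_forward, self.pos)
                if self.seen and not visible:
                    # the player just looked away: escalate one notch
                    self.level = min(self.level + 1, len(self.variants) - 1)
                self.seen = visible
                return self.variants[self.level]  # texture to draw this frame

        p = Painting(pos=[5.0, 0.0, 0.0],
                     variants=["calm.png", "off.png", "wrong.png"])
        texture = p.update(cam_pos=[0.0, 0.0, 0.0],
                           cam_forward=np.array([1.0, 0.0, 0.0]))
        ```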

        • mke@programming.dev · 12 days ago

          That’s an interesting enough idea in theory, so here’s my take on it, in case you want one.

          Yes, it sounds magical, but:

          • AI sucks at “make it more X”. It doesn’t understand scary, just like it doesn’t understand anything at all, so you’ll get worse crops of the training data, not meaningful transformations.
          • It’s prohibitively expensive and unfeasible for the majority of consumer hardware.
          • Even if it gets a thousand times cheaper and better at its job, is GenAI really the best way to do this?
          • Is it the only one? Are alternatives also built on exploitation? If they aren’t, I think you should reconsider.
          • Lumiluz@slrpnk.net · 13 days ago

            • Ok, I know the researching ability of people has decreased greatly over the years, but using “knowyourmeme” as a source? Really?

            • You can now run optimized open-source diffusion models on an iPhone, and it’s been possible for years. I use that as an example because yes, there are models that can easily run on an Nvidia 1060 these days. Those models are more than enough to handle incremental changes to an image in-game (see the sketch after this list).

            • It already has gotten far cheaper, and has been for a while, as demonstrated by it running on an iPhone. And yes, it’s probably the best way to get an uncanny-valley effect in certain paintings in a horror game, as the alternatives would be:

              • spending many hours manually making hundreds of incremental changes to all the paintings yourself (and there will be a limit to how much they can warp, and this assumes you even have the art skills), or

              • hiring someone to do what I just mentioned (assumes you have a decent amount of money), which is still limited, of course.

            • I’ll call an open-source model exploitation the day someone can accurately generate an exact work it was trained on, not within 1 but at least within 10 generations. I have looked into this myself, unlike seemingly most people on the internet. Last I checked, the closest was a 90-something-percent-similar image, after an algorithm modified the prompt over thousands of generations. I can find the research paper myself if you want, but there may be newer research out there.
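
            As a concrete illustration of the incremental img2img idea (a sketch using Hugging Face’s open-source diffusers library, run offline; the checkpoint, prompt, and strength are illustrative assumptions, not anything from a shipped game):

            ```python
            import torch
            from diffusers import StableDiffusionImg2ImgPipeline
            from PIL import Image

            # Sketch: feed a painting back through img2img at low strength,
            # so each pass drifts only slightly from the previous image.
            pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
                "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
                torch_dtype=torch.float16,
            ).to("cuda")

            image = Image.open("painting.png").convert("RGB").resize((512, 512))
            for step in range(10):
                # low strength keeps most of the original; raise for bigger warps
                image = pipe(
                    prompt="an old oil portrait, subtly unsettling",
                    image=image,
                    strength=0.15,
                    guidance_scale=7.0,
                ).images[0]
                image.save(f"painting_step{step}.png")
            ```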

            • mke@programming.dev · 9 days ago

              You can now run optimized open source diffusion models on an iPhone, and it’s been possible for years.

              Games aren’t background processes. Even today, triple-A titles still sometimes come out as unoptimized hot garbage. Do you genuinely think it’s easy to pile a diffusion model on top with negligible effect? Also, will you pack an entire model into your game just for one instance?

              I use that as an example because yes, there’s models that can easily run on an Nvidia 1060 these days. Those models are more than enough to handle incremental changes to an image in-game

              Look at the share of people using a 1050 or lower card. Or let’s talk about the AMD and Intel issues. These people aren’t an insignificant portion. Hell, nearly 15% don’t even have 16 GB of RAM.

              it’s probably the best way to get an uncanny valley effect in … a horror game, as the alternatives would be:

              • spending many hours manually making hundreds of incremental changes
              • hiring someone to do what I just mentioned

              What are you talking about? You’re satisfied with a diffusion model’s output, but won’t be with any other method except excruciating manual labor? Your standards are all over the place—or rather, you don’t have any. And let’s keep it real: most won’t give a shit if your game shows them either 10 or 100 slightly worse versions of the same image.

              Procedural generation has been a thing for decades. Indie devs have been making do with nearly nonexistent art skills and less sophisticated tech for just as long. I feel like you don’t actually care about the problem space, you just want to shove AI into the solution.

              I’ll call an open source model exploitation the day someone can accurately generate an exact work it was trained on not within 1, but at least within 10 generations.

              Are you referring to the OSAID? The infamously broken definition that exists to serve companies? You don’t understand what exploitation here means. “Can it regurgitate exact training input” is not the only question to ask, and not the bar. Knowing your work was used without consent to train computers to replace people’s livelihoods is extremely violating. Talk to artists.

              I know the researching ability of people has decreased greatly over the years, but using “knowyourmeme” as a source? Really?

              I tried to use an accessible and easily understandable example. Fuck off. Go do your own “research”: open those beloved diffusion models, make your scary, then scarier, images, and try asking people what they think of the results. Do it a hundred times, since that’s your only excuse as to why you need AI. No cherry-picking; you won’t be able to choose what your Rube Goldberg painting will look like on other people’s PCs.

  • Skullgrid@lemmy.world · 14 days ago

    This is stupid. There are SO many indie games using procedural generation, which is fucking generative AI. It’s in a shitload of them, from Spelunky to Darkest Dungeon 2.

    • parlaptie@feddit.org · 13 days ago

      Procedural generation is generative, but it ain’t AI. It especially has nothing in common with the exploitative practices of genAI training.
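
      The difference is easy to see in code: classic procedural generation is fixed rules plus a seeded random number generator, with no trained weights anywhere. A toy sketch:

      ```python
      import random

      # Toy procgen: a cave map from pure rules plus a seed. No training
      # data, no learned weights; the same seed always yields the same map.
      def generate_cave(seed, width=40, height=10, wall_chance=0.45):
          rng = random.Random(seed)
          return [
              ["#" if rng.random() < wall_chance else "." for _ in range(width)]
              for _ in range(height)
          ]

      for row in generate_cave(seed=7):
          print("".join(row))
      ```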

      • Lumiluz@slrpnk.net · 13 days ago

        “AI” is just very advanced procedural generation. There have been games that used image diffusion in the past too, just on a far smaller and more limited scale (such as a single creature, like the Pokémon with the spinning eyes).

        • jsomae@lemmy.ml · 13 days ago

          By this logic, literally any code is genAI.

          Has a branch statement? It makes decisions. Displays something on the screen, even by stdout? Generated content.

  • Lem Jukes@lemm.ee · 13 days ago

    This feels discouraging as someone who struggled with learning programming for a very long time; only with the aid of Copilot have I finally crossed the hurdles I was facing and felt like I was actually learning and progressing again.

    Yes, I’m still interacting with, manually adjusting, and even writing sections of code. But a lot of what Copilot does for me is interpret my natural-language understanding of how I want to manipulate the data and translate it into actual code, which I then work with and combine with the rest of the project.

    But I’ve stopped looking to join any game jams, because even when they don’t have an explicit ban on all AI, the sentiment I get is that people feel like it’s cheating and look down on someone in my situation. I get that submitting AI slop wholesale is just garbage. But it feels like putting these blanket ‘no AI content’ stamps and badges on things excludes a lot of people.

    Edit:

    Is this slop? https://lemjukes.itch.io/ascii-farmer-alpha https://github.com/LemJukes/ASCII-Farmer

    Like, I know it isn’t good code, but I’m entirely self-taught and it seems to work (and, more importantly, I mostly understand how it works), so what’s the fucking difference? How am I supposed to learn without iterating? If anyone human wants to look at my code and tell me why it’s shit, that’d actually be really helpful and I’d genuinely be thankful.

    *except whoever actually said that in the comment replies. I blocked you, so I won’t see any more from you anyway. Also: piss off.

      • Lumiluz@slrpnk.net · 13 days ago

        Same vibes as “if you learned to draw with an iPad then you didn’t actually learn to draw”.

        Or, in my case, I’m old enough to remember “computer art isn’t real animation/art”, and also the criticism aimed at Photoshop.

        And plenty of people criticized Andy Warhol before then, too.

        Go back further in history and you can read criticisms of using typewriters over handwriting as well.

        • endeavor@sopuli.xyz · 13 days ago

          As an artist who is learning to code: it’s different. It is night and day whether you have access to undo and HSV adjustment, but you still must nail color, composition, values, proportion, perspective, etc. Especially since a ton of shortcuts are also available to traditional artists, who can just paint over a projection. Besides saving tons of money and making daily practice easier, about all digital art adds is more noob traps, like fancy brushes, and the lack of confidence that comes from relying on undo and tools like it. I transferred to traditional oil paints just fine, because the fundamentals are what separate the trash from the okay and above.

          It is night and day whether you ask AI how to make a multiplication table versus applying what you learned previously to work out the logic behind making it yourself (see the sketch below). Using AI wrong in programming means you don’t learn the fundamentals, i.e., you don’t learn to program. Comparing using AI to learn programming with learning to paint on an iPad is wrong. Comparing using AI to learn programming with using AI to make art for you is more apt.
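
          For what it’s worth, the multiplication table is a good concrete test, since the point is deriving the two nested loops yourself rather than pasting them. A minimal version:

          ```python
          # The "fundamentals" version: two nested loops, derived from what
          # a table is (rows x columns), not copied from a chat window.
          def multiplication_table(n):
              for row in range(1, n + 1):
                  print(" ".join(f"{row * col:4d}" for col in range(1, n + 1)))

          multiplication_table(10)
          ```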

        • finitebanjo@lemmy.world · 13 days ago

          None of your examples are even close to a comparison with AI, which steals from people to generate approximate nonsense while costing massive amounts of electricity.

          • Lumiluz@slrpnk.net · 13 days ago

            Have you ever looked at the file size of something like Stable Diffusion?

            Considering the data it’s trained on, which do you think it is:

            A) 3 petabytes
            B) 500 terabytes
            C) 900 gigabytes
            D) 100 gigabytes

            Second, what’s the electrical cost of generating a single image using Flux vs. 3 minutes of Baldur’s Gate or similar on max settings?

            Surely you must have some idea on these numbers and aren’t just parroting things you don’t understand.
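
            For reference, the back-of-envelope math behind that quiz, assuming the commonly cited figures (Stable Diffusion 1.5’s fp16 checkpoint is roughly 2 GB; LAION-2B holds about 2.3 billion image-text pairs; both numbers are approximations):

            ```python
            # How many bytes of model weights exist per training image?
            checkpoint_bytes = 2e9   # ~2 GB: Stable Diffusion 1.5 fp16 weights
            training_images = 2.3e9  # ~2.3 billion: LAION-2B image-text pairs

            print(checkpoint_bytes / training_images)  # ~0.87 bytes per image
            ```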

            • finitebanjo@lemmy.world · 13 days ago

              What a fucking curveball joke of a question; you take a nearly impossible-to-quantify comparison and ask if it’s equivalent?

              Gaming:

              A high scenario electricity consumption figure of around 27 TWh, and a low scenario figure of 14.7 TWh

              North American gaming market is about 7% of the global total

              then that gives us a very, very rough figure of about 210-385 TWh per annum of global electricity used by gamers.

              AI:

              The rapid growth of AI and the investments into the underlying AI infrastructure have significantly intensified the power demands of data centers. Globally, data centers consumed an estimated 240–340 TWh of electricity in 2022—approximately 1% to 1.3% of global electricity use, according to the International Energy Agency (IEA). In the early 2010s, data center energy footprints grew at a relatively moderate pace, thanks to efficiency gains and the shift toward hyperscale facilities, which are more efficient than smaller server rooms.

              That stable growth pattern has given way to explosive demand. The IEA projects that global data center electricity consumption could double between 2022 and 2026. Similarly, IDC forecasts that surging AI workloads will drive a massive increase in data center capacity and power usage, with global electricity consumption from data centers projected to double to 857 TWh between 2023 and 2028. Purpose-built AI infrastructure is at the core of this growth, with IDC estimating that AI data center capacity will expand at a 40.5% CAGR through 2027.

              Let’s just say we’re at the halfway point and it’s 600 TWh per annum, compared to 210-385 for gamers.

              So roughly double, yeah.

              And to reiterate, people generate thousands of frames in a session of gaming, vs a handful of images or maybe some emails in a session of AI.
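
              Worked out from the figures quoted above (the halfway point is the commenter’s own simplification):

              ```python
              # Scaling the quoted US gaming figures to a global estimate,
              # then comparing against the "halfway" data-center projection.
              us_low_twh, us_high_twh = 14.7, 27.0  # quoted scenario figures
              na_share = 0.07                       # ~7% of the global market

              print(us_low_twh / na_share)   # ~210 TWh global, low scenario
              print(us_high_twh / na_share)  # ~385 TWh global, high scenario

              dc_2022_twh, dc_2028_twh = 340.0, 857.0  # quoted estimates
              print((dc_2022_twh + dc_2028_twh) / 2)   # ~600 TWh midpoint
              ```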

    • Demigodrick@lemmy.zip · 14 days ago

      FWIW I agree with you. The people who say they don’t support these tools come across as purists or virtue signallers.

      I would agree with not having AI art* or AI music and sounds; in the games I’ve played that have it, it sounds so out of place.

      However, support for making coding more accessible through a tool shouldn’t be frowned upon. I wonder if people felt the same way when C was released and they thought everyone should be an assembly programmer.

      The irony is that most programmers were just googling and getting answers from Stack Overflow; now they don’t even need to Google.

      *unless the aim is procedurally generated games, I guess, but if they’re using assets, I get not using AI-generated ones.

      • mke@programming.dev · 13 days ago

        The people who say they don’t support these tools come across as purists or virtue signallers.

        It is now “purist” to protest against the usage of tools that by and large steal from the work of countless unpaid, uncredited, unconsenting artists, writers, and programmers. It is virtue signaling to say I don’t support OpenAI or their shitty capital chasing pig-brethren. It’s fucking “organic labelling” to want to support like-minded people instead of big tech.

        Y’all are ridiculous. The more of this I see, the more radicalized I get. Cool tech, yes, I admit! But wow, you just want to sweep all those pesky little ethical issues aside because… it makes you more productive? Shit, it’s like you’re competing with Altman on the unlikeability ranking.

        • SchmidtGenetics@lemmy.world · 12 days ago

          These same discussions happened with Photoshop and “brush tools”: why were those acceptable ways to make the work less labor-intensive, but this isn’t?

          It’s more hypocrisy than purism, as you’ve so nicely pointed out.

          • mke@programming.dev · 12 days ago

            These same discussions happened with Photoshop and “brush tools”: why were those acceptable ways to make the work less labor-intensive, but this isn’t?

            You’re missing the point. “This makes things easier” isn’t the problem; it’s more along the lines of “this is only possible by stealing the works of countless people, it will attempt to obviate their jobs, and it will make billionaires even richer.” People aren’t mad you want to work less; they’re mad you’ll make things worse and won’t even bother to grasp how.

            It’s more hypocrisy over purism, as you’ve so nicely pointed out.

            Comparing GenAI to brush tools is extremely disingenuous, talk about hypocrisy.

    • otp@sh.itjust.works · 13 days ago

      Back in the day, people hated IntelliSense/autocomplete.

      And back in the older day, people hated IDEs for coding.

      And back in the even older day, people hated computers for games.

      There’ll always be people who hate new technology, especially if it makes something easier that they used to have to do “the hard way”.