• @[email protected]
    83
    2 months ago

    Why would you ask a bot to generate a stereotypical image and then be surprised it generates a stereotypical image? If you give it a simplistic prompt, it will come up with a simplistic response.

    • @[email protected]
      6
      2 months ago

      So the LLM answers what’s relevant according to stereotypes instead of what’s relevant… in reality?

      • @[email protected]
        23
        2 months ago

        It just means there’s a bias in the data that is probably being amplified during training.

        It answers what’s relevant according to its training.

  • @[email protected]
    41
    2 months ago

    Kinda makes sense though. I’d expect images that are actually labelled as “an Indian person” to overrepresent people wearing this kind of clothing. An image of an Indian person doing something mundane in more generic clothing is probably more often than not going to be labelled as “a person doing X” rather than “an Indian person doing X”. Not sure why these authors are so surprised by this.

  • @[email protected]
    22
    2 months ago

    Articles like this kill me because they nudge that it’s kinda sorta racist to draw images like the ones they show, which look exactly like the cover of half the Bollywood movies ever made.

    Yes, if you want to get a certain type of person in your image you need to choose descriptive words. Imagine going to an artist and saying ‘I need a picture, and almost nothing matters besides the fact they look Indian’. Unless they’re bad at their job, they’ll give you a Bollywood movie cover with a guy from Rajasthan in a turban - just like their official tourist website does.

    Ask for a businessman in Delhi or an Urdu shopkeeper with an Elvis quiff if that’s what you want.

    • @[email protected]
      4
      2 months ago

      the ones they show, which look exactly like the cover of half the Bollywood movies ever made.

      Almost certainly how they’re building up the data. But that’s more a consequence of tagging. Same reason you’ll get Marvel’s Iron Man when you ask an AI generator for “Draw me an iron man”. Not as though there’s a shortage of metallic-looking people in commercial media, but by keyword (and thanks to aggressive trademark enforcement) those terms are going to pull back a superabundance of a single common image.

      imagine going to an artist and saying ‘I need a picture, and almost nothing matters besides the fact they look Indian’

      I mean, the first thing that pops into my head is Mahatma Gandhi, and he wasn’t typically in a turban. But he’s going to be tagged as “Gandhi”, not “Indian”. You’re also very unlikely to get a young Gandhi, as there are far more pictures of him later in life.

      Ask for a businessman in Delhi or an Urdu shopkeeper with an Elvis quiff if that’s what you want.

      I remember when Google got into a whole bunch of trouble by deliberately engineering their prompts to be race blind. And, consequently, you could ask for “Picture of the Founding Fathers” or “Picture of Vikings” and get a variety of skin tones back.

      So I don’t think this is foolproof either. It’s more just how the engine generating the image is tuned. You could very easily get a bunch of English bankers when querying for “businessman in Delhi”, depending on where and how the backlog of images is sourced. And “Urdu shopkeeper” will inevitably give you a bunch of convenience stores and open-air stalls in the background of every shot.

  • @[email protected]
    18
    2 months ago

    There are a lot of men in India who wear a turban, but the ratio is not nearly as high as Meta AI’s tool would suggest. In India’s capital, Delhi, you would see one in 15 men wearing a turban at most.

    Probably because most Sikhs are from the Punjab region?

      • @[email protected]
        10
        2 months ago

        I’m guessing this relates to training data. Most training data that contains skin cancer is probably coming from medical sources and would have a ruler measuring the size of the melanoma, etc. So if you ask it to generate an image, it’s almost always going to contain a ruler. Depending on the training data, I could see it generating the opposite as well: ask for a ruler and it includes skin cancer.

  • 🇰 🔵 🇱 🇦 🇳 🇦 🇰 ℹ️
    10
    2 months ago

    Would they be equally surprised to see a majority of subjects in baggy jeans with chain wallets if they prompted it to generate an image of a teen in the early 2000s? 🤨

  • Possibly linux
    7
    2 months ago

    [image]

    I’m not sure how AI could possibly be racist. (Image is of a supposed Native American, but my point still stands.)

  • Haus
    4
    2 months ago

    Whenever I try, I get Ravi Bhatia screaming “How can she slap?!”