• @[email protected]
    83 • 2 months ago

    Why would you ask a bot to generate a stereotypical image and then be surprised when it generates a stereotypical image? If you give it a simplistic prompt, it will come up with a simplistic response.

    • @[email protected]
      6 • 2 months ago

      So the LLM answers what’s relevant according to stereotypes instead of what’s relevant… in reality?

      • @[email protected]
        23 • edited • 2 months ago

        It just means there’s a bias in the data that is probably being amplified during training.

        It answers what’s relevant according to its training.
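A toy sketch of the amplification point above, with made-up numbers and greedy decoding standing in for the model's sampling: even a moderate skew in the training data can become a total skew in the output, because picking the most likely completion every time erases the minority entirely.

```python
from collections import Counter

# Hypothetical toy data: 70% of training examples pair the prompt
# with the stereotypical completion, 30% with something else.
training_data = ["stereotype"] * 70 + ["other"] * 30

counts = Counter(training_data)
data_bias = counts["stereotype"] / len(training_data)  # 0.7

# Greedy (argmax) decoding always emits the most frequent option,
# so a 70/30 skew in the data becomes a 100/0 skew in the output.
greedy_output = counts.most_common(1)[0][0]
output_bias = 1.0 if greedy_output == "stereotype" else 0.0

print(data_bias, output_bias)  # 0.7 1.0
```

Real image models sample rather than argmax, so the effect is softer in practice, but the direction is the same: the output distribution leans harder toward the majority of the training data than the data itself does.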