The Internet Watch Foundation has found a manual on the dark web encouraging criminals to use software tools that remove clothing from images. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said.

  • @[email protected]
    link
    fedilink
    English
    5
    edit-2
    7 months ago

    “There are no seatbelts. It’s either cars or only public transport.”

    So you’re essentially (and falsely) asserting that there is no way to regulate AI without eliminating it completely. Do you understand how insane and reckless that sounds?

    Should AI be able to give instructions on building a bomb as well because to not do so “sTiFlEs iNnOvAtIoN”?

    If people are going to train AI, they have an obligation to ensure it’s not producing harm, and there should be consequences for those who design and train AI in a reckless or harmful way.

    Yes, you should be restricted from creating a child porn generator.

    • @[email protected]
      link
      fedilink
      English
      57 months ago

      I think the question is: should we have designed the internet in such a way that it was impossible to find bomb plans on it? And to be honest, I don’t think the internet would be what it is if that level of filtering and censorship were possible. Child porn is reprehensible in any form. To me, it makes more sense to blame the moron with the hammer than to blame the hammer.

    • @[email protected]
      link
      fedilink
      English
      3
      edit-2
      7 months ago

      What you are asking for is equivalent to stopping people from writing literotica about children using Word.

      Nobody is advocating for child literotica or defending it, but most people understand that it would take draconian measures to stop it. Word would have to be entirely online, and everything written would have to pass through a filter to verify it isn’t illegal.

      By their very nature, generative models make it very difficult to remove such things, although there is one solution I can think of: take children out of the models entirely.

      The problem is that this isn’t the solution being proposed; sadly, all of the current legislative proposals are meant to do one thing, and that is to create and cement a monopoly around AI.

      I’m ready to tackle all issues involving AI, but the main current issue is a handful of companies trying to rip it out of our hands and playing on people’s emotions to do so. Once that’s done, we can take care of the 0.01% of users who are generating CP.