A bipartisan group of senators introduced a new bill to make it easier to authenticate and detect artificial intelligence-generated content and protect journalists and artists from having their work gobbled up by AI models without their permission.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would direct the National Institute of Standards and Technology (NIST) to create standards and guidelines that help prove the origin of content and detect synthetic content, such as through watermarking. It also directs the agency to create security measures to prevent tampering, and it requires AI tools used for creative or journalistic content to let users attach information about the content’s origin and prohibits that information from being removed. Under the bill, such content also could not be used to train AI models.

Content owners, including broadcasters, artists, and newspapers, could sue companies they believe used their materials without permission or tampered with authentication markers. State attorneys general and the Federal Trade Commission could also enforce the bill, which its backers say prohibits anyone from “removing, disabling, or tampering with content provenance information” outside of an exception for some security research purposes.
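The bill leaves the technical mechanism to NIST, but a rough sketch of what tamper-evident provenance metadata can look like is below (a hypothetical toy format; real standards such as C2PA use asymmetric signatures and are far more involved):

```python
import hashlib
import hmac
import json

# Hypothetical shared key; a real provenance scheme would use the
# publisher's asymmetric signing key, not a symmetric secret.
SECRET = b"publisher-signing-key"

def attach_provenance(content: bytes, origin: str) -> dict:
    """Bundle an origin claim with a hash of the content, then MAC both."""
    manifest = {
        "origin": origin,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Return False if either the content or the manifest was altered."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed.get("content_sha256") == hashlib.sha256(content).hexdigest())
```

Stripping or editing the origin field, or swapping in different content, makes verification fail; that failure is the kind of “removing, disabling, or tampering” the enforcement provisions target.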

(A copy of the bill is in the article; here is the important part, imo:

Prohibits the use of “covered content” (digital representations of copyrighted works) with content provenance to either train an AI- /algorithm-based system or create synthetic content without the express, informed consent and adherence to the terms of use of such content, including compensation)

  • @[email protected]OP
    6 months ago

    Google already paid Reddit 6 million for their dataset (preemptively, since I’m guessing they are lobbying for laws like this); I didn’t get a dime. Who do you think this helps here?

    The change with this law would be that no one, big tech companies or open source, gets to use this content for free to train new models, right?

    My point is that this essentially ensures that ONLY big tech companies will get to use the content. Do you think they mind spending a few million if it gives them a monopoly? They actively want this.

    If the choice is between the platform I used getting paid for my content while I get nothing, and then having to pay OpenAI to use a tool built with my content, versus both the platform and me getting nothing while I get free AI, I will choose the latter.

    There are two scenarios, and in both, AI massively boosts productivity and huge layoffs happen. The difference is that in one scenario, the tools are priced low enough that it’s economical to replace 5 workers with them, but high enough that those same workers can’t afford them to compete with the business that just fired them. A situation where no company can remain competitive without paying OpenAI or Google 50k a month is a dystopian nightmare.

    Open source is the best way to make sure this doesn’t happen, and while these laws are the smallest of speed bumps for big tech companies, they are a literal wall for FOSS.

    The best solution would be to copyleft all models trained on public data; the second best would be to leave things as is. This isn’t a solution, it’s regulatory capture.

    • @[email protected]
      6 months ago

      My man, I think you are delusional hahahaha. You are giving way too much credit to a technology that’s just a glorified autocomplete. But I guess I get your point: if you think that AI (and LLMs in particular hahahaha) is the way of the future and all that, then this is apocalyptic hahahahaha.

      But you are delusional, my man. The only practical use so far for these stupid LLMs is autocomplete, which works great when it works. And bypassing copyright law by pretending it’s producing novel shit. But that’s a whole other discussion; time will show this is just another bubble like crypto hahahaha. For now, I hope they at least force everyone to stop plagiarising other people’s work with AI.

      • @[email protected]OP
        5 months ago

        Prohibits the use of “covered content” (digital representations of copyrighted works) with content provenance to either train an AI- /algorithm-based system or create synthetic content without the express, informed consent and adherence to the terms of use of such content, including compensation

        This affects a lot more than just LLMs and essentially fucks any use of machine learning. You do not understand what you are defending. This kills Kaggle and Hugging Face overnight, since I figure corporations will be able to keep already-created datasets for internal use, but distribution will be a no-go.

        You also have to be willfully blind to seriously think LLMs have no use cases. Ignoring the entertainment value, they’re a huge productivity boost, and chatbots using them are now commonplace on websites (I preferred when it was actual people, but that’s beside the point). I work in research and we are currently building a bunch of internal tools to use with our data.

        Hahaha all you want, but you are defending something completely against your own self-interest and that of society.

        • @[email protected]
          5 months ago

          So you are saying that content scraped before the law is fair game to train new models? If so, it’s fucking terrible. But again, I doubt this is the case, since it would be against the interests of the big copyright holders. And if it’s not the case, you are just creating a storm in a glass of water, since this affects the companies too.

          As a side point, I’m really curious about LLM uses. As a programmer, the only useful product I have seen so far is Copilot and similar tools. And I ended up disabling the fucking thing because it produces too much garbage hahaha. But I’m the first to admit I haven’t been following this hype cycle hahahaha, so I’m really curious what the big things will be. You clearly know so much, so want to enlighten me?

          • @[email protected]OP
            5 months ago

            This bill is being built with the interests of the big tech companies in mind, imo; big copyright holders are just an afterthought. I figure since big tech spent quite a bit of money building those datasets, and since they were built before the law, they will be able to keep using them as long as they don’t add anything new, but I can’t be certain.

            The use cases are vast. This is a huge boon for the indie gaming and animation industries. I’m seriously excited to have NPCs running on LLMs and don’t want to be forced into a subscription just to play my games. It’s also going to bring smart homes to another level. Systems can be built that are much stronger than Alexa without having to send all that insanely private data to Amazon. There’s a huge privacy issue if all the available models only run on Google’s or OpenAI’s cloud, but I won’t get into that (not to mention that these corporate LLMs will eventually be trained for advertisement and will essentially be poisoned to prefer whoever is paying their creator).

            I’ll give some more concrete examples from my work, but they will be a bit vague to preserve my anonymity.

            I work in research (I originally studied software engineering and robotics) and we have about 20 years’ worth of projects. None of it is standardized and it’s honestly a mess. I built a system in the space of a few days that grabs every one of those docs, reads through it with an LLM, and then classifies them doc by doc into an Excel sheet with a SharePoint link. I’ve got 20 columns in there: it summarizes the docs, chooses from a list of 30 document types I gave it, extracts related towns, people, companies, and domains, extracts the columns of any tables inside, and generally establishes a bunch of different relationships. It doesn’t sound like much, but doing it by hand would have been weeks of tedious work. My computer did it in 20 minutes using a local LLM, so sensitive client data doesn’t leave the building.
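            A pipeline like the one just described has roughly this shape (a minimal sketch: `classify_with_llm` is a placeholder for prompting the local LLM, here replaced by a trivial keyword heuristic, and it emits CSV instead of the Excel-plus-SharePoint sheet described above):

```python
import csv
import io

# Stand-in for the 30-item document-type list mentioned above.
DOC_TYPES = ["report", "survey", "contract"]

def classify_with_llm(text: str) -> dict:
    """Placeholder for the local-LLM call that returns one row of
    metadata; a trivial keyword match stands in for the model here."""
    doc_type = next((t for t in DOC_TYPES if t in text.lower()), "other")
    return {"type": doc_type, "summary": text[:60], "towns": "", "companies": ""}

def build_index(docs: dict) -> str:
    """Run every document through the classifier and emit a CSV index."""
    out = io.StringIO()
    writer = csv.DictWriter(
        out, fieldnames=["doc", "type", "summary", "towns", "companies"]
    )
    writer.writeheader()
    for name, text in docs.items():
        writer.writerow({"doc": name, **classify_with_llm(text)})
    return out.getvalue()
```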

            Right now I’m working on a GraphRAG system that takes all those documents and turns them into vectors, then an LLM adds relationships to those vectors. It will be incorporated into an internal chatbot so people can ask questions and not only get a natural-language answer but also the references where the information was found, with quick access to them. It’s vector search on steroids and will cost nothing to run. I’m planning on eventually training the chatbot itself on our data so it has a better understanding of our research sector as well as direct access to all the documents.
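            Stripped of the graph layer, the retrieval core of a RAG system like that looks roughly like this (a toy sketch: the bag-of-words `embed` stands in for a real neural embedding model, and each chunk’s key doubles as the reference returned with the answer):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words vector; a real RAG system would call a neural
    embedding model here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: dict, k: int = 1) -> list:
    """Return the k most similar chunks with their source reference,
    so an answer can cite where the information was found."""
    q = embed(question)
    ranked = sorted(chunks.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return ranked[:k]
```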

            Next is building something that gets info automatically from the web. Sometimes we have to create long Excel sheets with a bunch of different data points. We usually stay at a state level, but that can sometimes mean 1,000 businesses, and we have to Google each one manually to find the info. It’s sometimes weeks of work and honestly sucks to do. LLMs are entirely capable of doing this kind of work and would take a few hours at most, again at no cost.
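            One practical detail for that kind of automation: an LLM’s reply is free text, so a pipeline filling a spreadsheet from it has to validate the output before trusting it. A minimal sketch (the field names are hypothetical, and the calling loop would re-prompt whenever this returns None):

```python
import json

# Hypothetical fields for the business sheet described above.
REQUIRED = ["name", "sector", "employees"]

def parse_reply(reply: str):
    """Validate a model reply: it must be JSON carrying every required
    key. Returns the parsed dict, or None so the caller can re-prompt
    (models sometimes return malformed or incomplete JSON)."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not all(k in data for k in REQUIRED):
        return None
    return data
```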

            These things are seriously great whenever you’re dealing with data that isn’t just numbers and is hard to quantify. I hate Reddit and will never create an account there after what happened, but I still go daily to the LocalLLaMA subreddit; it’s a great source of information if you want to keep abreast of what’s happening.

            • @[email protected]
              5 months ago

              I figure since big tech spent quite a bit of money building those datasets, and since they were built before the law, they will be able to keep using them as long as they don’t add anything new, but I can’t be certain.

              This is a very weird assumption you are making, man. The quoted text you sent above says pretty much the opposite: everyone who wants to train their models with copyrighted data needs to get permission from the copyright holders. That is great for me, period. No one, not a big company nor the open source community, gets to steal the work of people producing art, code, etc. I honestly don’t get why you assume all the data scraped before would be exempt. Again, very weird assumption.

              As for ML algorithms having uses, of course they do. Hell, pretty much every company I have worked with has used them for decades. But take a look at the examples you provided: none of them requires you or your company scraping a bunch of information from randoms on the internet. Especially not copyrighted art, literature, or code. And that’s the point here; you are acting like all of that stops with these laws, but that’s ridiculous.

              • @[email protected]OP
                5 months ago

                The article is pro-corpo; I’m looking at the bill itself, and it’s quite clear where it’s headed.

                None of what I mentioned is possible without the LLM that’s at its heart. Just training an LLM is a million or two in compute power. We don’t get the next generation for free if laws like this tack on an extra 80 million in licensing. It was 6 million for Reddit alone, back when you could still scrape it for free, and that’s just a drop in the bucket.