• @[email protected]
    1 year ago

    The model doesn’t contain the training data: it can’t reproduce the original work even if it were instructed to, except by accident. And it wouldn’t know it had done so unless it were checked by some external process that had access to the original.

    • @[email protected]
      1 year ago

      In case anyone wants to try this out: get ComfyUI and this plugin to get access to unsampling. Unsample to the full number of steps you’re using, and use cfg=1 for both sampler and unsampler. Use the same positive and negative prompt for both sampler and unsampler (empty works fine, or maybe throw BLIP at it). For A1111 there’s the “img2img alternative test” script; I’ve only heard of it, never used it.

      What unsampling does is find the noise that will generate a specific image, and it can find noises you can’t even get through the usual interface (because there are more possible latent images than there are noise seeds). Cfg=1 gives the best reproduction possible. In short: the whole thing shows how well a model can generate a replica of something by making sure it gets maximally lucky.
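      The trick behind unsampling is that the DDIM update is deterministic, so the same step can be run in reverse to recover a starting noise for a given image. Here’s a minimal toy sketch in numpy — `eps_model` is a made-up constant stand-in for the UNet’s noise prediction (which makes the toy inversion exact; with a real model the inversion is only approximate, which is why reproduction is best with the same model):

```python
import numpy as np

# Toy sketch of unsampling (DDIM inversion). The DDIM update is
# deterministic, so the same step run low-noise -> high-noise
# recovers the noise that generates a given image.

alphas = np.linspace(0.999, 0.01, 10)  # toy cumulative-alpha schedule

def eps_model(x, t):
    # Hypothetical stand-in for the UNet's noise prediction.
    # Constant here so the toy inversion is exact.
    return np.full_like(x, 0.1)

def ddim_step(x, t_from, t_to):
    """One deterministic DDIM move from alphas[t_from] to alphas[t_to]."""
    a_f, a_t = alphas[t_from], alphas[t_to]
    eps = eps_model(x, t_from)
    x0_hat = (x - np.sqrt(1 - a_f) * eps) / np.sqrt(a_f)  # predicted clean image
    return np.sqrt(a_t) * x0_hat + np.sqrt(1 - a_t) * eps

def sample(noise):                      # high noise -> image
    x = noise
    for t in range(len(alphas) - 1, 0, -1):
        x = ddim_step(x, t, t - 1)
    return x

def unsample(image):                    # image -> high noise
    x = image
    for t in range(len(alphas) - 1):
        x = ddim_step(x, t, t + 1)
    return x

noise = np.random.default_rng(0).standard_normal(4)
image = sample(noise)
recovered = unsample(image)             # "the noise that makes this image"
print(np.allclose(sample(recovered), image))  # prints True
```

      Running the recovered noise back through `sample` reproduces the image — that’s the “maximally lucky” seed.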

      This works very well if the image you’re unsampling was generated by the same model you’re using to unsample and regenerate it. It works quite well with related models, which impart their own biases on the result, and it’s way worse for anything else. Ask it to re-create some random photograph and it will put its own spin on it, changing up pretty much all of the details; try to re-create a page of text and it will fail miserably, since Stable Diffusion just can’t hack glyphs.