Commons talk:AI-generated media
Real-life
@Trade: you added the parenthetical phrase in "AI fan art of (real-life) fictional characters", which seems oxymoronic to me. How can something be both "real-life" and "fictional"? - Jmabel ! talk 01:52, 21 August 2023 (UTC)
- Fictional characters that exist outside of the AI generated art in question Trade (talk) 01:54, 21 August 2023 (UTC)
- I hoped the name of the category was enough but unfortunately people keep filling it with images that had nothing to do with fan art. Trade (talk) 01:56, 21 August 2023 (UTC)
- To be fair, the description at the top of Category:AI-generated fictional characters doesn't suggest not to. And a lot of categorisation happens based on the category name alone.
- Would Category:AI-generated fan art be a useful subcategory to create? Belbury (talk) 09:13, 21 August 2023 (UTC)
AI images of real subjects (aka. "deepfakes")
One subject which this draft doesn't seem to address clearly is the topic of AI images which appear to represent real subjects - e.g. real people, real places, real historical events, etc. These images have the potential to be misleading to viewers, and can cause harm to the project by discouraging the contribution of real images, or by being used as substitutes for real images which are already available.
I'd like to address this as follows. This is intentionally strongly worded, but I feel that it's warranted given the potential for deception:
AI-generated images which contain photorealistic depictions of notable people, places, or historical events have the potential to deceive viewers, and must not be uploaded.
If AI-generated images containing these subjects are used as illustrations, effort should be made to use images which cannot be mistaken for photographs, e.g. by prompting the image generation model to use a cartoon art style.
In a limited number of cases, realistic images containing these subjects may be used as demonstrations of AI image generation or "deepfakes". These images should be watermarked to make it clear to viewers and downstream users of these images that they were machine-generated.
Thoughts? Omphalographer (talk) 23:23, 11 September 2023 (UTC)
- A COM:WATERMARK on a demonstration image significantly reduces any constructive reuse of it. Anybody wanting to reuse a notable fake like File:Pope Francis in puffy winter jacket.jpg in their classroom or book should be able to get it directly from Commons.
- These images would benefit from prominent warning templates, though, and perhaps an explicit "Fake image of..." in the filenames. Belbury (talk) 08:25, 12 September 2023 (UTC)
- Why not just add a parameter to the AI template that can be used to indicate whether or not the image depicts a living person? Trade (talk) 10:51, 12 September 2023 (UTC)
- The problem I'm concerned with is reuse of these images outside Wikimedia projects, where the image description certainly won't be available and the filename will likely be lost as well. Photorealistic AI-generated images of recognizable subjects should be fairly rare on Wikimedia projects, and I'm confident that editors can come up with some way of marking them which makes their nature clear without being overly intrusive.
- How about the rest? Are we on board with the overall principle? Omphalographer (talk) 19:41, 12 September 2023 (UTC)
- Seems reasonable to me. And an alternative to a watermark in the narrow sense would be a mandatory notice in a border under the photo. - Jmabel ! talk 20:10, 12 September 2023 (UTC)
- No matter the amount of whistles, alarms and whatnot you put up, there will always be someone who can't be bothered to read it before posting the image somewhere else. Trade (talk) 15:41, 14 September 2023 (UTC)
- Certainly. But if the image itself can tell viewers "hey, I'm not real", then at least it has less potential to mislead. Omphalographer (talk) 17:01, 14 September 2023 (UTC)
- In what manner is that not covered by the current AI-image license template + related categories and description? Trade (talk) 22:41, 23 September 2023 (UTC)
- Because those do not tend to travel with the image itself when it is reproduced. Indeed, if the image is used incorrectly within a Wikipedia there would be no indication of that unless someone clicks through. - Jmabel ! talk 22:51, 23 September 2023 (UTC)
- Even if you click to expand an embedded image in an article, the full license and disclaimer templates are only visible if you then click through to the full image page. To an unsophisticated user, they might as well not exist. Omphalographer (talk) 00:19, 24 September 2023 (UTC)
- So in short the only realistic solution would be a warning template that appears when someone from a Wiki project clicks to expand the image. Trade (talk) 22:04, 13 October 2023 (UTC)
- @Omphalographer: I think we should change "notable people" to "actual people" and remove "places" (as that seems overly broad and unnecessary to me). Nosferattus (talk) 23:14, 20 September 2023 (UTC)
- We may also want to clarify that this doesn't apply to AI-enhanced photographs. Nosferattus (talk) 23:16, 20 September 2023 (UTC)
- Excellent point on "notable people"; I agree that this policy should extend to any actual person, not just ones who cross some threshold of notability.
- The inclusion of "places" was intentional. A synthetic photo of a specific place can be just as misleading as one of a person or event; consider a synthesized photo of a culturally significant location like the Notre-Dame de Paris or Mecca, for instance.
- AI-enhanced photographs are... complicated. There's no obvious line dividing photos which are merely "AI-enhanced" and ones which begin to incorporate content which wasn't present in the source photo. For instance, the "Space Zoom" feature of some Samsung phones replaced photos of the moon with reference photos of the moon - this level of processing would probably be inappropriate for Commons photos. Omphalographer (talk) 00:11, 21 September 2023 (UTC)
- @Omphalographer I think there are some legitimate reasons for creating and uploading photorealistic AI images of places, and less danger that they cause harm. For example, an AI generated image of an ice-free Greenland might be useful for a Wikibook discussing climate change. Sure, it could be misleading if used in the wrong context, but it doesn't worry me as much as AI images of people.
- So are you suggesting that all AI-enhanced photographs should also be banned? This will probably be the majority of all photographs in the near future, so I wouldn't support that proposal. Nosferattus (talk) 00:32, 20 October 2023 (UTC)
- I'm not suggesting that all AI-enhanced photos should be banned, but that the limits of what's considered acceptable "enhancement" need to be examined. Filters which make mild changes like synthetically blurring the background behind a person's face or adjusting contrast are almost certainly fine; ones which add/remove substantial elements to an image or otherwise dramatically modify the nature of the photo (like style-transferring a photograph into a painting or vice versa) are probably not.
- With regard to "places", would you be happier if that were worded as "landmarks"? What I had in mind was synthetic photos of notable buildings, monuments, or similarly specific places - not just any location. Omphalographer (talk) 05:57, 20 October 2023 (UTC)
- Oppose - Very strongly against this proposal, which would be highly problematic for many reasons and unwarranted censorship.
- Agree with Belbury on prominent warning templates, though, and perhaps an explicit "Fake image of..." in the filenames - we should have prominent templates for AI images in general and prominent warning templates for deepfake ones...a policy on file title requirements is something to consider. Prototyperspective (talk) 22:11, 13 October 2023 (UTC)
- Oppose per Prototyperspective. I fail to see why this issue is fundamentally different from other kinds of images that could be convincingly misrepresented as actual, unaltered photographs depicting real people, real places, real historical events, an issue that is at least a century old (see e.g. w:Censorship of images in the Soviet Union, Category:Manipulated photographs etc).
- That said, I would support strengthening existing policies against image descriptions (and file names) that misrepresent such images as actual photos, whether they are AI-generated, photoshopped (in the sense of edits that go beyond mere aesthetics and change what a general viewer may infer from the image about the depicted person, place etc.) or otherwise altered. That's assuming that we have such policies already - do we? (not seeing anything at Template:Commons policies and guidelines)
- PS regarding landmarks: I seem to recall that authenticity issues have been repeatedly debated, years ago already, in context of Wiki Loves Monuments and related contests, with some contributors arguing that alterations like removing a powerline or such that "ruins" a beautiful shot of a monument should not affect eligibility. I do find that problematic too and would support at least a requirement to clearly document such alterations in the file description.
- Regards, HaeB (talk) 01:35, 25 October 2023 (UTC)
- Weak oppose per HaeB. Although I'm sympathetic to the idea of banning deepfake images, I think the proposed wording is too broad in one sense (subjects included) and too narrow in another sense (only addressing AI images). I would be open to a proposal focusing on photo-realistic images of people or events that seem intended to deceive or mislead (regardless of whether they are AI generated or not). Nosferattus (talk) 04:47, 25 October 2023 (UTC)
Custom template for upscaled images
This page currently advises adding {{Retouched picture}} to upscaled images, which if used without inserting specific text gives a neutral message of This is a retouched picture, which means that it has been digitally altered from its original version. with no mention of the AI nature of the manipulation.
Would it be useful to have a custom AI-upscale template that puts the image into a relevant category and also spells out some of the issues with AI upscaling (potentially introducing details which may not be present at all in the original, copyrighted elements, etc), the way that {{Colorized}} specifically warns the user that the coloring is speculative and may differ significantly from the real colors? Belbury (talk) 08:19, 4 October 2023 (UTC)
- Yes. - Jmabel ! talk 15:16, 4 October 2023 (UTC)
- I also support this. However, there are similar issues also for other ways of using AI to restore or modify images, not just for upscaling. See some examples in Category:AI-restoration (and also Category:AI images generated with specified images as input and AI-generated realistic images of real events like this one).
- Prototyperspective (talk) 09:44, 5 October 2023 (UTC)
I've made a rough first draft of such a template at {{AI upscaled}}, which currently looks like this:
When the template is included on a file page it adds that file to Category:Photos modified by AI per the recommendation at Commons:AI-generated_media#Categorization_and_templates.
Feedback appreciated on what the message should say, and what options the template should take. It should probably always include a thumbnail link to the original image (or an alert that the original is freely licenced but hasn't been uploaded to Commons), and an option to say what software was used, if known, so that the file can be subcategorised appropriately.
It may well be worth expanding this to a generic AI template that also covers restoration and generation, but I'll put this forward for now. --Belbury (talk) 08:21, 12 October 2023 (UTC)
- Could you make a template for AI misgeneration? Trade (talk) 23:41, 18 November 2023 (UTC)
- Would that be meaningfully distinct from your existing {{Bad AI}} template? Belbury (talk) 19:39, 19 November 2023 (UTC)
Wikimedia Foundation position on AI-generated content
The Wikimedia Foundation recently submitted some comments to the US Copyright Office in response to a Request for Comments on Artificial Intelligence and Copyright. Many of the points made by the Foundation will likely be of interest here, particularly the opening statement that:
Overall, the Foundation believes that generative AI tools offer benefits to help humans work more efficiently, but that there are risks of harms from abuse of these tools, particularly to generate large quantities of low-quality material.
Omphalographer (talk) 20:06, 9 November 2023 (UTC)
- I wonder if there was a specific DR that made the Foundation concerned about low-quality spam. Or maybe someone just complained to staff? Trade (talk) 23:38, 18 November 2023 (UTC)
Interesting
https://twitter.com/Kyatic/status/1725120435644239889 2804:14D:5C32:4673:DAF2:B1E3:1D20:8CB7 03:31, 17 November 2023 (UTC)
- This is highly relevant - thanks, whoever you are! Teaser:
This is the best example I've found yet of how derivative AI 'art' is. The person who generated the image on the left asked Midjourney to generate 'an average woman of Afghanistan'. It produced an almost carbon copy of the 1984 photo of Sharbat Gula, taken by Steve McCurry.
- If you don't have a Twitter account, you can read the thread at https://nitter.net/Kyatic/status/1725120435644239889.
- Omphalographer (talk) 03:59, 17 November 2023 (UTC)
- Hypothetically would we be allowed to upload the AI photo here? Trade (talk) 23:35, 18 November 2023 (UTC)
- No? The whole discussion is about how it's a derivative work of the National Geographic photo. Omphalographer (talk) 02:15, 19 November 2023 (UTC)
- I couldn't find clear info regarding artworks that look very close to non-CCBY photographs at Commons:Derivative works. This particular image may be fine, it's not prohibited just because the person looks similar to an actual person who was photographed and that photograph was the 'inspiration' to the AI creator.
- It is overall exceptional for AI images to look very similar to an existing photograph, and depends on issues with training data, parameters/weighting, and the prompts. Moreover, it's possible that this was caused on purpose to make a point, or that they had put an extreme weight on high-valued photographs for cases like this while there are only a few images of women from Afghanistan in the training data... more likely though, the AI simply does not 'understand' (or misunderstands) what is meant by "average" here.
- img2img issues
- The bigger issue is that you can use images as input images and let AI modify them according to your prompt (example of how this can be really useful). This means some people may upload such an image without specifying the input image, so people can't check whether or not it is CCBY. If the strength of the input image is configured to be e.g. 99%, the resulting image will look very similar. I think there should be a policy that when you upload an AI-generated image via img2img, you should specify the input image. Prototyperspective (talk) 11:22, 19 November 2023 (UTC)
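As an aside on the mechanic described above: the img2img "strength" setting can be sketched conceptually as a blend between the source image and fresh noise. The toy function below is a hypothetical illustration only (real diffusion models work very differently); all names and numbers are invented for demonstration, but it shows why a near-100% input weight yields a near-copy of the source.

```python
# Toy sketch of the img2img "strength" idea (NOT a real diffusion model).
# strength = 0 copies the input exactly; strength = 1 is pure noise.
import random

def img2img_sketch(input_pixels, strength, seed=0):
    """Blend each input pixel toward seeded random noise by `strength`,
    mimicking how img2img's strength controls how much of the source
    image survives in the output."""
    rng = random.Random(seed)
    return [(1 - strength) * p + strength * rng.random() for p in input_pixels]

source = [0.2, 0.5, 0.8]
nearly_copy = img2img_sketch(source, strength=0.01)  # ~99% input image
mostly_new = img2img_sketch(source, strength=0.95)   # input barely used
```

With `strength=0.01` every output pixel stays within 0.01 of the source, which is the "looks very similar" case discussed above; with a high strength the source is mostly overwritten.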
- If a human had created that image, we would certainly delete it as a plagiaristic copyvio. I see no reason to treat it more favorably because the plagiarist is a computer program. - Jmabel ! talk 18:48, 19 November 2023 (UTC)
- I don't think so. I don't see much use in discussing this particular case and only meant to say that the derivative works page does not really have info on this, but I think artistic works that show something that has previously been photographed are allowed. Or are artworks of the Eiffel tower not allowed if the first depiction of it is a photograph that is not CCBY? Prototyperspective (talk) 18:58, 19 November 2023 (UTC)
- COM:BASEDONPHOTO is the relevant Commons policy for a drawing based on a single photograph: it requires the photographer's permission. I too would see no copyright difference between a human sketching a copy of a single specific photograph and an AI doing the same thing digitally. Belbury (talk) 19:19, 19 November 2023 (UTC)
- Thanks, a link to this policy was missing here so far. I don't see how the issue of photographing things like the Eiffel tower to copyright them away is addressed, though. In this case, a person was photographed. I object to the notion that if the first photograph of a person, animal, object, or whatever is not in the public domain, it also can't be drawn under CCBY. I too would not see a copyright difference between a human sketching a single photograph and an AI doing the same thing digitally. That is what img2img is; the case above is not based on a single image but many images, including many images of women. It would never work if it were based on just one image. Prototyperspective (talk) 22:01, 19 November 2023 (UTC)
- The case above is not based on a single image but many images, including many images of women... I'm not convinced. The results shown look very much like they are based primarily on the National Geographic photo, possibly because there were many copies of it in the training data. Omphalographer (talk) 22:18, 19 November 2023 (UTC)
- @Prototyperspective: "the notion that if the first photograph of a person, animal, object, or whatever is not in the public domain, it also can't be drawn under CCBY." That's a straw-man argument, as is your Eiffel Tower example. You are refuting a claim that no one is making. While we have little insight into the "mind" of a generative AI system, I think we can reasonably conclude that if the AI had not seen that particular copyrighted image or works derived from it, then the chance is vanishingly small that it would have produced this particular image. And that is the essence of plagiarism. - Jmabel ! talk 01:06, 20 November 2023 (UTC)
- You may be misunderstanding COM:BASEDONPHOTO. It isn't saying that once somebody takes a photo of a subject, they have control over anyone who chooses to sketch the same subject independently in the future. It only applies to someone making a sketch using that photo alone as their reference for the subject.
- The "many images" point doesn't seem very different from how a human would approach the same copying process. The human would also be applying various internal models of what women and hair and fabric generally look like, when deciding which details to include and omit, and which textures and styles to use. It would still result in a portrait that had been based closely on a specific existing one, so would be infringing on the original photographer's work. Belbury (talk) 13:01, 20 November 2023 (UTC)
- Now there are many good points here.
- I won't address them in-depth or make any statements as to whether I agree with these points and their conclusion. Just a brief note on an issue that remains unclear: what if that photo is the only photo of the organism or object? Let's say you want to draw an accurate artwork of an extinct animal photographed once, where you'd use the photo as a reference – I don't think current copyright law finds you are not allowed to do so. In this case, I think this photo is the only known photo of this woman, whose noncopyrighted genetics further emphasize her eyes making a certain noncopyrighted facial expression. Prototyperspective (talk) 13:18, 20 November 2023 (UTC)
- @Prototyperspective: I am going to assume good faith, and that you are not just arguing for the sake of arguing, but this is the last time I will respond here.
- If there is exactly one photo (or other image) of a given organism or object, and it is copyrighted, and you create a work that is clearly derivative of it in a degree that is clearly plagiaristic, then most likely you are violating copyright. Consider the Mona Lisa. We don't have any other image of that woman. If it were a painting recent enough to still be in copyright, and you created an artwork that was nearly identical to the Mona Lisa, you'd be violating Leonardo's [hypothetical] copyright.
- For your "extinct animal" case: probably the way to create another image that did not violate copyright would be to imagine it in a different pose (based at least loosely on images of a related species) and to draw or otherwise create an image of that. But if your drawing was very close to the only known image, and that image was copyrighted, you could well be violating copyright.
- Again: the user didn't ask the AI to draw this particular woman. They asked for "an average woman of Afghanistan," and received a blatant plagiarism of a particular, iconic photo. Also, you say, "I think this photo is the only known photo of this woman." I suppose that may be an accurate statement of what you think, but it also tells me you have chosen to listen to your own thoughts rather than do any actual research. It is not the only photo of Sharbat Gula, nor even the only published photo of her. Other photos from that photo session when she was 12 years old were published (though they are less iconic) and I have seen at least two published photos of her as an adult (one from the 2000s and one more recent). I suspect there are others that I have not seen. {{w|Sharbat Gula|She's had quite a life}} and now lives in Italy.
- Jmabel ! talk 21:16, 20 November 2023 (UTC)
- This exact issue is described in Commons:AI-generated media#Copyrights of authors whose works were used to train the AI. It isn't discussed in other Commons policies because those documents were generally drawn up before AI image generation was a thing. Omphalographer (talk) 19:42, 19 November 2023 (UTC)
- Similarly to the issues presented in the Twitter/X thread, there is a lawsuit by a group of artists against several companies (incl. Midjourney and Stability AI), where a similar concern is presented (i.e. AI-generated media taking precedence over directly related images and works). I think this, among other things, is important to consider when deciding what scope Commons has in regards to AI-generated media. EdoAug (talk) 12:57, 9 December 2023 (UTC)
Archiving
I would like to set up archiving of this talk page to make active discussions more clearly visible. What do you think of an 8-month threshold of inactivity for now? By my estimation that would archive about half of this talk page or less. Of course, the setting can be revisited later. whym (talk) 09:21, 25 November 2023 (UTC)
- I made one with 300d, which should let discussions stay around for about 1 year. :) --RZuo (talk) 14:35, 25 November 2023 (UTC)
- I think one year is nowhere near long enough to keep discussions. This should be much longer. It needs to be possible to figure out why a certain rule was implemented and what the substantiation was. JopkeB (talk) 04:41, 9 December 2023 (UTC)
- @JopkeB: It's not like you can't look at the archive. - Jmabel ! talk 08:20, 9 December 2023 (UTC)
- Thanks, User:Jmabel, I did not understand that "let discussions stay around for about 1 year" means that discussions stay on this talk page for about a year and then are moved to the archive. Of course this is OK. JopkeB (talk) 09:44, 9 December 2023 (UTC)
- I think 1 year was a good start. When I posted this,[1] the talk page size was 117k bytes. Now it's 119k bytes. Can we shorten the threshold to 8 months? Usually, busy pages have (and I would say, need) more frequent archiving. whym (talk) 11:58, 5 January 2024 (UTC)
Something I would actively like us to do involving AI images
I think it would be very useful to track the evolution of generative AI by giving some specific collection of prompts identically to various AI engines and see what each produces, and repeating this experiment periodically over the course of years to show how those engines evolve. Now that would clearly be within scope, if we don't trip across copyright issues of some sort. - Jmabel ! talk 21:55, 10 December 2023 (UTC)
- No need to highlight that being more clearly within scope here. I don't think you thought long about all the many potential and already existing use-cases of AI generated media especially once these improve. At some point with enough expertise you possibly could generate high-quality images of anything you can imagine matching quite closely to what you intended to depict (if you're skilled at prompting & modifying); how people don't see a tremendous potential in that is beyond me.
- One part of that has already been done. However, that one implementation only compares one specific prompt at one point in time.
- It would also be interesting to see other prompts such as landscape and people scenes prompts as well as how the generators improve on known issues such as misgenerated hands or basic conceptual errors (where e.g. people looking exactly the same are generated multiple times instead of only once as prompted). I think instead of uploading many images of the same prompt using different styles (example) it would be best to upload large collages (example) that include many styles/images at once. Since there is such a large number of styles, such applications are not as valuable as some other ones where one or a few images are enough and may even close a current gap. Prototyperspective (talk) 14:46, 13 December 2023 (UTC)
- For collages/montages we still prefer to also have each individual image available in a file of its own. - Jmabel ! talk 19:03, 13 December 2023 (UTC)
More food for thought on AI-generated content
Omphalographer (talk) 18:02, 18 December 2023 (UTC)
- @Omphalographer: I was here to post the same thing! You beat me by 43 minutes. - Jmabel ! talk 18:55, 18 December 2023 (UTC)
- I think what they're posing there is more of a moral than a copyright question. What is being "stolen" is the concept of "man with wood-carved dog". Possibly none of the AI-generated images presented there is close enough to the original photos (also shown) to be considered a derivative work (I'm not quite sure, but the general characteristics of a bulldog or a German Shepherd aren't copyrighted, and the AI-works could be seen as just another interpretation of the concept - and mere concepts aren't copyrighted). But that doesn't mean that the moral question is uninteresting or unimportant. It's something we should consider when we talk about what we want to host on Commons. I previously advocated for keeping AI-generated content here only if it's in use in a Wikimedia project, but this makes me think: Imagine that Wikimedia projects start using AI-generated images of this kind to show something they otherwise couldn't show for copyright reasons? Thanks to fair use, some projects like English-language Wikipedia can show a lot more than others, for example, en:The Empire of Light does contain the works of Magritte under the fair use policy. But as German-language Wikipedia doesn't accept fair use (due to the absence of that legal provision in German-language countries), de:Das Reich der Lichter only links to images and doesn't show them directly. Well, now a German-language Wikipedian could tell the AI to generate a "painting of a house surrounded by trees at night, but under a daylight sky" and the result would probably be a "painting" that is similar to Magritte's works in its concept. It could then be used with a caption like "AI-generated approximation of Magritte's Empire of Light series", maybe without being a copyright violation - but I think we wouldn't want that? Gestumblindi (talk) 23:45, 18 December 2023 (UTC)
- The generated images in question here go beyond copying a "concept". They're being generated as fairly close replicas of specific, identifiable source photos; the article has examples.
- Whether an "approximation" of a painting would be acceptable is questionable. If it's a close enough approximation, especially if it's imitating a specific work of art, that's likely to push it over the edge into being a derivative work. It's also questionable whether those projects would consider such an image educationally useful. Omphalographer (talk) 00:20, 19 December 2023 (UTC)
- I see now that some of the images are indeed very close to some of the originals (including virtually the same trees in the background, for example), but for others I would still say that they follow the concept without necessarily being a derivative work for copyright purposes. It would be legal (though not terribly original) to sculpt your own wooden dog and pose in a similar way as Michael Jones does. The very first image on the page, for example, has a background that is completely different from all of the photos by Jones shown, the dog is "sculpted" very differently, and the man looks nothing like Michael Jones. Gestumblindi (talk) 01:19, 19 December 2023 (UTC)
- "but I think we wouldn't want that?" It's not Commons' responsibility to decide what images German Wikipedia users can use, any more than it's Commons' responsibility to pick what admins they should have Trade (talk) 18:29, 20 December 2023 (UTC)
- I meant "we" more generally in the sense of the Wikimedia communities, not just Commons. I would strongly suspect that German-language Wikipedia's community would also be against such an approach. Gestumblindi (talk) 19:53, 20 December 2023 (UTC)
Latest uploaded AI-generated files
[edit]Is there any way to see them? 2804:14D:5C32:4673:CC8C:1445:1D43:194B 17:58, 27 December 2023 (UTC)
- Searching for the name of software and sorting by recency is the best simple way.
- WMC needs walls of images sortable by recency that also show images of subcategories. All AI images should be in a subcategory of Category:AI-generated media. PetScan can be helpful too but seems dysfunctional most of the time. Prototyperspective (talk) 18:16, 27 December 2023 (UTC)
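As an aside, listing recent uploads in a category sorted by recency can be done against the public MediaWiki Action API. The following is a minimal sketch, assuming the standard `api.php` endpoint on Commons and the `categorymembers` list module; the category name is taken from this discussion, and the network call itself is left to the reader:

```python
# Sketch: build an Action API query for the newest file members of a
# Commons category (list=categorymembers, sorted by timestamp).
# Endpoint and parameter names assumed from public MediaWiki API docs.

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def recent_category_files_params(category: str, limit: int = 50) -> dict:
    """Query parameters for the files most recently added to a category."""
    return {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmtype": "file",       # files only, not subcategories or pages
        "cmsort": "timestamp",  # sort by when the member was added
        "cmdir": "desc",        # newest first
        "cmlimit": limit,
        "format": "json",
    }

# Usage (network call omitted here), e.g. with the requests library:
#   requests.get(COMMONS_API,
#                params=recent_category_files_params("AI-generated media"))
```

Note that this only finds files that are actually categorized under the category in question, which is exactly the limitation discussed below for files not identified as AI-generated.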
- Of course we have no easy way to search for AI-generated files that aren't properly identified as such. If AI / the name of the software isn't mentioned, it's difficult. Gestumblindi (talk) 19:41, 27 December 2023 (UTC)
- Either by context or art style... File:Prinssi Leo.jpg and sometimes, you just can't tell... File:Gao ke qing in uniform.png hf. Alexpl (talk) 17:37, 2 January 2024 (UTC)
- "Prinssi Leo" is obviously AI-generated; I've just nominated a batch of images by that uploader for deletion (Commons:Deletion requests/Files uploaded by SKoskinenn). "Gao ke qing" looks like a low-quality photo but is likely out of scope anyway. Omphalographer (talk) 17:46, 2 January 2024 (UTC)
- But vaguely educational stuff, even if most likely an AI work, stays in. Like File:வானவில்-அணிவகுப்பு V4 compressed.pdf and its fellow drawings. That is kind of a problem. Alexpl (talk) 21:04, 3 January 2024 (UTC)
File:Crying robot yelled at by newspaper, dall-e 3.png Alexpl (talk) 07:18, 12 January 2024 (UTC)
- Which, besides anything else, has two unrelated images, one uploaded over the other. - Jmabel ! talk 16:09, 12 January 2024 (UTC)
- Well, it may look like the guy who did the en:I, Robot (film) artwork did way too many depressants, but at least it is modestly sized at 1.3 MB, compared to some of the "true" stuff we have got in TIFF format. Alexpl (talk) 08:13, 17 January 2024 (UTC)
File:Jose Maria Narvaez.jpg and File:Weugot photo.jpg, last one not marked as AI, but looks like work by "pixai.art". Alexpl (talk) 20:42, 18 January 2024 (UTC)
- Thanks for pointing these out. I've nominated both for deletion - the first because it's ahistorical, the second (along with a few others) because it's unused personal artwork. Omphalographer (talk) 21:04, 18 January 2024 (UTC)
- Not to go off on conspiracy theories or anything, but it's interesting that both of those files were uploaded today by new users who don't have any prior edits. --Adamant1 (talk) 22:12, 18 January 2024 (UTC)
- I don't think there's a connection, beyond that they're both recent uploads.
- The uploader of File:Jose Maria Narvaez.jpg also uploaded a number of other AI-generated images of historical figures; I've brought the issue up with them on enwiki. Omphalographer (talk) 00:30, 19 January 2024 (UTC)
File:King Robert III.webp, File:Emperor Robert.webp and, even more concerning, File:David H Anderson - Portrait of George Washington Custis Lee 1832-1913 1876 - (MeisterDrucke-1182010).jpg + all the other stuff by Dboy2001. Alexpl (talk) 08:17, 19 January 2024 (UTC)
File:Moplah Revolt guerilla army.jpg, File:Malbar Moplahs.jpg - claimed to be by painter "Edward Hysom" - but I was unable to verify. Alexpl (talk) 11:17, 6 February 2024 (UTC)
File:OIG1.IjIC.jpg and maybe File:OIG2 (1).png Alexpl (talk) 18:30, 8 February 2024 (UTC)
File:Robotica-industrial.jpg Alexpl (talk) 21:16, 8 February 2024 (UTC)
Various Putin impressions: File:Hang noodles on the ears 1.png - File:Hang noodles on the ears 2.png - File:Hang noodles on the ears 3.png Alexpl (talk) 20:09, 10 February 2024 (UTC)
- I'd actually prefer to use these images over other AI-generated images of Putin - it's obvious that they aren't real, so there's less risk of them being used in place of real images. Omphalographer (talk) 21:31, 10 February 2024 (UTC)
- I don't know why arbitrary AI images are posted here. They are dealt with: quickly categorized so that they can easily be excluded, or speedily deleted when they claim to be made by some painter but weren't.
- All of this looks like trying to make things appear problematic to build the impression that they are, when they actually aren't. As for the 'risk of being used in place of real images', moving files to "…(AI-generated)" file titles is one possibility, but that's yet another thing that isn't a problem at all; and if it were, the problem would not be the file being here but something else missing, such as 1) a way to view all file-uses of all files in "AI-generated images of real people" and 2) somebody using that, or some report showing it. Another thing needed, not just for AI images but also for other issues, are bots or scripts that do things like image reverse search to tag likely copyvios as well as AI images not yet categorized into "AI-generated media" (currently the number of AI images not yet in that category is very small). Prototyperspective (talk) 22:08, 10 February 2024 (UTC)
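The "way to view all file-uses" mentioned above already has an API building block: Commons runs the GlobalUsage extension, whose `prop=globalusage` module lists cross-wiki uses of a file. A minimal sketch follows; the endpoint and parameter names are assumptions from the public MediaWiki/GlobalUsage API documentation, and the actual HTTP request is left out:

```python
# Sketch: build an Action API query listing the wiki pages across all
# Wikimedia projects that use a given Commons file (prop=globalusage).
# Parameter names assumed from the GlobalUsage extension's API docs.

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def global_usage_params(file_title: str, limit: int = 50) -> dict:
    """Query parameters for the cross-wiki usage of one Commons file."""
    # Normalize bare filenames to the File: namespace.
    title = file_title if file_title.startswith("File:") else f"File:{file_title}"
    return {
        "action": "query",
        "prop": "globalusage",
        "titles": title,
        "gulimit": limit,
        "format": "json",
    }

# Usage (network call omitted), e.g.:
#   requests.get(COMMONS_API, params=global_usage_params("Example.jpg"))
```

A report of where AI-generated images of real people are actually in use could, under these assumptions, be built by feeding the members of that category through such a query.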
- I will never ever add that category to anything. You can do it - or program a bot to do it. (oh, wait...) Alexpl (talk) 11:02, 11 February 2024 (UTC)
- So what is your intention when you post these filenames to a talk page section titled "Latest uploaded AI-generated files"? From the lack of AI categorisation at the time you post them, are you asking other editors to help you judge whether or not they're AI-generated? Belbury (talk) 13:30, 11 February 2024 (UTC)
- To show that "we" are not up to the task of hosting AI works - unless the original uploaders kindly categorize them correctly. Which they don't do in many cases. Alexpl (talk) 19:47, 11 February 2024 (UTC)
File:Bibicrouline.jpg Alexpl (talk) 20:13, 10 February 2024 (UTC)
Possible alternative/additional text for this page
[edit]The following was my summary of recent discussion at Commons:Village pump (based loosely on an earlier draft by [[User:JopkeB]]).
1) Licensing: Commons hosts only images that are either public-domain or free-licensed in both the U.S. and their country of origin. We also prefer, when possible, for works that are in the public domain in those jurisdictions to also offer licenses that will allow reuse in countries that might allow these works to be copyrighted. As of the end of 2023, generative AI is still in its infancy, and there are quite likely to be legislation and court decisions over the next few years affecting the copyright status of its outputs.
As far as we can tell, the U.S. considers any creative contribution of a generative AI, whether that is an enhancement of an otherwise copyrightable work or an "original" work, to be in the public domain. That means that if a work by a generative AI is considered "original" then it is in the public domain in the U.S., and if it is considered "derivative" then the resulting work has the same copyright status as the underlying work.
However, some countries (notably the UK and China) are granting copyrights on AI-generated works. So far as we can tell, the copyright consistently belongs to the person who gave the prompt to the AI program, not to the people who developed the software.
The question of "country of origin" for AI-generated content can be a bit tricky. Unlike photographs, they are not "taken" somewhere in particular. Unlike content from a magazine or book, they have no clear first place of publication. We seem to be leaning toward saying that the country of origin is the country of residence of the person who prompted the AI, but that may be tricky: accounts are not required to state a country of residence; residence does not always coincide with citizenship; people travel; etc.
Consequently, for AI-generated works:
- a) Each file should carry a tag indicating that it is public domain in those countries that do not grant copyrights for AI-generated art.
- b) If its country of origin is one that grants copyrights for AI-generated art, then in addition to that tag, license requirements are the same as for any other copyrighted materials.
- c) If its country of origin is one that does not grant copyrights for AI-generated art, then we [require? request? I lean toward require] an additional license to cover use in countries that grant copyrights for AI-generated art.
For AI-enhanced works, the requirements are analogous. We should have a tag to indicate that the contribution of the AI is public domain in those countries that do not grant copyrights for AI-generated art, and that in those countries the copyright status is exactly the same as that of the underlying work. We would require/request the same additional licenses for any copyrightable contribution as we do for AI-generated work. In most cases, {{Retouched}} or another similar template should also be present.
2) Are even AI-generated "original" works derivative? There is much controversy over whether AI works are inherently all derivative, whether derived from one or a million examples, and whether the original works are known or not. Files can only be deleted for copyright infringement when there are tangible copyright concerns, such as being a derivative work of a specific work you can point to.
Most currently available AI datasets include stolen images, used in violation of their copyright or licensing terms. Commons should not encourage the production of unethically produced AI images by hosting them.
AI datasets may contain images of copyrighted subjects, such as buildings in non-FOP countries or advertisements. Can we say that if, for example, a building in France is protected by copyright, an AI-generated image of that building would be exactly as much of a copyright violation as a photo of that building? Seems to me to be the case.
3) Accuracy: There is zero guarantee that any AI-generated work is an accurate representation of anything in the real world. It cannot be relied upon for the accurate appearance of a particular person, a species, a place at a particular time, etc. This can be an issue even with works that are merely AI-enhanced: when AI removes a watermark or otherwise retouches a photo, that retouching always involves conjecture.
4) Scope: We only allow artworks when they have a specific historical or educational value. We do not allow personal artworks by non-notable creators that are out of scope; they are regularly deleted as F10 or at DR. In general, AI-generated works are subject to the same standard.
5) Negative effects: AI-generated images on Commons can have the deleterious effect of discouraging uploaders from contributing freely licensed real images of subjects when an AI-generated version exists, or of leading editors to choose a synthetic image of a subject over a real one. As always, we recommend that editors find, upload and use good images, and it is our general consensus that an AI-generated or AI-enhanced image is rarely better than any available image produced by more traditional means.
That said, there are good reasons to host certain classes of AI images on Commons. In decreasing order of strength of consensus:
- Images to illustrate facets of AI art production.
- clearly there would need to be a decision on how many images are allowed under this rubric, and what sort of images.
- Use of ethically-sourced AI to produce heraldic images that inherently involve artistic interpretation of specifications.
- Icons, placeholders, diagrams, illustrations of theoretical models, explanations of how things work or how to make something (for manuals, guides and handbooks), abstracted drawings of, for instance, tools and architectural elements, and other cases where we do not need historical accuracy.
- For enhancing/retouching images, improving resolution and source image quality, as long as the original image stays on Commons; the enhanced one therefore gets a different filename, and there should be a link to the original image in the image description. AI-based retouching should presumably be held to the same standards as other retouching.
- Because Commons generally defers to our sister projects for "in use" files, allow files to be uploaded on an "as-needed" basis to satisfy specific requirements from any and all other Wikimedia projects. Such files are in scope on this basis only as long as they are used on a sister project. We will allow some time (tentatively one week) after upload for the file to be used.
- The need to allow slack for files to be used on this basis will raise some difficulties. We need to allow for a certain amount of good-faith efforts to upload such images that turn out not all to be used, but at some point if a user floods Commons with such images and few or none are used this way, that needs to be subject to sanctions.
- Our usual allowance for a small number of personal images for use on user and talk pages should work more or less the same for AI-generated images as for any other images without copyright issues, as long as their nature is clearly labeled. E.g. an AI-generated image of yourself or an "avatar" for your user page; a small number of examples of AI-generated works where you were involved in the prompting. (In short, again "same standard as if the work were drawn by an ordinary user.")
- (Probably no consensus for this one, but mentioning it since JopkeB did; seems to me this would be covered by the "Scope" section above, "same standard as if the work were drawn by an ordinary user.") For illustrating how cultures and people could have looked like in the past.
While there is some disagreement as to "where the bar is set" for how many AI-generated images to allow on Commons, we are at least leaning toward all of the following as requirements for all AI-generated images that we host:
- All files must meet the normal conditions of Commons. Files must fall within Commons' scope, including notability, and any derivative works must use only public-domain and free-licensed materials. File pages must credit all sources.
- AI-generated or AI-enhanced images must be clearly recognizable as such:
- There should be a clearly visible, prominent note that it is an AI image, mentioning that it is not real. Perhaps add Template:Factual accuracy and/or another message to every file with an AI illustration, preferably via a template, and perhaps to every file uploaded through the Upload Wizard where the box indicating an AI image has been ticked
- Differentiation between real and generated images should also be done at the category level: categories containing images of real places and persons should not be flooded with fake images. AI-generated images should be in a (sub)category of Category:AI-generated images;
- Whether in countries that allow copyright on AI-generated images or not, these images should not be identified simply as "Own work". The AI contribution must always be explicitly acknowledged.
- There is at least a very strong preference (can we make it a rule?) that file pages for AI-generated or AI-enhanced images should indicate what software was used, and what prompt was given to that software. Some of us think that should be a requirement.
- With very rare exceptions—the only apparent one is to illustrate AI "hallucinations" as such—AI-generated or AI-enhanced images should contain no obviously wrong things, like extra fingers or an object that shouldn't be there; these should be fixed. Probably the best way to do this is to first upload the problematic AI-generated file, then overwrite that with the human-generated correction.
Jmabel ! talk 20:01, 31 December 2023 (UTC)
- Under "good reasons", I'd split point 3 ("Icons, placeholders, diagrams...") into two subcategories:
- 3A. Icons, placeholders, decorative illustrations, and other images where the precise contents of the image and its accuracy, especially in details, aren't relevant.
- 3B. Technical illustrations, explanations, blueprints, instructions, abstract drawings, and other images where details are expected to be meaningful and accurate.
- And I'm not convinced that 3B is a good reason to use AI-generated images. Most of the image generators I've worked with have been spectacularly poor at producing meaningful technical illustrations; they have a tendency to make up stylized nonsense to fill space, e.g. [2], [3], [4].
- As far as requirements are concerned, agreed on all points; including information about the tool used to generate images should be mandatory so that, in the event that an image generation model and its output are declared to be derivative works (which isn't out of the realm of possibility!), Wikimedia can identify and remove infringing images.
- Omphalographer (talk) 23:47, 31 December 2023 (UTC)
- The "good reasons" part should not be added to the page. We shouldn't try to define, and assume we can anticipate and know, all the potential constructive useful applications of AI art. It's like trying to specify which purposes images made or modified with the software Photoshop, or any of its tools like its cut tool, could be used for at a time when that software was still new. And I elaborated on lots of useful applications that aren't captured by these explicit reasons. For example, consider if we had no image of what the art style cubism looked like. Then an artistic AI image that illustrates that would be useful. And that is just one example; the same goes for any subject of human art culture of which we have no image, such as one of its subgenres or topics. This could be part of essays or be included somewhere (would make the policy too long though) for illustrating use-cases that are most likely to be within scope.
- Moreover, on "what software was used, and what prompt was given to that software": agree on the former, but regarding prompts, that should only be encouraged, not required. For example, if images from elsewhere are added, these are often not included, and as I explained before, the person may not have saved the prompt, or it could be more than ten prompts for one image.
- The "good reasons" part is already implied and delineated by other parts of the policy as well as other policies. Obviously, "Most currently available AI datasets include stolen images" is false and just makes the anti-AI-tools bias evident, since the images weren't stolen (have you still not looked up definitions of theft?) but used for training (or machine learning), similar to how humans can look at (and "learn" from) copyrighted artworks online (or in other public exhibitions) if they are accessible there. Prototyperspective (talk) 17:10, 1 January 2024 (UTC)
- Sneaking insufficiently documented stuff in via Flickr (again) and being extremely casual about laws? If the necessary context for an AI file can't be provided, that file should be off limits for Commons. Donated funds should not end up in some legal battle with stock picture companies among others. Just stop it. Alexpl (talk) 22:18, 1 January 2024 (UTC)
- Not sure what you refer to, and courts have confirmed this. On "necessary context for an AI file": it should be provided, but it's not the prompt. On "Donated funds should not end up in some legal battle": I'd say start by not throwing donated funds out the window and totally neglecting the software development, instead of coming up with hypothetical, unlikely horror scenarios… as if they'd sue the WMF for WMC hosting a few AI-generated files rather than some other entity (and fail). This is nothing but unfounded scaremongering. But I do agree that the software used to make images should get specified. Moreover, if you really cared about copyvios on WMC, there would long since be some bot doing at least some TinEye reverse searches and so on, to suggest likely copyvios for editors to review. Prototyperspective (talk) 22:58, 1 January 2024 (UTC)
- If images like the one in this example were uploaded by a regular user, they no doubt would be deleted as a derivative of the original. Yet you seem to think such images can't be derivatives if they were created by AI, simply because the AI in question was trained on millions of images, which is a totally ridiculous assertion. Same goes for you dismissing the chance of a lawsuit over something like that as unfounded scaremongering. There's a pretty good chance such lawsuits will happen in the future. The question is: do we want to needlessly risk the WMF getting involved in one simply to appease people like you, who for some bizarre reason think the precautionary principle doesn't apply to AI artwork? I'd say no, we don't.
- Using caution when it comes to this stuff is in no way unfounded scaremongering. It's the default position when there's any risk of a lawsuit, and it's ludicrous to act like there isn't one in this case, whatever the details are of how the software was created. I'll also point out that there are plenty of models trained on extremely small, specialized datasets, which you seem to be ignoring for some reason, and there's an extremely high risk of them creating derivatives in those cases. Admittedly it's smaller with larger ones, but we have no way of knowing how much the AI was trained, what exactly it was trained on, and specialized models are becoming more and more common as time passes.
- Regardless, there's no valid reason not to assume most currently available AI datasets include stolen images. At least the more popular ones are up front about the fact that they were trained on copyrighted works; there are none that aren't, as far as I know. Otherwise, there's no reason we can't make an exception specifically for models that were trained only on freely licensed images, but that doesn't mean we should just allow anything, including images from models that were clearly trained on copyrighted material. Not only does doing so put Commons at risk, it's also antithetical to the goals of the project. It's not like we can't loosen the restrictions to make exceptions for images created by certain models, or exclude others, as time goes on though. At the end of the day I don't think there is, or needs to be, a one-size-fits-all, works-in-every-situation way to do this. And the policy will probably be heavily updated as time goes on and the technology changes. --Adamant1 (talk) 05:15, 3 January 2024 (UTC)
- With regard to Midjourney, there are some recent allegations that their model was specifically designed to target specific artists and art styles. Some of the examples in that thread are fairly damning, e.g. [5]. Omphalographer (talk) 07:14, 3 January 2024 (UTC)
- Wow, that's without the person even mentioning Cyberpunk 2077 in the prompt too. It must have been nothing more than totally random chance based on the millions of images Midjourney was trained on though, lmao. --Adamant1 (talk) 10:38, 3 January 2024 (UTC)
- "Yet you seem to think such images can't be derivatives if they were created by AI" No, that is not true, and I never said or indicated that. That image would need to be deleted. The assertion is not ridiculous but e.g. backed by many sources such as this: "Stable Diffusion's initial training was on low-resolution 256×256 images from LAION-2B-EN, a set of 2.3 billion English-captioned images from LAION-5B's full collection of 5.85 billion image-text pairs, as well as LAION-High-Resolution, another subset of LAION-5B with 170 million images greater than 1024×1024 resolution (downsampled to 512×512)". The lawsuits would affect Midjourney or other large entities, and as said, we already delete derivatives. If you care about the risk of copyvios being hosted, then support bots that scan images via TinEye or other image reverse searches. On "stolen images": ignoring what I said about that above. On "clearly trained on copyrighted material": it's not even feasible otherwise, and you are also allowed to look at and learn from copyrighted media if they are put publicly online, as said but ignored earlier. Prototyperspective (talk) 12:42, 3 January 2024 (UTC)
- "Ignoring what I said about that above." I didn't "ignore" what you said about bots. It's just not relevant to the discussion about what text should be added to the page. Using bots to scan images and editing the essay aren't mutually exclusive, obviously. Anyway, there's no reason you can't start a separate thread to discuss using bots to check for derivatives if you think it's important, but your general attitude around the subject has been that AI artwork is original due to the number of images models are sometimes trained on. Otherwise I guess you agree that "Most currently available AI datasets include stolen images," or at least can re-create them, but that clearly wasn't how you were acting. --Adamant1 (talk) 20:32, 3 January 2024 (UTC)
- Briefly, it can be original or unoriginal in the sense of being a derivative work: it depends on what the image depicts. AI models aiming to deliver high-quality results currently seem to have to rely on algorithmic and selection proxies for quality (such as increasing the weighting of popular professional art vs. DeviantArt furry drawings); the earlier links are interesting, despite that these things happen rarely, are spottable via image categories/description or image reverse search, seem to only occur for very well-known images with certain texts/prompts, and depend heavily on the prompt used. Prototyperspective (talk) 20:50, 3 January 2024 (UTC)
- "the earlier links are interesting despite that these things happening rarely" It's probably a lot more common than that, since you're talking about some models generating a near-infinite number of images per second, or at least enough of a magnitude that it would be impossible to come up with original works of art that quickly and in those numbers. We don't really have any way of knowing the exact number, though. Nor is it relevant either. The important thing is whether the amount of derivatives generated by AI models is enough to justify including something about it in the article; be that a sentence like "Most currently available AI datasets include stolen images" or something similar, I don't really care. But it's clearly a problem and, at least IMO, one worth mentioning, regardless of how it's ultimately dealt with. People should still be cautious not to assume AI artwork is original even if bots can scan websites for similar images or whatever. Again, they aren't mutually exclusive, and it's not like we don't already have policies reminding people that certain things might be or are copyrighted. So I don't really see what your issue with adding that part of Jmabel's text into the essay is. --Adamant1 (talk) 22:24, 3 January 2024 (UTC)
- With regard to Midjourney, there are some recent allegations that their model was specifically designed to target specific artists and art styles. Some of the examples in that thread are fairly damning, e.g. [5]. Omphalographer (talk) 07:14, 3 January 2024 (UTC)
- Regardless, there's no valid reason not to assume most currently available AI datasets include stolen images. At least the more popular ones are up front about the fact that they were trained on copyrighted works. There are none that aren't as far as I know. Otherwise there's no reason we can't make an exception specifically for models that were trained only on freely licensed images, but that doesn't mean we should just allow for anything, including images from models that were clearly trained on copyrighted material. Not only does doing so put Commons at risk, it's also antithetical to the goals of the project. It's not like we can't loosen the restrictions to have exceptions for images created by certain models or exclude others as time goes on though. At the end of the day I don't think there is, or needs to be, a one-size-fits-all, works-in-every-situation way to do this. And the policy will probably be heavily updated as time goes on and the technology changes. --Adamant1 (talk) 05:15, 3 January 2024 (UTC)
- @Jmabel: I've added a recommendation to the licensing section per your suggestion. Nosferattus (talk) 00:21, 6 January 2024 (UTC)
Relevant deletion discussion
[edit]You are invited to join the discussion at Commons:Deletion requests/File:Bing AI Jeff Koons Niagara take 3.jpeg. {{u|Sdkb}} talk 15:41, 3 January 2024 (UTC)
Need a template for AI-modified images
[edit]From recent discussions (here and at the Village Pump), it sounds like we need a specific template for AI-modified images (separate from {{AI upscaled}}). Belbury, is this something you could help with? Nosferattus (talk) 00:29, 6 January 2024 (UTC)
- I was actually saying to another user recently (regarding a photo of an ancient metal statue where the resolution was unchanged but the face was given a subtle flesh tone, its nose filled out and its eyes "corrected") that {{AI upscaled}} should also apply to any image where an AI has added or altered details. We don't, I think, really care whether the pixel size of the image was increased.
- Maybe we should just rename and rephrase the template to {{AI modified}}? Belbury (talk) 18:41, 6 January 2024 (UTC)
- That would be fine with me. Nosferattus (talk) 02:12, 8 January 2024 (UTC)
- Template:AI modified is a good idea, but it may not be sufficient for certain cases:
- When a detail is erased whether by AI or by traditional cloning, a template should identify the modification.
- Other image manipulation such as composites, stacking, whether by AI or not should be identified.
- Use of generative fill, whether AI or not should be indicated
- Pierre5018 (talk) 02:47, 26 August 2024 (UTC)
- Don't know why this is only about images but not videos. For example, see the new Category:Colorized videos. Prototyperspective (talk) 12:09, 26 August 2024 (UTC)
Should something like the following text be added to the “are AI images in scope?” section
[edit]“AI images should not be used to illustrate topics which already have quality non-AI images; however AI images should not other[wise] be treated differently from other files in regards to scope. Something is not in or out of scope because an AI made it. In-scope AI-generated files should not be nominated for deletion simply because they could hypothetically be done better by a human, or hypothetically generated on demand.”
- I thought of adding this because I’m seeing a lot of deletion arguments that are basically “AI is cheap because anyone can make it anytime” or “AI is inherently bad” and an unwritten attitude that “AI images don’t need to be reviewed individually when nominating them for deletion because of that”. I think we should make it clear that AI having unique problems doesn’t mean it should just be indiscriminately targeted on the basis of being AI.
Dronebogus (talk) 11:42, 8 January 2024 (UTC)
- Would Support it but due to the large anti-AI bias here I don't know if it has a good chance, since clearing up such issues is usually not done on policy/essay pages like that. I also note that people should not claim things are so super easy based on what they think rather than what they know and can prove. Additionally we don't have any such discrimination for other tools in people's repertoires such as Photoshop. Prototyperspective (talk) 12:08, 8 January 2024 (UTC)
- I oppose this. This lets through entirely fictional representations of (for example) people of whom we have no image, just because they were generated by AI. We would never accept the same entirely fictional representation if it were drawn by a human. - Jmabel ! talk 19:55, 8 January 2024 (UTC)
- No, nothing gets through "just because they were generated by AI". It's just that nothing gets deleted more or less basically just based on that. For images of people, what would be the problem of having an image made using AI like these (accurate + high quality) of a person of whom we have no image? It doesn't mean Wikipedia needs to use it. In any case that is just a very particular example and the proposal would not imply that they are fine. I don't care much about this proposal since I don't think such things would need to be explicit or that it being explicit would necessarily help much. Also there are lots of drawings of historical figures so that part is also clearly false. Prototyperspective (talk) 20:05, 8 January 2024 (UTC)
- It's one thing to have (for example) Rembrandt's painting of Moses, or a 15th-century wood cut showing an unknown author's conception of François Villon, but if we didn't have those, a random user's illustration of either would presumably remain out of scope. I don't see why an AI-generated illustration would be any more acceptable. - Jmabel ! talk 21:03, 8 January 2024 (UTC)
- I would say the two examples you've presented here are very high quality, and I wouldn't at all hold it against them that they were produced by AI. Equivalents by a user would probably be considered in scope as well. (This presumes no copyright issues about them being possibly derivative.) But only a tiny fraction of the AI-generated content I see here is anywhere near this level. Plus, we know that these genuinely resemble the people in question. - Jmabel ! talk 21:07, 8 January 2024 (UTC)
- However, I do question the claim of "own work" on those two images. - Jmabel ! talk 21:08, 8 January 2024 (UTC)
- They aren't any more acceptable but not less acceptable and if the image was CCBY it could be there; maybe you just never took a look at the visual arts cats so I don't know why you have these false assumptions; the reason why there are barely any modern high-quality ones is that artists nearly never license them under CCBY. In any case you seem to refute yourself in regards to those two images and the images are clearly labeled as made via Midjourney by the WMC user who uploaded them whom I thank for their constructive contributions. But yes, the AI content here is sadly largely significantly below that level of quality. In any case, nothing is "let[] through" just because it was made via AI. Prototyperspective (talk) 21:20, 8 January 2024 (UTC)
- @Prototyperspective I don't find either of those images to be "accurate and high quality". Look at the image of Gandhi. The nose is wrong. The body is too small for the head. The ear is wrong. The splashes of colour (including the one on the head) may add visual interest but they detract from the main point which is what Gandhi looked like. The image of Hawking doesn't even look like him. I would not use either of these pictures. I understand your interest in AI generated images, but let's not confuse a good-looking image with an accurate depiction. Counterfeit Purses (talk) 04:32, 9 January 2024 (UTC)
- We have human-made drawings of people with no free image, i.e. Syd Barrett, that are usable and in use. Just because many AI portraits are inaccurate doesn't mean an accurate one doesn't potentially exist. Dronebogus (talk) 08:07, 9 January 2024 (UTC)
- Many human-made artworks are also somewhat inaccurate.
- Please look at the category instead of the two examples; I'm sure there are multiple that are of sufficient quality and accuracy even if one could argue that these two aren't. In any case, I don't know why we discuss this specific potential application; there are many other potential ways one could use AI images – it's just that people should pay attention to the accuracy of the image, just like for any other artwork, and probably should clarify wherever it's used that it's AI-made. Note that the use doesn't have to be Wikipedia but it could also be useful there in some occasions. AI software is a new tool used by humans; we don't discriminate against other tools, such as Photoshop, merely for the tool itself being used. Is there a similar discrimination against photos modified with Adobe Lightroom or are people simply deciding whether or not they use the image anywhere (Wikipedia or elsewhere) on a case-by-case basis? Prototyperspective (talk) 11:26, 9 January 2024 (UTC)
- Oppose. While I'm sure this represents your personal views on AI-generated content, it does not represent the views of the Commons community at large. Omphalographer (talk) 04:29, 9 January 2024 (UTC)
- But are the “views of Commons at large”, which seem to be based on subjective distaste for AI images, right simply by popularity, especially when they frequently conflict with hard-and-fast principles like COM:SCOPE or COM:INUSE? Dronebogus (talk) 08:09, 9 January 2024 (UTC)
@Prototyperspective: you say you are not wanting to privilege AI images, but you write, "In-scope AI-generated files should not be nominated for deletion simply because they could hypothetically be done better by a human, or hypothetically generated on demand." This begs the question. Obviously, if it's in scope and doesn't have copyright problems we keep it, but often a DR is how we determine whether it is in scope. - Jmabel ! talk 18:36, 9 January 2024 (UTC)
- No, I didn't write that. And I don't see how you come to this conclusion – that's not at all what this implies. It just says that it shouldn't be nominated merely "because they could hypothetically be done better by a human, or hypothetically generated on demand" and I don't know how to explain or rephrase it clearer than it already is…it doesn't mean AI images can't or should never be nominated for deletion. If you see any image out of the 100 million and don't consider it within scope, you can nominate it (hopefully including an explanation) – nothing about that is proposed to be changed here. --Prototyperspective (talk) 21:43, 9 January 2024 (UTC)
- @Prototyperspective: I'm sorry: [[User:Dronebogus]] wrote it, you concurred. Also: I just now made a minor edit there, for something that was not even grammatical.
- I suppose it could be understood your way as well, but I'm still concerned with the question-begging. I think it comes down to what we mean by "other files" in the statement that they should not "be treated differently from other files in regards to scope." If that means art by non-notable users, yes, I'd agree to that. If it means pretty much anything else I can think of, I would not. - Jmabel ! talk 02:20, 10 January 2024 (UTC)
- Oppose - Our scope rules should be delineated at COM:SCOPE not here. The less we say about scope here the better, other than simply reminding people that AI-generated media follow the same scope rules as all other media on Commons. There should be no rules to either favor or disfavor AI-generated media when it comes to scope. Nosferattus (talk) 20:10, 9 January 2024 (UTC)
- Oppose I see little reason in binding volunteer personnel to judging the "scope" of every submitted AI work. And when the point is reached where we drown in AI sludge and drastic measures have to be taken, the "scope" won't matter anyway. Alexpl (talk) 11:12, 18 January 2024 (UTC)
An alternative
[edit]I just thought of this, based on a remark I just made above: how about Except for certain enumerated exceptions [which we will have to spell out], AI-generated art will be held to the same standards as non-photographic original work by a typical, non-notable user. The exceptions would include at least some of the items listed above in the section #Possible alternative/additional text for this page, in the portion beginning, "That said, there are good reasons to host certain classes of AI images on Commons". And, yes, it may be that this belongs more on COM:SCOPE than here. - Jmabel ! talk 02:27, 10 January 2024 (UTC)
- That’s basically what it already says. I’m not sure what “enumerated exceptions” we need to spell out. “In-scope works”? Now we’re getting in a rut! Dronebogus (talk) 05:30, 10 January 2024 (UTC)
- @Dronebogus: On the enumerated exceptions, most obviously images to illustrate facets of AI art production. We'd presumably like a certain number of examples of the output of any notable generative AI. But see the list I referred to in my original remark, which came out of a discussion involving a dozen or so users on the Village pump.
- “In-scope works”: I don't see that phrase anywhere in what I wrote. What am I missing? - Jmabel ! talk 16:13, 12 January 2024 (UTC)
- I think the "non-photographic original work" is a good point to make. I'd certainly support that.--Prosfilaes (talk) 17:19, 12 January 2024 (UTC)
- Yeah, and that seems like it's a valid point beyond AI-generated media as well. Broadly speaking, Commons holds non-photographic images to higher standards than photos. We generally pare diagrams, charts, logos, flags, and other non-photographic content down to one or perhaps a few definitive, high-quality images, whereas photos of potentially useful subjects are only culled if they're unusually bad. (And that's good! Mediocre non-photographic images can be improved through collaborative editing; mediocre photos are improved by taking better photos.) Omphalographer (talk) 17:54, 12 January 2024 (UTC)
- That’s a good point. But I still think people are being unnecessarily harsh on AI to the point of steamrolling over COM:INUSE in massive indiscriminate culls. I’m not trying to unfairly favor AI images, just prevent this sort of behavior by marking it as explicitly disruptive. Dronebogus (talk) 19:04, 12 January 2024 (UTC)
Wikimedia Commons AI: a new Wikimedia project
[edit]- The discussion takes place at the discussion page of the proposal on Meta-Wiki.
I may have thought of a solution. See my proposal for a new Wikimedia sister project here. I would love to discuss this proposal with you on the discussion page of the proposal! Kind regards, S. Perquin (talk) 22:49, 19 January 2024 (UTC)
Declaration of AI prompts
[edit]When an image is determined to be AI-generated, but the file descriptions, SDC, etc., have nothing indicating it is, should it be a requirement for the uploader to provide the methodology (AI models used, prompts given, etc.)?
And if the uploader doesn't provide the info, what should be done? RZuo (talk) 14:24, 11 February 2024 (UTC)
- Are there any ways to confirm what prompts have been used in the generation of AI images? EdoAug (talk) 14:27, 11 February 2024 (UTC)
- No, not really. Moreover, a prompt isn't even sufficient to reproduce an output in a lot of newer hosted AI image generators like Midjourney, as generating an image is an interactive process, and the underlying model and software are modified frequently. With this in mind, I'd consider the inclusion of a textual prompt to be of limited value.
- Including information about what software was used is much more useful, and I'd be on board with requiring that. Ditto for uploading the original image when using tools which transform images. Omphalographer (talk) 23:42, 11 February 2024 (UTC)
- @Omphalographer: I'm trying to understand: if, for example, someone uploads an image as an AI representation of the Temple of Solomon, would it not be important whether "Temple of Solomon" was somewhere in the prompt? Or are you saying that it would be so likely out of scope on some other basis that it doesn't matter? Or what? - — Preceding unsigned comment added by Jmabel (talk • contribs) 03:09, 12 February 2024 (UTC)
- Why would it matter whether "Temple of Solomon" was in the prompt? How the software got there is only relevant if the image is being used as an example of AI; if it's being used as an example of the Temple of Solomon, then the proof is in the pudding; does it accurately represent what it's intended to represent? Which is not really the "Temple of Solomon"; it's one conception of what that might have looked like, since we have no photographs or remains or drawings from life.--Prosfilaes (talk) 21:18, 12 February 2024 (UTC)
- We have, essentially, no way to know whether a particular representation of the Temple of Solomon is accurate. It might be in scope to know what (for example) Stable Diffusion does to represent the Temple, but if you prompted Stable Diffusion, for example, "draw me the Mormon Temple in Salt Lake City" and then posted it as a representation of the Temple of Solomon, that is almost certainly not acceptable. - Jmabel ! talk 01:29, 13 February 2024 (UTC)
- Definitely not acceptable. More like "ArtStation" material. Alexpl (talk) 15:10, 13 February 2024 (UTC)
- Yes, we have no way to know whether a particular representation of the Temple of Solomon is accurate. I don't see why we should treat an AI version any differently from any other version; if it is useful as an image of the Temple of Solomon, we keep it. Asking for a prompt is a bit like asking for the palette used for painting an illustration of a dinosaur; everything you really need to know is in the final version.--Prosfilaes (talk) 15:31, 13 February 2024 (UTC)
- Asking for a prompt is like asking the painter what they think they just painted. If they say ancient temple in Phoenician style with two bronze pillars supporting... at great length then we'd use that context to assess whether the image was of any educational use to anyone; if they said old stone temple breathtaking painterly style or Salt Lake City temple 10,000 BC we'd know that we could rename it or throw it out straight away.
- File:Urânia Vanério (Dall-E).png (now renamed) was once added to that person's biography article on enwiki, the image uploader saying that it was AI-generated but not giving the prompt. So I had to waste a little time working out what I was actually looking at, and it turned out when asked that the uploader had just prompted an AI to draw 10 years old 19th century Brazilian-Portuguese girl. Belbury (talk) 15:54, 13 February 2024 (UTC)
- Asking for a prompt is like asking the painter what they think they just painted That is something only people with very little practical experience of using AI art tools would say and it is false in most or many cases. File:Urânia Vanério (Dall-E).png That case does not seem to be good assuming it's not based on an artwork of the specific person. Thus it should probably be removed from where it's used if that is the case. Prototyperspective (talk) 16:40, 13 February 2024 (UTC)
- I believe I've got a good handle on how AI art tools function, thanks.
- The uploader of File:Urânia Vanério (Dall-E).png suggested in the filename that they'd intentionally created an image of Urânia Vanério, but they knew that the image they created was produced from entering the generic prompt 10 years old 19th century Brazilian-Portuguese girl - the uploader knew they hadn't really created a picture of Vanério specifically, only someone generally of that century. That's very useful context for us when deciding if we need to rename or remove an image, so from that perspective it seems worth asking for clear prompts at the point of upload. Belbury (talk) 17:29, 13 February 2024 (UTC)
- Yes in that case it was good to ask/require the prompt/the info on how they made the image. Very much agree with that, thanks for getting the user to disclose the prompt, and as said the image's file-use seems inappropriate. It can be very useful context and for example showing an optional field for prompt(s) used for the image if an AI tool was used could be a good thing. Prototyperspective (talk) 18:09, 13 February 2024 (UTC)
- I think it would be good if the user was required to confirm that it's AI-generated and should also specify which tool was used, such as "Stable Diffusion", or a link to the web interface used.
- Btw, prompts are often no longer known by the prompter (and not stored by the website) and there can be 20 prompts used for just one image. While I used to think attaching prompts is good, I now think not attaching them is better, since when they are attached, people who have little experience with AI tools misinterpret things and object to the image based on flawed understandings of how these tools work in more elaborate or more intentional creative processes. Prototyperspective (talk) 15:00, 11 February 2024 (UTC)
- When prompts are attached, the file is at least potentially useful as an illustration of what that [evolving] software did when given such a prompt at such a date. Otherwise, it's a big hunk of nothing. - Jmabel ! talk 20:08, 11 February 2024 (UTC)
- It's not anything 'by default'. An image made or modified with Photoshop is not valuable by default. It depends on its contents / what it shows. Really simple. Prototyperspective (talk) 20:31, 11 February 2024 (UTC)
- Not sure how Photoshop got into this discussion. A photo modified with Photoshop is a modified version of a particular photo. Without knowing the prompt, an image made by AI is nothing more than "an image made by AI". It is not clearly an image "of" anything. - Jmabel ! talk 23:13, 11 February 2024 (UTC)
- It's called "Whataboutism". A Russian thing. Alexpl (talk) 21:34, 18 February 2024 (UTC)
- No, it's exceptionalism. AI tools are not special. No actual point(s) here, just derailing. Prototyperspective (talk) 21:52, 18 February 2024 (UTC)
- It's a good idea to read project pages before commenting on them on their associated talk page.
- Commons:AI-generated_media#Description says (and has already said for a good while):
Whenever you upload an AI-generated image (or other media file), you are expected to document the prompt used to generate the media in the file description, and identify the software that generated the media.
HaeB (talk) 01:08, 1 March 2024 (UTC)
- so what happens when they don't do that?
- it's a good idea to read before commenting on my original post:
- "should it be a requirement for the uploader to provide the methodology (ai models used, prompts given, etc.)? and if the uploader doesnt provide the info, what should be done?" RZuo (talk) 13:56, 1 March 2024 (UTC)
- It should be a requirement for them to at least say what model or models were used. The prompts aren't as important since generation is usually random anyway, but providing them should still be a recommendation, if not something that requires action like not providing the model (at least IMO) should. Although I'm not going to go as far as saying making it a requirement to provide the model or models used means the images should be deleted if they aren't provided. At least not as a first line of action. There's no reason we can't assume good faith and ask the uploader. Then proceed with nominating the image or images for deletion if there's a reason to. I imagine there might be some edge cases where the model doesn't have any meaning whatsoever though. So I hesitate to say I think it should be a hard and fast rule. --Adamant1 (talk) 14:04, 1 March 2024 (UTC)
- there's one more problem with failure to provide the prompts.
- many websites host many ai generated images for users to search and use, e.g. https://openart.ai/search/xi%20jinping?method=prompt .
- when the uploader is unable to provide the prompts, it's unclear whether the uploader is the true author who initiated the prompt and generated the file, or s/he just downloaded a file generated by someone else from these websites.
- and since some countries rule that copyright can exist and belong to the person who wrote the prompt, AI-generated Commons files with unclear authorship should be deleted. RZuo (talk) 10:54, 14 March 2024 (UTC)
- There's a few problems with that: 1) the prompt could be made up or it could be only a part of the full prompt (not the full text or only one of multiple prompts used for the creation of one image being provided), 2) the image could also be taken from or first uploaded to such a site by either the same user or another user (so it would still be unclear), and 3) prompts can be long and not entirely relevant to what the image is about so without any changes that would lead to such images showing up in more unrelated search results.
- In addition, there's also 4) issues arising from readers who don't understand how txt2img prompting is used in practice (such as using examples or proxy-terms or otherwise hacking toward the desired image), such as misinterpreting images based on what prompt has been used (e.g. just because this or that is somewhere in a long prompt doesn't mean the category for this or that is due). Also just FYI, there could also be 5) some txt2img platforms adding prompts for selectable "styles" where the prompt text is not even known by the prompter, so it would not be the full prompt for these sites. Such platforms where the prompt is shown, including playgroundai and the one you mentioned, could be used to collaboratively develop art and illustrations for the public domain as well as for learning from how others achieved good results. More info about UK genAI copyright would be useful; the source of images should always be specified.
- Prototyperspective (talk) 11:45, 14 March 2024 (UTC)
- the prompt could be....only one of multiple prompts used for the creation of one image being provided @Prototyperspective: Can you provide an example of where you'd use multiple prompts to create a single image? Like what software has multiple prompts outside of a different section for negative keywords (which at least IMO isn't really a prompt to begin with)? --Adamant1 (talk) 16:10, 14 March 2024 (UTC)
- I did so roughly three times already but since that was elsewhere, a workflow could be like this: create an image of a person, crop that person out and generate another image with another prompt, paste in the person and do the same for two objects like a vehicle and a building, then use the resulting image and use img2img to alter and improve the image to change the style, then use yet another prompt to fix imperfections like misgenerated fingers, then for the final image use the result as an input with some proxy-terms for high-quality images but change the output resolution so it's scaled up to a high-resolution final image. At least two examples of mine where I included multiple prompts were deleted for no good reason, such as the DR being a keep outcome but the admin deleting the particular image anyway. Not everyone briefly types "Funny building, matte painting" clicks generate three times and is done with it. Prototyperspective (talk) 16:24, 14 March 2024 (UTC)
- I mean, I guess that could happen, but then so what? Just include the last prompt. At least according to you, AI art generators are no different than Photoshop, and it's not like we include in a description the title of every single piece of software that touched an image along the pipeline to it being uploaded. So I don't see why it would be any different for AI artwork. Although I think your example is extremely unlikely, if not just totally false hyperbole, to begin with. --Adamant1 (talk) 17:26, 14 March 2024 (UTC)
- I'm not an advanced prompter by any means and I have used workflows coming close to that, so it's not exaggerated. I think the quality of AI images as well as the sophistication of txt2img workflows is going to increase over time. I'm not opposed to asking for prompts (in specific) – I was just clarifying complications. And I've seen many cases where the last prompt was as simple as one word on such prompt-sharing websites. In the example workflow the last prompt could be the least important. The info on potential and/or identified complications could be more relevant in the other discussion about the Upload Wizard. Prompts are usually good to include and useful, but they don't somehow verify that the uploader is the prompter, who should specify that if they tasked the AI software. Prototyperspective (talk) 17:37, 14 March 2024 (UTC)
- I don't disagree. It's not like someone can't just copy a prompt from wherever they downloaded the image. I can't really think of any other reason they would be useful though, let alone why they would or should be required. At least other than the novelty factor of someone being able to roughly recreate the image if they want to, but then I feel like it's not the purpose of Commons to be a prompt repository either. Especially if that's the main or only reason to include them. --Adamant1 (talk) 17:48, 14 March 2024 (UTC)
- "I can't really think of any other reason they would be useful though." Here are a few:
- AI is constantly evolving. It will probably be of interest five years from now to know what a particular AI would have done in March 2024, given a particular prompt.
- If the image is to be used to illustrate anything other than what a particular AI program does -- that is, if it supposedly represents anything past or present in the real world -- presumably we should be interested in whether the prompt actually asked for the thing supposedly represented. For example, if someone is claiming that an image is supposed to represent George Washington as a child, presumably the result of a prompt asking for just that has more validity than just a prompt for "18th-century American boy." Or maybe not, but if not then no AI representation of this subject has any validity at all. - Jmabel ! talk 21:26, 14 March 2024 (UTC)
- @Jmabel: To your first point, I'd say prompts are a reflection of what a particular user of the software thinks they need to put into a textbox to generate a certain image. That's about it, though, since images are never generated consistently and a lot of keywords just get ignored during the generation process. Although I would probably support a side project on Wikibooks for storing prompts for their novelty and historical usefulness, but then the point of Commons is to be a media repository, not an encyclopedia, and I don't think that's served by storing a bunch of prompts that were probably ignored by the AI and aren't going to generate the same image anyway. Maybe if there was a 1/1 recreation of an image based on a particular prompt, but no one is arguing that is how they work.
- On the second thing, I think your last sentence pretty much summarizes what my argument there would be. An AI representation of George Washington as a child has no validity to begin with, largely because of the reasons I've already stated in the first paragraph as to why prompts don't matter in the first place. An AI representation of George Washington as a child doesn't somehow magically become legitimate just because it includes a prompt that will probably generate an image of a completely different child if someone were to use it. --Adamant1 (talk) 08:49, 15 March 2024 (UTC)
- Amendment to 1.: they would recreate the image 1:1 if the parameters and seed were the same with the same model. The key there is the seed, so it would at least be very similar if the same seed was used. A prompt that works well for one seed is not unlikely to also work well with another one, but it could look very different. playgroundai and openart.ai include full prompts, seeds, and parameters. 2. For things like George Washington as a child, the AI would need to semantically understand which training data shows the person (or an object or concept), as done at low quality here using DreamBooth, because models currently don't get semantics. In the file description there is more than a prompt. Prototyperspective (talk) 11:24, 15 March 2024 (UTC)
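The reproducibility point can be illustrated without a real diffusion model: sampling is driven by a pseudo-random noise source, so fixing the seed (along with the same model, prompt, and parameters) fixes the output, while a different seed gives a different result. A toy sketch, with `fake_generate` as a hypothetical stand-in for an actual sampler, not any real generator's API:

```python
import random

def fake_generate(prompt: str, seed: int, steps: int = 4) -> list[float]:
    """Toy stand-in for a diffusion sampler: the 'image' (a few floats)
    is fully determined by the prompt text, seed, and step count."""
    # String seeding of random.Random is deterministic across runs.
    rng = random.Random(f"{prompt}|{seed}|{steps}")
    return [round(rng.random(), 6) for _ in range(steps)]

a = fake_generate("funny building, matte painting", seed=42)
b = fake_generate("funny building, matte painting", seed=42)
c = fake_generate("funny building, matte painting", seed=43)

assert a == b  # same prompt + seed + parameters -> identical output
assert a != c  # a different seed -> a different result
```

This is why prompt-sharing sites publish seed and parameters alongside the prompt: the prompt alone underdetermines the image.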
- There are a few problems with that: 1) the prompt could be made up, or it could be only a part of the full prompt (not the full text, or only one of multiple prompts used for the creation of one image being provided), 2) the image could also be taken from or first uploaded to such a site by either the same user or another user (so it would still be unclear), and 3) prompts can be long and not entirely relevant to what the image is about, so without any changes that would lead to such images showing up in more unrelated search results.
- The most obvious example - to me, at least - would be something like Midjourney. There's an initial textual prompt, but it's often followed by a sequence of operations like selecting individual variations and/or expanding the canvas which alter the image in ways which go beyond the prompt. And even then I'm not sure how repeatable of a process it is. Omphalographer (talk) 17:49, 14 March 2024 (UTC)
- I guess so. Although I think you're quickly reaching a point of diminishing returns when it comes to ever recreating the same, or a similar, image at that. --Adamant1 (talk) 17:54, 14 March 2024 (UTC)
AI upscaling of historical paintings
[edit]What is Commons' stance on the AI upscaling of historical paintings?
I was surprised to see that File:Bernardetto de Medici - Giorgio Vasari.jpg, generated by taking a low resolution photograph of a painting and asking an AI to enlarge it and increase the saturation, actually survived a specific deletion request with "no valid reasons for deletion" last year.
For an upscaled low-quality photograph there's some argument to be made that the AI might have a better idea of what a person might have looked like than the grainy photograph does, but unless an AI has been scrupulously trained on the work of the specific artist, scaling up a painting seems like it will always be misleading and out of educational COM:SCOPE. Belbury (talk) 09:59, 27 February 2024 (UTC)
- The claim in the license area that "this is a faithful photographic reproduction of a two-dimensional, public domain work of art" seems false to me. Not that there is a licensing problem, but this is not a "faithful photographic reproduction," it's a derivative work. - Jmabel ! talk 19:11, 27 February 2024 (UTC)
- Does that matter? That claim is important because a process that's only a "faithful photographic reproduction" is taken not to have created some new protectable copyright, thus we can host the overall image if the original was otherwise clear. If some additional process does something more, then that might attract a copyright. But not if it's an AI process (as we're using an assumption that AIs can't create copyrightable materials), or if the upscaling was done by some "scanner" (sic) here who is happy to license any contribution that their own efforts have made. Andy Dingley (talk) 01:42, 1 March 2024 (UTC)
- It doesn't matter in licensing terms, but it does in documentary terms. It is a made-up extrapolation, and should not be reused under the misimpression that it is a faithful reproduction. - Jmabel ! talk 05:25, 1 March 2024 (UTC)
- I'd be inclined to agree. Using AI upscaling on a low-resolution photograph of a painting will inevitably result in an image which differs from the original work. No matter how good an upscaler is, it'll never know the difference between a detail which was blurry because it's working from a low resolution photo and a detail which was blurry because the original painting was indistinct (for example). Worse, a lot of AI upscalers are primarily trained on photographs, and will frequently add details in an inappropriate style, like inferring photorealistic, smooth-shaded faces in engravings where those details should have been rendered with patterns of lines.
- As far as that file is concerned, I'd be inclined to renominate it with a more clear rationale, focusing on the fact that it's not an accurate representation of the original piece. Omphalographer (talk) 19:54, 27 February 2024 (UTC)
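The core objection in this thread, that an upscaler invents detail rather than recovering it, follows from downscaling being lossy. A toy numeric sketch (the pixel values are made up for illustration): once neighbouring pixels have been averaged, several different originals map to the same low-resolution image, so no upscaler, AI or otherwise, can tell which original it came from.

```python
def downscale(pixels):
    """Halve resolution by averaging adjacent pairs (lossy)."""
    return [(a + b) / 2 for a, b in zip(pixels[0::2], pixels[1::2])]

def naive_upscale(pixels):
    """Double resolution by repeating each value -- one guess among many."""
    return [p for p in pixels for _ in range(2)]

original_a = [10, 20, 30, 50]  # a sharp edge in the painting
original_b = [12, 18, 35, 45]  # a smoother gradient

low_a = downscale(original_a)
low_b = downscale(original_b)

# Two different originals collapse to the same low-res image...
assert low_a == low_b == [15.0, 40.0]

# ...so any reconstruction is a guess, not a recovery:
restored = naive_upscale(low_a)
assert restored != original_a and restored != original_b
```

An AI upscaler makes a statistically plausible guess instead of a naive one, but it is still a guess; for a historical painting, plausible-looking invented detail is exactly the documentary problem described above.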
How do you know an image is not a person's attempt to imitate ai-generated image?
[edit]How do you know an image is certainly AI-generated? What if a person does an artwork manually, but in a way that imitates "AI style" and gives the audience the impression that it's AI-generated?
File:Global Celebration Of New Years.jpg is what prompts me to ask these questions, and #Declaration of AI prompts. It fails because in the centre foreground it imitates rather than uses actual CJK characters, so it looks AI-generated, but it could equally likely be an artwork by a person without knowledge of CJK characters. RZuo (talk) 14:04, 1 March 2024 (UTC)
- I think the question usually rather is whether it's AI-generated or not, where manual artwork misidentified as AI-generated is less of an issue so far. It's basically the same question and there's Category:Unconfirmed likely AI-generated images to check for these. Note that often it can be that only a part of a manual image or a step of the otherwise manual process is AI-supported or AI-generated. Prototyperspective (talk) 14:26, 1 March 2024 (UTC)
- Simple answer: You can't know. From this point on, everything on Commons without proper provenance is basically some sort of fraud. The companies which offer such services should really provide provenance for the output their AI creates. Alexpl (talk) 23:00, 2 March 2024 (UTC)
More food for thought: Youtube's AI guidelines
[edit]Youtube now requires disclosure for videos containing certain categories of potentially deceptive content, with some additional examples at: https://support.google.com/youtube/answer/14328491. Some of the categories they've defined for material requiring disclosure may be worth considering with regard to Commons policies on AI. Omphalographer (talk) 18:38, 18 March 2024 (UTC)
- We do. It's called {{PD-algorithm}}. Anything more than this would not serve any purpose other than making uploading AI images a burden on the uploader. --Trade (talk) 23:18, 6 April 2024 (UTC)
- Nobody cares about burdens for AI-uploaders. To put it mildly. Alexpl (talk) 13:51, 7 April 2024 (UTC)
- Not nobody. Personally, I strongly object to the type of game playing we see in politics all the time, where one side can't ban something, but puts up onerous rules against it for the sole point of discouraging it.--Prosfilaes (talk) 18:45, 7 April 2024 (UTC)
- Would you like me to start working on consensus for a ban (or at least a moratorium) on AI-generated files? So far, I've been trying to avoid that. - Jmabel ! talk 20:38, 7 April 2024 (UTC)
- Hey, don't threaten me with a good time. Omphalographer (talk) 04:29, 8 April 2024 (UTC)
- If you must. I just oppose the idea that nobody cares about undue burdens on users.--Prosfilaes (talk) 20:15, 9 April 2024 (UTC)
- For a media repository hosting free-to-use images, sounds, videos - AI-created content is, and will always be, pure poison. So (...) Alexpl (talk) 07:44, 8 April 2024 (UTC)
- That's your opinion. If you can't get a consensus on that, I think it hostile and damaging to the community to toss up rules that "would not serve any purpose other than making uploading AI images an burden on the uploader."--Prosfilaes (talk) 20:15, 9 April 2024 (UTC)
- People on here coddle and pander to uploaders way too much. The paternalism towards uploaders is just bizarre, especially in this case considering how much AI artwork is absolutely worthless trash. I really don't have the urge to put everyone else out just to indulge a group of users who probably don't care and are just using Commons as a personal file host for their fantasy garbage to begin with. And no, that doesn't mean everything generated by AI is worthless or shouldn't be hosted on Commons, but there's a cost-benefit here that's clearly slanted towards this being a massive time sink that could potentially be detrimental to the project without proper guidelines. But hey, who cares as long as we can be needlessly overprotective of uploaders by not doing anything about it, right? --Adamant1 (talk) 23:42, 9 April 2024 (UTC)
- Yeah. Or to use the language of "the community" - Я согласен ("I agree"). Alexpl (talk) 11:59, 11 April 2024 (UTC)
- It's not too much to ask for policies to have a purpose other than spite Trade (talk) 22:55, 20 April 2024 (UTC)
Possibility of future laws
[edit]I am just going to bring this up regarding all AI-generated media on Wikimedia.
In March of this year Tennessee passed the ELVIS Act, making it a crime to copy a musician's voice without permission. Also there is this.
Yes, I know this is all very recent, but I do think Wikimedia should be cautious, because it seems new laws could harm Wikipedia's usage of AI-generated media. CycoMa1 (talk) 19:07, 14 April 2024 (UTC)
- I hope so. Alexpl (talk) 12:47, 18 April 2024 (UTC)
- There's also a bipartisan bill in the United States that would require the labeling of AI-generated videos and audio. I don't think it's passed yet, but I think that's probably the direction we are going in. It would be interesting to know how something like that could even be enforced on Commons if such a bill ever passes though. Probably the only way is by banning AI-generated content in some way, if not just totally banning it outright. --Adamant1 (talk) 12:53, 18 April 2024 (UTC)
- The only way to label AI generated videos is to ban them? That's deeply confusing.--Prosfilaes (talk) 14:12, 18 April 2024 (UTC)
- I know you're just taking what I said out of context, but that's why I said it would be interesting to know how something like that could be enforced. I don't know what else we can do other than ban AI-generated videos if there's no workable way to label them as such though. Of course we could include said information in the file description, but it sounds like at least the bill I linked to would require the files themselves to be digitally watermarked as AI-generated, which we of course would have no control over. And that's probably where the banning would come into play. Not that I think your question was genuine in the first place though. Otherwise, what's your proposed way to enforce such a law if one were to be passed, other than banning AI-generated videos that don't contain the digital watermarking? --Adamant1 (talk) 14:45, 18 April 2024 (UTC)
- Well, we can label them: if no comprehensible provenance can be provided, it's treated as AI content. Alexpl (talk) 15:11, 18 April 2024 (UTC)
- Perhaps I was taking it in the exact context it was provided in, one that made no mention of digital watermarking in the video. Moreover, if such a thing is required, most reputable sources will provide it in the first place. We could add digital watermarking ourselves, and yes, a ban on videos that don't have digital watermarking is possible. Note that you jumped to "just totally banning it outright", instead of requiring or adding labels.
- And Commons:Assume good faith.--Prosfilaes (talk) 15:29, 18 April 2024 (UTC)
- I'd buy that, but my comment was made in the context of the article about the bill, which pretty clearly mentions how it would require digitally watermarking videos. It's not on me that you didn't read said article before replying to my message. Also, in no way did I "jump" to just totally banning it outright. I pretty clearly said "probably banning it in some way." That's obviously not "totally banning it outright." Although totally banning it is an option in the absence of anything else, nowhere did I say that should be the only or main solution. It's kind of hard to assume good faith when you're taking my comment out of context and reading negative intent into it that isn't even there to begin with. --Adamant1 (talk) 15:46, 18 April 2024 (UTC)
- Meh, if Commons can find ways to get around 18 USC 2257 then this shouldn't be an issue Trade (talk) 22:58, 20 April 2024 (UTC)
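Independent of watermarking legislation, one existing machine-readable labeling convention is the IPTC "Digital Source Type" vocabulary, which defines a `trainedAlgorithmicMedia` value for generative-AI output that can be embedded in a file's XMP/IPTC metadata. A sketch of how such a label might look as a plain metadata mapping; the vocabulary URI is IPTC's, but the helper function and dict layout are illustrative, not an existing Commons tool:

```python
# URI from the IPTC Digital Source Type NewsCodes vocabulary.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def label_ai_media(metadata: dict, tool: str, prompt=None) -> dict:
    """Return a copy of file metadata with a machine-readable AI-provenance label."""
    labeled = dict(metadata)
    labeled["Iptc4xmpExt:DigitalSourceType"] = TRAINED_ALGORITHMIC_MEDIA
    labeled["generator"] = tool
    if prompt is not None:
        labeled["prompt"] = prompt
    return labeled

meta = label_ai_media({"title": "Example.jpg"}, tool="Midjourney v6",
                      prompt="funny building, matte painting")
assert meta["Iptc4xmpExt:DigitalSourceType"].endswith("trainedAlgorithmicMedia")
```

A label in the metadata survives re-hosting better than a line in the file description, though like any metadata it can of course be stripped, which is the enforcement gap discussed above.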
What shouldn't AI-generated content be used for?
[edit]Let's take a break from the endless circular discussions of "should AI media be banned" for a moment. Something I think we should be able to, at least, reach some consensus on is a canonical list of things that AI-generated images aren't good for - tasks which AI image generators are simply incapable of doing effectively, and which users should be discouraged (not banned) from uploading.
A couple of examples to get the discussion started:
- Graphs and charts. AI image generators aren't able to collect, interpret, or visualize data; any "graphs" they produce will inevitably be wildly wrong.
- Technical and scientific diagrams - flowcharts, schematics, blueprints, formulas, biological illustrations, etc. Producing accurate images of these types requires domain knowledge and attention to detail which image generation AIs aren't capable of. Some examples of how this can go horribly wrong were highlighted in an Ars Technica article a few months ago: Scientists aghast at bizarre AI rat with huge genitals in peer-reviewed article.
- Images of people who aren't well represented in the historical record by photographs or contemporary artwork. AI is not a crystal ball; it cannot transform a name or a broad description of a person into an accurate portrait.
Thoughts? Omphalographer (talk) 23:07, 20 April 2024 (UTC)
- Since the landscape is constantly changing with new features and skillsets being added, there is not much (beyond the ability to accurately draw people without source material) we can ultimately rule out. Just forbid AI's usage for anything but representation of its own work. Alexpl (talk) 11:54, 21 April 2024 (UTC)
- The problem is that the pro AI artwork people just throw their chicken nuggies at the wall and foam at the mouth about how original works aren't true representations of historical people either. If anyone wants an example, just read through the walls of text in Commons:Deletion requests/File:Benjamin de Tudela.jpg. There's no reasoning with people like that, and unfortunately they are the ones who ultimately have the say here, sans the WMF ever coming out with a stance on it one way or another. I think that's really the only solution at this point though. We clearly aren't going to just ban AI artwork, nor should we, but I also suspect there are never going to be reasonable guidelines about it without us being forced into implementing them by an outside party like the WMF either. --Adamant1 (talk) 12:08, 21 April 2024 (UTC)
- "they are the ones who ultimately have the say here" - that's BS. The argumentation presented on the talk page by the activist is useless. The next guy can seed his AI work with Benjamin de Tudela as some obese, bald dude with a parrot on his shoulder and without the religious artifacts. The concept of teaching people anything with these interpretations has the built-in failure of making stuff up. Alexpl (talk) 13:27, 21 April 2024 (UTC)
- Oh I totally agree. AI artwork is essentially worthless as a teaching aid because every image it generates is completely different. The problem is that all it takes is a couple of mouth foamers like that user in the DR to derail any guideline proposal. To the point that I don't think we could even implement something saying that AI-generated images of clearly made-up fantasy settings or people should be banned. At least not without some kind of guidance from the WMF or a serious change in the laws. I'd love to be proven wrong though. --Adamant1 (talk) 13:54, 21 April 2024 (UTC)
- Please stop using the talk page as a place to attack other users. Thanks Trade (talk) 22:12, 22 April 2024 (UTC)
- We can still document the limitations of AI image models as they currently exist. If advancements in technology address some of those limitations, we can update the page to reflect that. Omphalographer (talk) 17:03, 21 April 2024 (UTC)
- If I understand you correctly, that would mean keeping this page up to date for some two dozen evolving AI systems. Even tracking one such system's capabilities requires a PhD-level amount of work and skill. Alexpl (talk) 09:25, 22 April 2024 (UTC)
- That seems like an exaggeration. If there were to be advancements to AI image models to the degree that one could collect data and create accurate graphical visualizations, draw a meaningful flowchart of a process, or create an accurate historical reconstruction of a person's appearance, we'd be hearing the developers trumpet their success from the rooftops. (And even then, it'd still be accurate to warn that most models still couldn't complete those tasks.) Omphalographer (talk) 00:23, 28 April 2024 (UTC)
- The AI scientific diagrams by Midjourney work fine and apparently even make it through peer review. Progress! Alexpl (talk) 06:38, 29 April 2024 (UTC)
- This is already linked in the original post above which you apparently didn't fully read. Prototyperspective (talk) 11:32, 29 April 2024 (UTC)
- True. Nonetheless, "Frontiers Media" should be out as a credible source. Alexpl (talk) 15:03, 29 April 2024 (UTC)
- Just a random side thought, but I'd probably be more keen on AI-generated images being hosted on Commons if the people on the pro side calmed down about it and committed to not using AI-generated images on other projects, plus spent the time as a group making sure things were properly categorized and maintained. Like, I think you could maybe make an argument for us hosting fantasy artwork under some circumstances, but it's undermined when those types of images are then used on Wikipedia as legitimate depictions of events, people, or whatever. Regardless, I think AI-generated artwork would probably be totally fine if not for the other things that inevitably seem to come along with it. At least IMO it's totally on the people who think AI-generated artwork should be hosted here to remedy that if they really want it to stay on Commons though. --Adamant1 (talk) 15:23, 29 April 2024 (UTC)
- And some AI-generated scientific diagrams are showing up on Commons now, to the detriment of the project: Commons:Deletion requests/File:De nucleus supraopticus.jpg. Omphalographer (talk) 21:51, 30 April 2024 (UTC)
- There are thousands of problematic files on Commons, but you pick a single file that is already nominated for deletion, not used anywhere, and clearly going to get deleted. I wonder if people upload such things just to support the point that these tools/images are so uniquely problematic. They're not useful for diagrams and get deleted just like the more abundant problematic paint/Photoshop/photo images are. Prototyperspective (talk) 14:04, 1 May 2024 (UTC)
- Presumption of guilt won't help you here. There are 1K+ dickpics that aren't used anywhere either, but people keep uploading them anyway - maybe to support the point that those are uniquely problematic? Probably not. Alexpl (talk) 14:44, 1 May 2024 (UTC)
- What an odd statement. I mentioned that image because I had just identified it as a recent AI upload, removed it from an article where it was being used inappropriately, and nominated it for deletion. As I explained in the nomination, that image is not "useful for diagrams", as the text is complete nonsense and the anatomy is wrong as well; we already have many better alternatives created by human artists, who (unlike AI image generators) have the skill to produce anatomically accurate diagrams and label them with meaningful, relevant text. Omphalographer (talk) 17:48, 1 May 2024 (UTC)
- It's reasonable to assume that AI will be able to add meaningful text too at some point. Alexpl (talk) 19:36, 1 May 2024 (UTC)
- It can sometimes if you build it into the prompt. I don't think it could accurately label something like that even by prompting it to though. --Adamant1 (talk) 20:30, 1 May 2024 (UTC)
- Comment What about fantasy depictions of mythological gods? Yes? No? Maybe? can we at least all agree that's probably not a valid use case of AI generated artwork on here? --Adamant1 (talk) 05:36, 10 May 2024 (UTC)
- Mixed feelings on this one. If you're extremely familiar with the religious iconography associated with the figures you're trying to depict, and you have a very clear concept of what you want the resulting image to look like, and you're familiar with how to persuade an image model to bring that concept to fruition, using an image model might be a valid choice. But I am not at all convinced that's what's been done here, and I'm more generally unconvinced that any such AI-generated image would ever be preferable to existing religious artwork. (And if that artwork doesn't exist, there's no cultural basis for an AI-generated image.) Omphalographer (talk) 06:17, 10 May 2024 (UTC)
- Doesn't work for illustrating articles in Wikipedia, but religious-themed works seem somewhat useful for evaluating how an AI was trained. Alexpl (talk) 06:28, 10 May 2024 (UTC)
- I don't know about that. You've got to at least admit that Khoriphaba probably didn't look like Jason Momoa. So what exactly does that tell us about how the AI was trained, except that if you put "mythological gods wrestling" in the prompt, the resulting image is probably going to be loosely based on Jason Momoa from the Aquaman film series? --Adamant1 (talk) 06:37, 10 May 2024 (UTC)
- It shouldn't be on Wikipedia. On Commons, I really don't care. On second thought: the entire bunch of images under Category:AI-generated mythology should be deleted. Just a way to use the "Wikipedia" branding to add legitimacy to pseudo-religious garbage. Alexpl (talk) 09:44, 10 May 2024 (UTC)
Country of origin
[edit]Apologies if this has already been asked and I just missed it, but how is anyone on here realistically supposed to determine the country of origin for an AI-generated image? And, for that matter, is there even one in a lot of cases to begin with (for instance with images generated by online tools)? Adamant1 (talk) 01:19, 27 April 2024 (UTC)
- Did anyone ask you to name a country of origin for AI-work? Alexpl (talk) 10:29, 27 April 2024 (UTC)
- @Alexpl: No, but there's a section in the article about the laws of different countries for AI-generated images. So I thought it would be worth mentioning whether there was a way to determine the country of origin; otherwise the section seems kind of pointless. I guess it's covered by Trade's comment though. --Adamant1 (talk) 10:51, 28 April 2024 (UTC)
- Every image is presumed to be from a country where AI is ineligible for copyright unless proven otherwise Trade (talk) 00:11, 28 April 2024 (UTC)
Tagging suspected AI images
[edit]I noticed the three images uploaded by Special:ListFiles/ChanzyAl (that's AL as in Albert, not AI as in artificial intelligence) today.
There is a possibility that a skilled artist manually created these and personally uploaded them. There is the possibility that the paintings are real (e.g., painted in the 19th century when these men were alive) and the attribution is wrong.
But I – definitely a non-expert – personally think the most likely situation is that someone fed real engravings, etc. into an image generating software and said something like "Give me full-color presidential-style paintings of the man in this drawing", picked one that looked plausible, and uploaded it as "own work".
What I'd like, for the sake of Wikipedia editors and other re-users, is some way to flag the images as potentially being inaccurate. Does any such tag exist? WhatamIdoing (talk) 18:11, 28 April 2024 (UTC)
- This situation seems extremely similar to Commons:Deletion requests/Files uploaded by Chromjunuor, and should be dealt with the same way - by taking the files to a deletion request. If these are real paintings (which seems extremely unlikely given the deformed nonsense medals on the chests of the first two), the uploader should be able to back that up with some sort of verifiable claim about the provenance of the paintings. Omphalographer (talk) 18:47, 28 April 2024 (UTC)
- +1 to Omphalographer's comment. You could probably argue a template for suspected AI-generated media might be good, but there's no reason not to just nominate the images for deletion at that point either. It's probably the easier solution. --Adamant1 (talk) 20:36, 28 April 2024 (UTC)
- Okay. I have linked all three of them at Commons:Deletion requests/File:General Nissage Saget of Haiti.jpg#File:General Nissage Saget of Haiti.jpg. WhatamIdoing (talk) 20:58, 28 April 2024 (UTC)
European Union and its "AI Act"
[edit]Hi! I see that some country-specific clarifications are mentioned on this page, but is the newly-approved Artificial Intelligence Act in the EU relevant, or is it only relevant for AI tools? I want to note that some countries outside the EU, such as Norway, might consider bringing the EU AI Act to their jurisdiction. EdoAug (talk) 13:41, 26 May 2024 (UTC)
Deletion discussion for viral "All Eyes on Rafah" AI image
[edit]An AI image using the phrase w:All Eyes on Rafah is currently going viral on Instagram / social media. I uploaded the image to Commons using the tag {{PD-AI}}.
Sources describe this as perhaps one of the first examples of an AI-generated image being widely used as a component of political protest.
- https://www.bbc.com/news/articles/cjkkj5jejleo
- https://www.washingtonpost.com/technology/2024/05/29/all-eyes-on-rafah-meaning-ai-image/
- https://www.nytimes.com/2024/05/29/world/middleeast/all-eyes-on-rafah.html
- https://www.aljazeera.com/news/2024/5/29/what-is-all-eyes-on-rafah-decoding-the-latest-viral-social-trend
Would be interested in others' thoughts at the deletion discussion happening on Commons.
PK-WIKI (talk) 20:41, 31 May 2024 (UTC)
- Not sure where this story is going to end, but instructive that what started as an apparently clear AI image generated by a creator in a particular country (with some news sources reporting both of these statements as fact) may have now turned out to be that creator reusing someone else's (definitely AI) image and altering it (probably using AI) to an Instagram vertical format with no watermark. Belbury (talk) 10:15, 4 June 2024 (UTC)
- That the origin story didn't match up was ... obvious. Alexpl (talk) 12:00, 4 June 2024 (UTC)
Discussion at Commons:Village pump/Proposals#Make COM:AI a Guideline
[edit]You are invited to join the discussion at Commons:Village pump/Proposals#Make COM:AI a Guideline. AntiCompositeNumber (talk) 21:20, 2 June 2024 (UTC)
Negative boosted template
[edit]Should Template:Negative boosted template be added to Template:AI upscaled and Template:PD-algorithm? You could argue that most people who are using Commons would rather prefer to find non-AI images when using the search function.--Trade (talk) 08:41, 22 June 2024 (UTC)
Industrial Designs Act 1996 of Malaysia
[edit]Article 10. (6) "In the case of an industrial design generated by computer in circumstances such that there is no human author, the person by whom the arrangements necessary for the creation of the industrial design are made shall be taken to be the author."
@Nosferattus that is similar to British CDPA Section 9(3) "In the case of a literary, dramatic, musical or artistic work which is computer‐generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken." Butcher2021 (talk) 11:58, 26 June 2024 (UTC)
- @Butcher2021: Interesting! It looks like it currently only applies to "industrial designs", likely because that's a newer law. Nosferattus (talk) 16:21, 26 June 2024 (UTC)
- So Malaysian AI images are now copyrighted? Trade (talk) 21:05, 7 July 2024 (UTC)
- It is uncertain, depending on a future court case or a new law. But as I said, it's quite likely that Malaysia will follow the British way. Butcher2021 (talk) 14:54, 9 July 2024 (UTC)
Commercial usage restrictions with images generated by Stable Diffusion 3
[edit]Apparently Stability AI released Stable Diffusion 3 recently under a non-commercial license. Obviously meaning images generated with it can't be used commercially. I'm wondering if that means we can't host images created with Stable Diffusion 3 on Commons, and how that affects us hosting AI images more generally, since it's not inherently clear what software created any given image. At least it isn't on our end. I think an argument could be made for banning images that were clearly generated with SD 3, if not AI artwork more generally, due to our inability to know what particular software created any given image. Thoughts? Adamant1 (talk) 22:35, 26 June 2024 (UTC)
- Before making such claims (Obviously meaning images generated with it can't be used commercially) you should have read the guideline page you are commenting on. It already covers this kind of misunderstanding, under "Copyrights of AI providers" and "Terms of use of AI providers". Regards, HaeB (talk) 20:40, 30 June 2024 (UTC)
- @HaeB: I'm aware of what the guideline says. It seems like pure speculation though, which usually isn't what we base guidelines on. Regardless, do you think these people's lawyers would have just come up with a license they have absolutely no way to enforce, or maybe there's an enforcement mechanism there that we, as people who aren't lawyers working in the industry, just aren't privy to? Because we're talking about a billion-dollar company here. I doubt they would bother creating a license they don't have a way to sue people for infringement with. --Adamant1 (talk) 02:04, 1 July 2024 (UTC)
- @Adamant1: perfectly possible, just like GLAMs claim copyright on work that is clearly PD, and copyright trolls write to people threatening letters offering to "settle" a matter for way more than any court would likely award them. Not to mention unenforceable non-competes in contracts and any number of other such things. It is imaginable that Stability AI might even have a basis to sue the person who created an inherently PD file with their product for commercializing the file, since that person has a contract with them. It's hard to imagine the grounds on which they could sue anyone else for using that file. - Jmabel ! talk 03:58, 1 July 2024 (UTC)
- @Jmabel: I could see a situation where the images have a hidden watermark and then they sue purely on the basis of someone violating their licensing terms, if not the copyright. I think that's what you're getting at. Regardless, it's unclear to me how exactly we should handle such a thing. But we still take issue with people suing over someone not properly attributing them as the creator of a work, and in that case a proposal just passed to watermark the images. So it's not like there isn't some kind of precedent or solution here. --Adamant1 (talk) 04:04, 1 July 2024 (UTC)
- At least in the U.S., where no court has upheld any copyright claim for AI-generated content, they can watermark to their heart's content, but it still gives them no basis to sue a third party who hasn't signed a contract with them. The only basis to do that would be copyright law, where it appears they haven't got a leg to stand on. - Jmabel ! talk 04:43, 1 July 2024 (UTC)
- I'm aware of what the guideline says - in that case, since you are disagreeing with the current community consensus as reflected in this guideline, it would have been useful to acknowledge that right away.
- It seems like pure speculation though - how so? Did you review the prior community discussions that led to this version of the guideline?
- Concretely, regarding the two sections I mentioned:
- COM:AI#Terms of use of AI providers is basically just a restatement of Commons:Non-copyright restrictions. If you disagree with that guideline, the talk page there might be a better venue for your concerns.
- COM:AI#Copyrights of AI providers is based on the observation that In the United States and most other jurisdictions, only works by human authors qualify for copyright protection (i.e. not works created by AI models without copyrightable human intervention), to quote from another part of this guideline where this is explained in more detail with references, in particular to the US Copyright Office's statements. Is it these statements that you are disagreeing with?
- Regards, HaeB (talk) 04:57, 1 July 2024 (UTC)
- PS: Also, what exactly does Apparently Stability AI released Stable Diffusion 3 recently under a non-commercial license refer to - the license for the model weights of Stable Diffusion 3? In that case it would be entirely wrong to conclude that this is Obviously meaning images generated with it can't be used commercially. The copyright status of a model's weights and of its outputs are two different things. (Relatedly, Commons also allows the upload of images that were created using proprietary software like MS Paint.) Regards, HaeB (talk) 05:19, 1 July 2024 (UTC)
- Is it these statements that you are disagreeing with? I'm not necessarily disagreeing with anything in the guideline. I'm simply posting an update to Stable Diffusion's terms of service after they put out a new release and seeing what other people think about it, which, last time I checked, I can do since this is a talk page about AI-generated media. If it's not something you think is worth discussing, cool. Just don't comment next time. No one asked for your opinion. I certainly didn't. --Adamant1 (talk) 05:25, 1 July 2024 (UTC)
- I'm simply posting an update to Stable Diffusion's terms of service after they put out a new release and seeing what other people think about it - no, you were also wrongly asserting that this update would also be Obviously meaning images generated with it can't be used commercially. And this right after making several other factually wrong arguments in favor of the deletion of AI-generated media elsewhere (e.g. [6][7]).
- No one asked for your opinion. I certainly didn't. - Normally I would ask you to remain civil after such a comment, but considering your reaction to concerns about your personal attacks against another user [8][9] in the same context (deletions of AI-generated media), that's probably futile.
- Regards, HaeB (talk) 06:38, 1 July 2024 (UTC)
- no, you were also wrongly asserting that Are you saying that their press release saying images generated by Stable Diffusion 3.0 can't be commercially is wrong or that I just got it wrong myself? Admittedly companies can buy a commercial license from Stability AI for large-scale commercial usage, but that's not really germane to this. So what exactly am I wrongly asserting here?
- In the same context (deletions of AI-generated media) Sorry, but I didn't know that was the context of the discussion. I certainly didn't mean for it to be. I was simply bringing up their licensing being changed and asking how people thought we could handle such a change on our end. That's really it. I don't think I ever brought up deleting AI-generated images here. You seem to be the only person making it about that. Otherwise, can you point out where I've said anywhere in this discussion that we should delete AI-generated images? No offense, but it really seems like you're just boxing ghosts here. --Adamant1 (talk) 07:02, 1 July 2024 (UTC)
- What press release saying images generated by Stable Diffusion 3.0 can't be commercially [sic]? Link?
- I don't think I ever brought up deleting AI generated images here - in your very first comment above you were musing if that [i.e. your mistaken "Obviously" conclusion] means we can't host images created with Stable Diffusion 3 on Commons and how that effects us hosting AI images more generally, and about possibly banning images that were clearly generated with SD 3 if not AI artwork more generally.
- Regards, HaeB (talk) 08:00, 1 July 2024 (UTC)
- I said "I'm wondering if that means we can't host images created with Stable Diffusion 3 on Commons" because it's been proposed before. Not that I think we should. Nor was that the main gist of my comment anyway. I could really care less myself. But again, it is something that has been proposed in the past. Regardless, there's a difference between "banning" something from a particular source if their license doesn't allow it to be hosted on Commons versus "deletions of AI-generated media", which is what you were claiming I was saying. You're just moving the bar because your original comment was clearly false. --Adamant1 (talk) 08:10, 1 July 2024 (UTC)
- PPS: Actually it turns out that Stability AI explicitly disclaims copyright ownership of the generated images in its Non-Commercial and Creator License FAQs, rendering much of the above discussion moot in that context (and showing yet again that Adamant1's Obviously meaning ... conclusion was wrong):
Does Stability AI own any of the “outputs” generated from Core Models?
Between Stability AI and a member, or its authorized user, that member or authorized user owns the “outputs” that they generate using the Core Model, to the extent permitted by applicable law.
- Regards, HaeB (talk) 06:50, 1 July 2024 (UTC)
- "permitted by applicable law" includes all recent/upcoming regulation, so issuing a non-commercial license makes perfect sense from Stability AI's point of view. They should have the data to tell how many of their customers just use the AI as an intermediary to simply steal stuff otherwise out of reach. Like money laundering - but for intellectual property. Alexpl (talk) 07:09, 2 July 2024 (UTC)
- @Alexpl: Not sure what you were referring to exactly. But if it is the claim (frequently made by copyright industry advocates) that the outputs of an AI model are automatically derivative works of the material it was trained on (i.e. are copyright infringements), then that's simply wrong. E.g. in one prominent recent lawsuit, a US judge called that theory "nonsensical".
- Now, as this guideline pages has mentioned for a while already, there could still be isolated cases where a particular generated image does infringe on some existing work. But for all we know these have been rare (and may have become rarer still as the AI companies have become more sensitive to the bad PR effect of such cases and improved their copyvio filters). E.g. OpenAI now feels confident enough about this to indemnify business users of ChatGPT and DALL-E "for any damages finally awarded by a court of competent jurisdiction and any settlement amounts payable to a third party arising out of a third party claim alleging that the Services (including training data we use to train a model that powers the Services) infringe any third party intellectual property right." Regards, HaeB (talk) 05:06, 4 July 2024 (UTC)
- @HaeB: Since you asked for a source for what I was talking about above, according to this page on their website: "Do I need a license?
- You need a license to use our models commercially. Whether you qualify for the Creator License or Enterprise license depends on your company size and the intended use of our models. Creator and Enterprise license allow you to use all of the Stability AI Core Models and SD3 Medium commercially. If you want to use our models for personal and non-commercial use, the non-commercial license is enough." Now does that necessarily mean they have a way to enforce it? Who really knows. I doubt they would bother with it if there wasn't some enforcement mechanism on their end somehow though, whatever that might be. --Adamant1 (talk) 09:07, 2 July 2024 (UTC)
- @Adamant1: Thanks for (finally) providing a source, appreciated. (Although I'm still wondering where to find their press release saying images generated by Stable Diffusion 3.0 can't be commercially.)
- I already addressed this point above, but to repeat: The "Self-Hosted Licenses" page you linked, like the license text itself I had mentioned earlier, only applies to the model weights (what you need to download to run the model yourself) and only restricts how one can use the model itself. It does not apply to the model's outputs (images) and does not restrict how one can use those. Similarly, you need to obtain a license from Microsoft to use MS Paint, but that doesn't mean that Microsoft imposes restrictions on images created with MS Paint.
- Hence there is no contradiction to Stability's statement in the FAQ that I quoted above, where they clarify that they do not own copyrights in the output images. So, again, your claim above that images generated with it can't be used commercially was incorrect.
- Note for others: Adamant1 is currently blocked for two weeks (and will hopefully be contributing to such conversations in a more constructive way in the future), but I thought it worthwhile to address this comment already for others who may be reading along.
- Regards, HaeB (talk) 05:06, 4 July 2024 (UTC)
I have a question.
[edit]Which AI generator tools aren't copyrighted/non-free? MJGTMKME123 (talk) 00:15, 25 July 2024 (UTC)
- @MJGTMKME123: Are you asking which are completely free of copyright throughout their code? Which of them can be guaranteed never to violate a copyright in their outputs? or what? - Jmabel ! talk 04:27, 25 July 2024 (UTC)
- I mean the tools that don't make the image fair use if there aren't any prompts based off of fair-use images, movies, books, videos, single covers, etc. MJGTMKME123 (talk) 13:06, 25 July 2024 (UTC)
- This still makes no sense. "Fair use" means nothing except in the context of a particular usage. (It is a term of U.S. law, referring to certain uses of copyrighted material being permissible despite the copyright.) It is always dependent on where and how the image is used. No image is inherently "a fair use image".
- While the law is still unsettled, it appears so far that under U.S. law any output of a generative AI is in the public domain except insofar as it infringes an existing copyright. You seem to be asking to know in advance that the output of some particular tool will never violate a copyright, but that is like asking in advance if some person will never violate a copyright.
- Even if the developers of a particular generative AI system intend to train it exclusively on public-domain materials, there is no guarantee that they get that right. Look at the complexity of some of the discussions here on Commons as to whether certain materials are in the public domain (e.g. when it turns out that an image in a U.S. government document originated with a contractor rather than a federal government employee, or when it turns out that some rather old image was not published until 20 years after it was made). Or consider an image from France, taken in 1920, where the photographer died in 1950, but it shows a building whose architect lived until 1980, and that is only very belatedly caught. We're about as diligent here about that as anyone, but we still sometimes get it wrong. So there are never any guarantees: individual images still have to be taken up one by one. - Jmabel ! talk 17:33, 25 July 2024 (UTC)
- I'd add that:
- All large image models are trained on some amount of non-free content. Sometimes it's intentional, sometimes it slips in by accident, but no model can reliably make a promise that all of its training material is freely licensed. The volume of material needed to train these models is just so large that some amount of copyrighted material will inevitably slip in (just as it does on Commons, despite our best efforts).
- Even if a model were somehow not trained on non-free content and its prompt didn't explicitly mention non-free media, it is still possible for an image it generates to be infringing, either because the prompt describes non-free content without naming it, or because the model coincidentally arrives at a depiction which is too close to something copyrighted to use.
- For example, if you asked an image generator to draw a male superhero with a red uniform and a lightning bolt on his chest, it would be likely to arrive at something visually similar to Flash. It's quite possible that an image generated from this prompt would be considered a derivative work of that character, regardless of whether it was intended to be one, and regardless of whether the model had ever been trained on images of that character.
- Omphalographer (talk) 21:59, 25 July 2024 (UTC)
Svg
[edit]If I generate SVG text, it's not the same as an image. File:Cybernetic-cognitive-art.svg is my example; it would be hard to call this an image, it is more of a diagram, or code. MdupontMobile (talk) 11:11, 31 July 2024 (UTC)
- Diagrams are images, and they have to satisfy the same COM:SCOPE criteria as any other media file on Commons. --Belbury (talk) 15:15, 31 July 2024 (UTC)
- I am not talking about scope; this is just an example of an SVG.
- The arguments about AI-generated art being public domain do not hold for text-based images.
- When we generate a high-level description of an image that is translated to a high-level format, say Graphviz or PlantUML, the result is much closer to the input and can be seen as a derivative work. The arguments about images do not hold in this case; I think we need to treat them differently. MdupontMobile (talk) 17:30, 31 July 2024 (UTC)
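A minimal sketch of the concern above: a plain-text description of a diagram maps almost one-to-one onto the Graphviz DOT source a model would emit, so the output closely mirrors the human-authored input. The node names and the helper function here are invented purely for illustration, not taken from any actual tool.

```python
# Hypothetical illustration: the "description" below stands in for a
# human-written prompt; the DOT text produced from it is nearly a direct
# restatement of that input, unlike a bitmap generated from a prompt.
description = [
    ("User", "Web Server"),
    ("Web Server", "Database"),
]

def to_dot(edges):
    """Render a list of (source, target) pairs as Graphviz DOT text."""
    lines = ["digraph G {"]
    for src, dst in edges:
        lines.append(f'    "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(description))
```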
- Seems the same to me as an AI generating an artistic image from a very detailed prompt.
- You've told Commons in the licence templates that the image is in the public domain because it is the work of a computer algorithm or artificial intelligence and does not contain sufficient human authorship to support a copyright claim. If you don't feel that's right in this case and that it does contain a sufficient amount of your human authorship, you are welcome to change the licence. Belbury (talk) 17:53, 31 July 2024 (UTC)
- The tool asked me if it was AI-generated art. I think we, Commons, need to create a different template for non-bitmap images, i.e. vector art. It did not ask me what kind of AI-generated art it was. MdupontMobile (talk) 17:56, 31 July 2024 (UTC)
- This is, quite frankly, a moot point. The capabilities of language models to generate vector graphics are rudimentary at best; they are not competent to generate educationally useful images. Omphalographer (talk) 18:17, 31 July 2024 (UTC)
Historical value of pre-2023 AI-generated media
[edit]100 years from now, historians may look back at the 2020s as the decade in which AI was unleashed on the world (for better or worse). Unfortunately, those historians (which are probably AIs themselves) may have relatively few artifacts to aid their research given the ephemeral nature of AI-generated media. On Commons, we have a relatively small number of pre-2023 AI-generated media, and an extremely small number of pre-2022 AI-generated media. Although these images have little historical and educational value currently, they may be of significant historical and educational value in the future (similar to pre-1850 photography). As such, should we do anything to protect these images from being deleted in the meantime (presumably as out of scope)? Personally, I think any pre-2022 AI-generated images should automatically be considered historically valuable, and possibly pre-2023 as well. For context, the original DALL-E was released in 2021, although it did not become popular until DALL-E 2 was released in 2022. Midjourney and Stable Diffusion were released in 2022. Thoughts? Nosferattus (talk) 19:31, 11 September 2024 (UTC)
- Agree entirely. In my view, almost all actual value of AI-generated media is to demonstrate the history of AI-generated media. This is also why I suggested elsewhere that it would be valuable for someone to develop a varied set of prompts and try them out every few months on a variety of AIs that generate graphics, so we can see the evolution over time of what sort of response the same prompting would provide. - Jmabel ! talk 12:59, 12 September 2024 (UTC)
- For these images to have any value, we need more than random images without any information. I would agree, but only for well-documented images, including the engine name, the complete process (number of iterations, etc.), and the prompt. Very few uploads meet these criteria. Yann (talk) 15:01, 12 September 2024 (UTC)
- The same prompt leads to very different results, even in the same model and even if parameters are not adjusted. However, some large testing set could be useful; there are repos containing such prompt sets, such as 1 2. It could be good to 1) aggregate such resources together and create new ones systematically with particularly useful prompts, 2) check these over time for different versions of a model (e.g. 5 images per prompt per version), and 3) get all of these into some project.
- However, for 3, if that project was WMC it would get flooded with AI images, so I think over time this proposal makes more sense, even though it wouldn't be that useful; it would be largely about enabling these sets to be stored in some Wikimedia project while keeping them off WMC. (It would also be useful to have AI images with issues there as a resource, which people with Photoshop or similar skills could then edit until a particular image has no misgeneration issues and is exceptionally useful.) In general, Nosferattus, I think the issue is that other sites like openart.ai have far larger collections and also store other parameters like the seed, which is nearly never recorded here. Prototyperspective (talk) 23:09, 12 September 2024 (UTC)
- Sure, but openart.ai will be a deadlink in 10 years. I'm not saying we need to create a huge research project. I'm just saying that it might be useful to keep some examples of early AI-generated media even if they don't otherwise fall within Commons' scope, just to demonstrate what the capabilities of AI image generation were at that particular point. Nosferattus (talk) 16:10, 15 September 2024 (UTC)
- Not just pre-2023; who knows where this will go in future? Accordingly, I have just categorised an image I just made using Category:AI-generated images made on 2024-09-25, and encourage others to do similarly. I'm not precious about the name, if people think there is a better naming scheme for categories. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:34, 25 September 2024 (UTC)
- I think subcategorization by day is far too fine/granular – this just makes it hard to browse the works and categorization by year makes the most sense. Moreover, until now I only added items to cat 2024 AI-generated works when the work is fairly high quality or illustrating some novel application or similar. Furthermore, one can work on the same image over many days. For example look at this workflow video and even relatively unskilled occasional users of AI tools like me often work on one image over several days such as by generating the image on one day and then improving it with img2img on another and finally removing a few things that I didn't intend to be in the picture and misgenerations on yet another day. Models also don't change by day but more at the scale of years (even if they change more quickly people don't always use the latest version). Prototyperspective (talk) 11:45, 25 September 2024 (UTC)
- If we are to consider pre-2023 generations as historical examples of early representations of the technology, with all faults, mistakes and quirks included, would this also open the avenue for justifying COM:UNDEL for AI-generated files that were previously deleted as out of Commons scope, on a case-by-case basis, as long as they don't fall foul of any other policy? Plenty of early 2022 examples full of faults and quirks (such as bad fingers, etc.) would have been deleted on the basis of them being low quality AI works, and/or similar justifications. While, at the time, I would have fully supported the deletion of such files back in the day, things change a bit if we start to view more primitive AI works as having historical value. Just a thought experiment, that's all. --benlisquareTalk•Contribs 12:53, 25 September 2024 (UTC)
- Yeah, I don't know. It seems like you're just trying to create a de facto standard where all AI artwork is inherently notable "because history." The whole thing is tautological and could apply to anything. At that point you might as well undelete every selfie ever deleted as OOS just because of where cell phone camera technology was at the time. Really, screw Commons:Project scope at that point. Maybe you could argue that key images showing off specific things about AI artwork at a given point in the history of the technology would be worth saving for their historical value, but then there has to be an actual standard to determine what should or shouldn't be saved specifically for that reason, and I have yet to see anyone who argues that AI artwork has historical value propose one. This whole thing kind of comes off like favoritism that puts down photographers too. If a photographer takes a mundane, out-of-scope image of a fridge or whatever, that's OOS. But if the same image is created by AI? Let's keep that for its historical value. Right. Either Commons:Project scope is a thing or it isn't. There's absolutely nothing on here saying that images created by specific technologies or types of technology are inherently more educational than others. I think photographs taken with the Game Boy Camera are historically interesting; I'm not going to advocate for every image ever taken by one being educational simply because of the technology, though. --Adamant1 (talk) 20:03, 2 October 2024 (UTC)
- "Personally, I think any pre-2022 AI-generated images should automatically be considered historically valuable, and possibly pre-2023 as well" - I disagree, any is too strong, but I agree that a selection of early AI-generated images that show characteristics of the limitations of generating images using AI at the time is valuable and in scope. But it should be limited to a reasonable selection. Gestumblindi (talk) 09:21, 3 October 2024 (UTC)
AI content taken from the web
[edit]There have been a few cases where an image on the web is described in some general way as having been AI-generated, leading someone to upload a copy to Commons as {{PD-algorithm}}, the template asserting that it is the work of a computer algorithm or artificial intelligence and does not contain sufficient human authorship to support a copyright claim. For example, File:Facebook AI slop, "Shrimp Jesus" 1.jpg, an image used in a few news sources that describe it as AI-generated content from social media.
This generally ignores two potential issues: that the image could have been created in a country where AI works have some copyright protection, and that the image might not actually be 100% algorithmic (it could be based on a human-made source image, or have been manually edited afterwards in some way).
Should this guideline take a clear and strict stance on Commons only ever hosting pre-existing images when the creator has confirmed which country they are working in, and explicitly stated that the work was 100% AI-generated? Or can we take a more relaxed approach within COM:PRP, if the chances are relatively low of there being a problem? Belbury (talk) 12:03, 4 October 2024 (UTC)
AI enhancements/improvements/upscaling
[edit]Have we had a discussion about this being a part of policy? Or should it be a separate policy. In my opinion, images that have been modified using AI ought to be named, labeled and categorized as such, and never uploaded over the original image. Any other thoughts about this? Bastique ☎ appelez-moi! 21:11, 4 October 2024 (UTC)
- I generally agree. Not sure at what point we draw the line between simply {{Retouched}} and something more AI-specific, though. The line between traditional enhancement filters and AI is going to get less and less clear. For example, I'm sure there is a tool out there that lets you click on part of a person or object that is the main subject of your photo and then uses otherwise conventional techniques to blur the background. - Jmabel ! talk 05:53, 6 October 2024 (UTC)
- One could draw the line where the tool adds something to the image. A better example may be cases where an AI tool removes selected parts of an image, as can be done e.g. with this tool. One can also do the same with Photoshop and GIMP; it just takes more time, and in some cases the AI tool may work better. I think "images that have been modified using AI" makes little sense as a dividing line in that regard – instead, make it a norm not to upload extensively modified images as a new version regardless of the production method/tools, or make doing so require a discussion that also involves the original image uploader first. Prototyperspective (talk) 10:37, 6 October 2024 (UTC)
- We could possibly add a parameter to the {{Retouched}} template to indicate whether or not something was retouched using AI. Bastique ☎ appelez-moi! 19:18, 7 October 2024 (UTC)
- It's not a policy as such, but a short statement on AI-upscaling (and upscaling in general) can be found here. ReneeWrites (talk) 12:05, 6 October 2024 (UTC)
- We allow (for example) removal of background as an overwrite by the uploader/creator or with the uploader/creator's permission, and that can be pretty extensive. I'm not sure AI-based upscaling should be allowed as an overwrite even when the creator/uploader supports it being done. - Jmabel ! talk 15:16, 7 October 2024 (UTC)
- @ReneeWrites that is very helpful, thank you. @Jmabel I agree that AI-based upscaling should only be used in separate files, never as an overwrite. Bastique ☎ appelez-moi! 19:13, 7 October 2024 (UTC)
I tested a sample image File:Victoria Woodhull 2 (cropped, upscaled and cleaned up using AI).jpg using the {{Retouched}} template and making sure I included AI in the filename. I think something like this should definitely be policy. Bastique ☎ appelez-moi! 19:34, 7 October 2024 (UTC)