Wikipedia:Village pump (all)
This is the Village pump (all) page which lists all topics for easy viewing. Go to the village pump to view a list of the Village Pump divisions, or click the edit link above the section you'd like to comment in. To view a list of all recent revisions to this page, click the history link above and follow the on-screen directions.
| I want... | Then go to... |
| --- | --- |
| ...help using or editing Wikipedia | Teahouse (for newer users) or Help desk (for experienced users) |
| ...to find my way around Wikipedia | Department directory |
| ...specific facts (e.g. Who was the first pope?) | Reference desk |
| ...constructive criticism from others for a specific article | Peer review |
| ...help resolving a specific article edit dispute | Requests for comment |
| ...to comment on a specific article | Article's talk page |
| ...to view and discuss other Wikimedia projects | Wikimedia Meta-Wiki |
| ...to learn about citing Wikipedia in a bibliography | Citing Wikipedia |
| ...to report sites that copy Wikipedia content | Mirrors and forks |
| ...to ask questions or make comments | Questions |
Discussions older than 7 days (counted from the most recent comment) are moved to a subpage of each section (named "(section name)/Archive").
Policy
RfC: Voluntary RfA after resignation
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- There is clear consensus that participants in this discussion wish to retain the "Option 2" status quo. We're past 30 days of discussion and there's not much traffic on the discussion now. It's unlikely the consensus would suddenly shift with additional discussion. --Hammersoft (talk) 18:29, 16 January 2025 (UTC)
Should Wikipedia:Administrators#Restoration of admin tools be amended to:
- Option 1 – Require former administrators to request restoration of their tools at the bureaucrats' noticeboard (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions).
- Option 2 – Maintain the status quo that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary request for adminship (RfA).
- Option 3 – Allow bureaucrats to SNOW-close RfAs as successful if (a) 48 hours have passed, (b) the editor has right of resysop, and (c) a SNOW close is warranted.
Background: This issue arose in one recent RfA and is currently being discussed in an ongoing RfA. voorts (talk/contributions) 21:14, 15 December 2024 (UTC)
Note: There is an ongoing related discussion at Wikipedia:Village pump (idea lab) § Making voluntary "reconfirmation" RFA's less controversial.
Note: Option 2 was modified around 22:08, 15 December 2024 (UTC).
Note: Added option 3. theleekycauldron (talk • she/her) 22:12, 15 December 2024 (UTC)
- 2 per Kline's comment at Hog Farm's RfA. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. charlotte 👸🎄 21:22, 15 December 2024 (UTC)
- Also fine with 3 charlotte 👸♥📱 22:23, 15 December 2024 (UTC)
- There is ongoing discussion about this at Wikipedia:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial. CMD (talk) 21:24, 15 December 2024 (UTC)
- 2, after thought. I don't think 3 provides much benefit, and creating a separate class of RfAs that are speedily passed feels like a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)". CMD (talk) 14:49, 16 December 2024 (UTC)
best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)"
- I like this idea. If option 2 comes out as consensus, I think this small change would be a step in the right direction, as the "this isn't the best use of time" crowd (myself included) would be able to quickly identify the type of RFAs they don't want to participate in. BugGhost 🦗👻 11:05, 17 December 2024 (UTC)
- I think that's a great idea. I would support adding some text encouraging people who are considering seeking reconfirmation to add (RRfA) or (reconfirmation) after their username in the RfA page title. That way people who are averse to reading or participating in reconfirmations can easily avoid them, and no one is confused about what is going on. 28bytes (talk) 14:23, 17 December 2024 (UTC)
- I think this would be a great idea if it differentiated these from recall RfAs. Aaron Liu (talk) 18:37, 17 December 2024 (UTC)
- If we are differentiating three types of RFA we need three terms. Post-recall RFAs are referred to as "reconfirmation RFAs", "Re-RFAS" or "RRFAs" in multiple places, so ones of the type being discussed here are the ones that should take the new term. "Voluntary reconfirmation RFA" (VRRFA or just VRFA) is the only thing that comes to mind but others will probably have better ideas. Thryduulf (talk) 21:00, 17 December 2024 (UTC)
- 1 * Pppery * it has begun... 21:25, 15 December 2024 (UTC)
- 2 I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. El Beeblerino if you're not into the whole brevity thing 21:27, 15 December 2024 (UTC)
- 2 Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. Schazjmd (talk) 21:32, 15 December 2024 (UTC)
- Of course they may request at RfA. They shouldn't but they may. This RfA feels like it does nothing to address the criticism actually in play and per the link to the idea lab discussion it's premature to boot. Barkeep49 (talk) 21:38, 15 December 2024 (UTC)
- 2 per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Schazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't [request the tools back at RFA]". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. Thryduulf (talk) 21:42, 15 December 2024 (UTC)
- When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, Barkeep49 (talk) 21:44, 15 December 2024 (UTC)
- I've started that discussion as a subsection to the linked VPI discussion. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 1 or 3. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged that folks that don't need to RFA are RFAing again. 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and User:Enterprisey/rfa-count-toolbar.js). –Novem Linguae (talk) 21:44, 15 December 2024 (UTC)
- Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. If (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, then we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to template:RFA (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. Thryduulf (talk) 22:11, 15 December 2024 (UTC)
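To illustrate the suggestion above: a minimal sketch of what such a flag could look like inside template:RFA, assuming a hypothetical `reconfirmation` parameter plus illustrative class and category names (the `#ifeq` parser function is standard MediaWiki; everything else here is an assumption, not the template's actual code):

```wikitext
<!-- Hypothetical addition to Template:RFA; the parameter, CSS class,
     and category names below are illustrative assumptions. -->
{{#ifeq: {{{reconfirmation|}}} | y
| <span class="rfa-reconfirmation">(reconfirmation RfA)</span>[[Category:Reconfirmation requests for adminship]]
| <!-- ordinary RfA: no extra output -->
}}
```

Bots, gadgets, and statistics scripts could then count or filter reconfirmation RfAs separately by checking for the category or the CSS class.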
- Option 3 looks like a good compromise. I'd support that too. –Novem Linguae (talk) 22:15, 15 December 2024 (UTC)
- I'm weakly opposed to option 3; editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that a quick endorsement was all that was required, then they could have had that at BN; they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 2. We should not make it more difficult for administrators to be held accountable for their actions in the way they please. JJPMaster (she/they) 22:00, 15 December 2024 (UTC)
- Added option 3 above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. theleekycauldron (talk • she/her) 22:11, 15 December 2024 (UTC)
- 2 – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — Jkudlick ⚓ (talk) 22:12, 15 December 2024 (UTC)
- I would also support option 3 if the time is extended to 72 hours instead of 48. That, however, is a detail that can be worked out after this RfC. — Jkudlick ⚓ (talk) 02:05, 16 December 2024 (UTC)
- Option 3 per leek. voorts (talk/contributions) 22:16, 15 December 2024 (UTC)
- A further note: option 3 gives 'crats the discretion to SNOW close a successful voluntary re-RfA; it doesn't require such a SNOW close, and I trust the 'crats to keep an RfA open if an admin has a good reason for doing so. voorts (talk/contributions) 23:24, 16 December 2024 (UTC)
- 2 as per JJPMaster. Regards, --Goldsztajn (talk) 22:20, 15 December 2024 (UTC)
- Option 2 (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that WP:RECALL is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – bradv 22:22, 15 December 2024 (UTC)
- (edit conflict) I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a get-out-of-recall-free card for a year. —Cryptic 22:25, 15 December 2024 (UTC)
- @Cryptic hostile to what? Thryduulf (talk) 22:26, 15 December 2024 (UTC)
- 2, distant second preference 3. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. Special:Random and Wikipedia:Backlog are always there. -- Tamzin[cetacean needed] (they|xe|🤷) 23:31, 15 December 2024 (UTC)
- 1 would be my preference, but I don't think we need a specific rule for this. -- Ajraddatz (talk) 23:36, 15 December 2024 (UTC)
- Option 1. No second preference between 2 or 3. As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. Daniel Quinlan (talk) 01:19, 16 December 2024 (UTC)
- After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. Daniel Quinlan (talk) 07:36, 16 December 2024 (UTC)
- Option 2, i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. 28bytes (talk) 01:24, 16 December 2024 (UTC)
- Option 2 nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. LEPRICAVARK (talk) 01:49, 16 December 2024 (UTC)
- 1 or 3 per Novem Linguae. C F A 02:35, 16 December 2024 (UTC)
- Option 3: Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in WP:SNOW. Gnomingstuff (talk) 03:42, 16 December 2024 (UTC)
- Update: Option 2 seems to be the consensus and I also would be fine with that. Gnomingstuff (talk) 18:10, 19 December 2024 (UTC)
- Option 3 per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. Conyo14 (talk) 04:25, 16 December 2024 (UTC)
- As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. HouseBlaster (talk • he/they) 04:31, 16 December 2024 (UTC)
- I've seen this assertion made multiple times now that
[RFA] requires a lot of time from the community
, yet nowhere has anybody articulated how or why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? Thryduulf (talk) 04:58, 16 December 2024 (UTC)
- I think there are a number of factors, and people are summing it up as "time-wasting" or similar:
- BN is designed for this exact scenario. It's also clearly a less contentious process.
- Snow closures are a good example of how we try to avoid wasting community time on unnecessary process, and the same reasoning applies here. Wikipedia is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
- If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks.
- The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future.
- Daniel Quinlan (talk) 07:31, 16 December 2024 (UTC)
- Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even realise that the RFA is not worth their time, they have to:
- Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words)
- Think, "oh, they're an an ex-admin, I wonder why they're going through RFA, what was their cloud"
- Read through the comments and votes to see if any issues have been brought up (another ~1000 words)
- None have
- Realise their input is not necessary and this could have been done at BN
- This process will be repeated by hundreds of editors over the course of a week. BugGhost 🦗👻 08:07, 16 December 2024 (UTC)
- That they were former admins has always been the first two sentences of their RfA’s statement, sentences which are immediately followed by that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, then they just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. Aaron Liu (talk) 13:15, 16 December 2024 (UTC)
- Not everyone can skim things easily - it personally takes me a while to read sections. I don't know if they're going to bury the lede and say something like "Also I made 10,000 insane redirects and then decided to take a break just before arbcom launched a case" in paragraph 6. Hog Farm's self nom had two paragraphs about disputes and it takes more than 30 seconds to unpick that and determine if that is a "cloud" or not. Even for reconfirmations, it definitely takes more than 30 seconds to determine a conclusion. BugGhost 🦗👻 11:21, 17 December 2024 (UTC)
- They said they resigned due to personal time commitments. That is directly saying they weren't under a cloud, so I'll believe them unless someone claims the contrary in the oppose section. If the disputes section contained a cloud, the oppose section would have said so. One chooses to examine such nominations like normal RfAs. Aaron Liu (talk) 18:47, 17 December 2024 (UTC)
- Just to double check, you're saying that whenever you go onto an RFA you expect any reason to oppose to already be listed by someone else, and no thought is required? I am beginning to see how you are able to assess an RFA in under 30 seconds. BugGhost 🦗👻 23:08, 17 December 2024 (UTC)
- Something in their statement would be an incredibly obvious reason. We are talking about the assessment whether to examine and whether the candidate could've used BN. Aaron Liu (talk) 12:52, 18 December 2024 (UTC)
- @Thryduulf let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Wikipedia activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when that time would otherwise have been spent on other wiki activities. Best, Barkeep49 (talk) 22:49, 16 December 2024 (UTC)
- Absolutely nothing compels anybody to spend any time evaluating an RFA. If you think your wiki time is better spent elsewhere than evaluating an RFA candidate, then spend it elsewhere. That way only those who do think it is a good use of their time will participate and everybody wins. You win by not spending your time on something that you don't think is worth it, those who do participate don't have their time wasted by having to read comments (that contradict explicit policy) about how the RFA is a waste of time. Personally I regard evaluating whether a long-time admin still has the approval of the community to be a very good use of community time, you are free to disagree, but please don't waste my time by forcing me to read comments about how you think I'm wasting my time. Thryduulf (talk) 23:39, 16 December 2024 (UTC)
- I am not saying you or anyone else is wasting time and am surprised you are so fervently insisting I am. Best, Barkeep49 (talk) 03:34, 17 December 2024 (UTC)
- I don't understand how your argument that it is not a good use of community time is any different from arguing that it is a waste of time? Thryduulf (talk) 09:08, 17 December 2024 (UTC)
- Option 2 I don't mind the re-RFAs, but I'd appreciate if we encouraged restoration via BN instead, I just object to making it mandatory. EggRoll97 (talk) 06:23, 16 December 2024 (UTC)
- Option 2. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. Toadspike [Talk] 07:24, 16 December 2024 (UTC)
- Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFAs should only be done in exceptional circumstances. Graham87 (talk) 08:46, 16 December 2024 (UTC)
- Option 1 as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability are very good things, and we need more of them for resysoppings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input are outlined over on the idea lab. BugGhost 🦗👻 08:59, 16 December 2024 (UTC)
- Option 2, though I'd be for option 3 too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people that don't like these could ignore them. ♠JCW555 (talk)♠ 09:13, 16 December 2024 (UTC)
- Option 2 because WP:ADMINISTRATORS is well-established policy. Read WP:ADMINISTRATORS#Restoration of admin tools, which says quite clearly,
Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.
I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is specifically permitted by policy. It is bewildering to me. Cullen328 (talk) 09:56, 16 December 2024 (UTC)
- Option 2 & 3 I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay like it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. JuxtaposedJacob (talk) | :) | he/him | 10:45, 16 December 2024 (UTC)
- Option 2, for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Option 1 I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – DreamRimmer (talk) 12:11, 16 December 2024 (UTC)
- Option 2 I don't think prohibition makes sense. It also has weird side effects. eg: some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing a RFA. (maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admins' policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.) Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. ProcrastinatingReader (talk) 12:33, 16 December 2024 (UTC)
- Option 2 It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on WTT's re-RFA regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —k6ka 🍁 (Talk · Contributions) 15:24, 16 December 2024 (UTC)
- Option 1 - Option 2 is a waste of community time. - Ratnahastin (talk) 15:30, 16 December 2024 (UTC)
- 2 is fine. Strong oppose to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC which seems a waste of community time, but there we are). —Kusma (talk) 16:59, 16 December 2024 (UTC)
- 2, with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. Natg 19 (talk) 17:56, 16 December 2024 (UTC)
- 2 Sure, and clarify that should such an RFA be unsuccessful they may only regain the tools through a future RfA. — xaosflux Talk 18:03, 16 December 2024 (UTC)
- Option 2 If contributing to such an RFA is a waste of your time, just don't participate. TheWikiToby (talk) 18:43, 16 December 2024 (UTC)
- No individual is wasting their time participating. Instead the person asking for a re-rfa is using tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, Barkeep49 (talk) 22:55, 16 December 2024 (UTC)
- I agree with you (I think) that it's a matter of perspective. For me, clicking the RFA link in my watchlist and reading the first paragraph of Hog Farm's nomination (where they explained that they were already a respected admin) took me about 10 seconds. Ten seconds is nothing; in my opinion, this is just a nonissue. But then again, I'm not an admin, checkuser, or an oversighter. Maybe the time to read such a nomination is really wasting their time. I don't know. TheWikiToby (talk) 23:15, 16 December 2024 (UTC)
- I'm an admin and an oversighter (but not a checkuser). None of my time was wasted by either WTT or Hog Farm's nominations. Thryduulf (talk) 23:30, 16 December 2024 (UTC)
- 2. Maintain the status quo. And stop worrying about a trivial non-problem. --Tryptofish (talk) 22:57, 16 December 2024 (UTC)
- 2. This reminds me of banning plastic straws (bear with me). Sure, I suppose in theory, that this is a burden on the community's time (just as straws do end up in landfills/the ocean). However, the amount of community time that is drained is minuscule compared to the amount of community time drained in countless, countless other fora and processes (just like the volume of plastic waste contributed by plastic straws is less than 0.001% of the total plastic waste). When WP becomes an efficient, well oiled machine, then maybe we can talk about saving community time by banning re-RFA's. But this is much ado about nothing, and indeed this plan to save people from themselves, and not allow them to simply decide whether to participate or not, is arguably more damaging than some re-RFAs (just as banning straws convinced some people that "these save-the-planet people are so ridiculous that I'm not going to bother listening to them about anything."). And, in fact, on a separate note, I'd actually love it if more admins just ran a re-RFA whenever they wanted. They would certainly get better feedback than just posting "What do my talk page watchers think?" on their own talk page. Or waiting until they get yelled at on their talk page, AN/ANI, AARV, etc. We say we want admins to respect feedback; does it have to be in a recall petition? --Floquenbeam (talk) 23:44, 16 December 2024 (UTC)
- What meaningful feedback has Hog Farm gotten? "A minority of people think you chose poorly in choosing this process to regain adminship". What are they supposed to do with that? I share your desire for editors to share meaningful feedback with administrators. My own attempt yielded some, though mainly offwiki where I was told I was both too cautious and too impetuous (and despite the seeming contradiction each was valuable in its own way). So yes let's find ways to get meaningful feedback to admins outside of recall or being dragged to ANI. Unfortunately re-RfA seems to be poorly suited to the task and so we can likely find a better way. Best, Barkeep49 (talk) 03:38, 17 December 2024 (UTC)
- Let us all take some comfort in the fact that no one has yet criticized this RfC comment as being a straw man argument. --Tryptofish (talk) 23:58, 18 December 2024 (UTC)
- No hard rule, but we should socially discourage confirmation RfAs. There is a difference between a hard rule and a soft social rule. A hard rule against confirmation RfAs, like option 1, would not do a good job of accounting for edge cases and would thus be ultimately detrimental here. But a soft social rule against them would be beneficial. Unfortunately, that is not one of the options of this RfC. In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. (Let me be clear: I am not suggesting that is why either Worm or Hog Farm re-upped, I'm just trying to create a general-purpose rule here.) That takes some introspection and humility to ask yourself: is it worth me inviting two or three hundred people to spend part of their lives to comment on me as a person? A lot of people have thrown around editor time in their reasonings. Obviously, broad generalizations about it aren't convincing anyone. So let me just share my own experience. I saw the watchlist notice that a new RfA was being run. I reacted with some excitement, because I always like seeing new admins. When I got to the page and saw Hog Farm's name, I immediately thought "isn't he already an admin?" I then assumed, ah, it's just the classic RfA reaction at seeing a qualified candidate, so I'll probably support him since I already think he's an admin. But then as I started to do my due diligence and read, I saw that he really, truly, already had been an admin. At that point, my previous excitement turned to a certain unease. I had voted yes for Worm's confirmation RfA, but here was another...and I realized that my blind support for Worm might have been the start of an entirely new process. I then thought "bet there's an RfC going about this," and came here. I then spent a while polishing up my essay on editor time, before taking time to write this message. All in all, I probably spent a good hour doing this. Previously, I'd just been clicking the random article button and gnoming. So, the long-winded moral: yeah, this did eat up a lot of my editor time that could have been and was being spent doing something else. And I'd do it again! It was important to do my research and to comment here. But in the future...maybe I won't react quite as excitedly to seeing that RfA notice. Maybe I'll feel a little pang of dread...wondering if it's going to be a confirmation RfA. We can't pretend that confirmation RfAs are costless, and that we don't lose anything even if editors just ignore them. When run, it should be because they are necessary. CaptainEek Edits Ho Cap'n!⚓ 03:29, 17 December 2024 (UTC)
- And for what it's worth, support Option 3 because I'm generally a fan of putting more tools in people's toolboxes. CaptainEek Edits Ho Cap'n!⚓ 03:36, 17 December 2024 (UTC)
In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers.
Asking the community whether you still have their trust to be an administrator, which is what a reconfirmation RFA is, is a good reason. I expect getting a near-unanimous "yes" is good for one's ego, but that's just a (nice) side-effect of the far more important benefits to the entire community: a trusted administrator. The time you claim is being eaten up unnecessarily by reconfirmation RFAs was actually taken up by you choosing to spend your time writing an essay about using time for things you don't approve of and then hunting out an RFC in which you wrote another short essay about using time on things you don't approve of. Absolutely none of that is a necessary consequence of reconfirmation RFAs - indeed the response consistent with your stated goals would have been to read the first two sentences of Hog Farm's RFA and then close the tab and return to whatever else it was you were doing. Thryduulf (talk) 09:16, 17 December 2024 (UTC)
- WTT's and Hog Farm's RFAs would have been completely uncontentious, something I hope for at RfA and certainly the opposite of what I "dread" at RfA, if it were not for the people who attack the very concept of standing for RfA again despite policy being crystal clear that it is absolutely fine. I don't see how any blame for this situation can be put on WTT or HF. We can't pretend that dismissing uncontentious reconfirmation RfAs is costless; discouraging them removes one of the few remaining potentially wholesome bits about the process. —Kusma (talk) 09:53, 17 December 2024 (UTC)
- @CaptainEek Would you find it better if watchlist notices and similar said "(re?)confirmation RFA" instead of "RFA"? Say, for all voluntary RFAs from an existing admin or someone who could have used BN? As a different point, I would be quite against any social discouraging if we're not making a hard rule as such. Social discouraging is what got us the opposes at WTT/Hog Farm's RFAs, which I found quite distasteful and badgering. If people disagree with a process, they should change it. But if the process remains the same, I think it's important to not enable RFA's toxicity by encouraging others to namecall or re-argue the process in each RRFA. It's a short road from social discouragement to toxicity, unfortunately. Soni (talk) 18:41, 19 December 2024 (UTC)
- Yes I think the watchlist notice should specify what kind of RfA, especially with the introduction of recall. CaptainEek Edits Ho Cap'n!⚓ 16:49, 23 December 2024 (UTC)
- Option 1. Will prevent the unnecessary drama trend we have been seeing recently. – Ammarpad (talk) 07:18, 17 December 2024 (UTC)
- Option 2 if people think there's a waste of community time, don't spend your time voting or discussing. Or add "reconfirmation" or similar to the watchlist notice. ~~ AirshipJungleman29 (talk) 15:08, 17 December 2024 (UTC)
- Option 3 (which I think is a subset of option 2, so I'm okay with the status quo, but I want to endorse giving 'crats the option to SNOW). While they do come under scrutiny from time to time for the extensive discussions in the "maybe" zone following RfAs, this should be taken as an indication that they are unlikely to do something like close it as SNOW in the event there are real and substantial concerns being raised. This is an okay tool to give the 'crats. As far as I can tell, no one has ever accused them of moving too quickly in this direction (not criticism; love you all, keep up the good work). Bobby Cohn (talk) 17:26, 17 December 2024 (UTC)
- Option 3 or Option 2. Further, if Option 2 passes, I expect it also ends all the bickering about lost community time. A consensus explicitly in favour of "This is allowed" should also be a consensus to discourage relitigation of this RFC. Soni (talk) 17:35, 17 December 2024 (UTC)
- Option 2: Admins who do not exude entitlement are to be praised. Those who criticize this humility should have a look in the mirror before accusing those who ask for reanointment from the community of "arrogance". I agree that it wouldn't be a bad idea to mention in parentheses that the RFA is a reconfirmation (watchlist) and wouldn't see any problem with crats snow-closing after, say, 96 hours. -- SashiRolls 🌿 · 🍥 18:48, 17 December 2024 (UTC)
- I disagree that BN shouldn't be the normal route. RfA is already as hard and soul-crushing as it is. Aaron Liu (talk) 20:45, 17 December 2024 (UTC)
- Who are you disagreeing with? This RfC is about voluntary RRfA. -- SashiRolls 🌿 · 🍥 20:59, 17 December 2024 (UTC)
- I know. I see a sizable amount of commenters here starting to say that voluntary re-RfAs should be encouraged, and your first sentence can be easily read as implying that admins who use the BN route exude entitlement. I disagree with that (see my reply to Thryduulf below). Aaron Liu (talk) 12:56, 18 December 2024 (UTC)
- One way to improve the reputation of RFA is for there to be more RFAs that are not terrible, such as reconfirmations of admins who are doing/have done a good job who sail through with many positive comments. There is no proposal to make RFA mandatory in circumstances it currently isn't, only to reaffirm that those who voluntarily choose RFA are entitled to do so. Thryduulf (talk) 21:06, 17 December 2024 (UTC)
- I know it's not a proposal, but there's enough people talking about this so far that it could become a proposal. There's nearly nothing in between that could've lost the trust of the community. I'm sure there are many who do not want to be pressured into this without good reason. Aaron Liu (talk) 12:57, 18 December 2024 (UTC)
- Absolutely nobody is proposing, suggesting or hinting here that reconfirmation RFAs should become mandatory - other than comments from a few people who oppose the idea of people voluntarily choosing to do something policy explicitly allows them to choose to do. The best way to avoid people being pressured into being accused of arrogance for seeking reconfirmation of their status from the community is to sanction those people who accuse people of arrogance in such circumstances, as such comments are in flagrant breach of AGF and NPA. Thryduulf (talk) 14:56, 18 December 2024 (UTC)
- Yes, I’m saying that they should not become preferred. There should be no social pressure to do RfA instead of BN, only pressure intrinsic to the candidate. Aaron Liu (talk) 15:37, 18 December 2024 (UTC)
- Whether they should become preferred in any situation forms no part of this proposal in any way shape or form - this seeks only to reaffirm that they are permitted. A separate suggestion, completely independent of this one, is to encourage (explicitly not mandate) them in some (but explicitly not all) situations. All discussions on this topic would benefit if people stopped misrepresenting the policies and proposals - especially when the falsehoods have been explicitly called out. Thryduulf (talk) 15:49, 18 December 2024 (UTC)
- I am talking and worrying over that separate proposal many here are suggesting. I don’t intend to oppose Option 2, and sorry if I came off that way. Aaron Liu (talk) 16:29, 18 December 2024 (UTC)
- Option 2. In fact, I'm inclined to encourage an RRfA over BN, because nothing requires editors to participate in an RRfA, but the resulting discussion is better for reaffirming community consensus for the former admin or otherwise providing helpful feedback. --Pinchme123 (talk) 21:45, 17 December 2024 (UTC)
- Option 2 WP:RFA has said "
Former administrators may seek reinstatement of their privileges through RfA...
" for over ten years and this is not a problem. I liked the opportunity to be consulted in the current RfA and don't consider this a waste of time. Andrew🐉(talk) 22:14, 17 December 2024 (UTC) - Option 2. People who think it’s not a good use of their time always have the option to scroll past. Innisfree987 (talk) 01:41, 18 December 2024 (UTC)
- 2 - If an administrator gives up sysop access because they plan to be inactive for a while and want to minimize the attack surface of Wikipedia, they should be able to ask for permissions back the quickest way possible. If an administrator resigns because they do not intend to do the job anymore, and later changes their mind, they should request a community discussion. The right course of action depends on the situation. Jehochman Talk 14:00, 18 December 2024 (UTC)
- Option 1. I've watched a lot of RFAs and re-RFAs over the years. There's a darn good reason why the community developed the "go to BN" option: saves time, is straightforward, and if there are issues that point to a re-RFA, they're quickly surfaced. People who refuse to take the community-developed process of going to BN first are basically telling the community that they need the community's full attention on their quest to re-admin. Yes, there are those who may be directed to re-RFA by the bureaucrats, in which case, they have followed the community's carefully crafted process, and their re-RFA should be evaluated from that perspective. Risker (talk) 02:34, 19 December 2024 (UTC)
- Option 2. If people want to choose to go through an RFA, who are we to stop them? Stifle (talk) 10:25, 19 December 2024 (UTC)
- Option 2 (status quo/no changes) per meh. This is bureaucratic rulemongering at its finest. Every time RFA reform comes up some editors want admins to be required to periodically reconfirm, then when some admins decide to reconfirm voluntarily, suddenly that's seen as a bad thing. The correct thing to do here is nothing. If you don't like voluntary reconfirmation RFAs, you are not required to participate in them. Ivanvector (Talk/Edits) 19:34, 19 December 2024 (UTC)
- Option 2 I would probably counsel just going to BN most of the time, however there are exceptions and edge cases. To this point these RfAs have been few in number, so the costs incurred are relatively minor. If the number becomes large then it might be worth revisiting, but I don't see that as likely. Some people will probably impose social costs on those who start them by opposing these RfAs, with the usual result, but that doesn't really change the overall analysis. Perhaps it would be better if our idiosyncratic internal logic didn't produce such outcomes, but that's a separate issue and frankly not really worth fighting over either. There are probably some meta issues here I'm unaware of; it's been a long time since I've had my finger on the community pulse, so to speak, but they tend to matter far less than people think they do. 184.152.68.190 (talk) 02:28, 20 December 2024 (UTC)
- Option 1, per WP:POINT, WP:NOT#SOCIALNETWORK, WP:NOT#BUREAUCRACY, WP:NOTABOUTYOU, and related principles. We all have far better things to do than read through and argue in/about a totally unnecessary RfA invoked as a "Show me some love!" abuse of process and waste of community time and productivity. I could live with option 3, if option 1 doesn't fly (i.e. shut these silly things down as quickly as possible). But option 2 is just out of the question. — SMcCandlish ☏ ¢ 😼 04:28, 22 December 2024 (UTC)
- Except none of the re-RFAs complained about have been
RfA invoked as a "Show me some love!" abuse of process
, you're arguing against a strawman. Thryduulf (talk) 11:41, 22 December 2024 (UTC)
- It's entirely a matter of opinion and perception, or A) this RfC wouldn't exist, and B) various of your fellow admins like TonyBallioni would not have come to the same conclusion I have. Whether the underlying intent (which no one can determine, lacking as we do any magical mind-reading powers) is solely egotistical is ultimately irrelevant. The actual effect (what matters) of doing this, whether for attention or because you've somehow confused yourself into thinking it needs to be done, is precisely the same: a showy waste of community volunteers' time with no result other than a bunch of attention being drawn to a particular editor and their deeds, without any actual need for the community to engage in a lengthy formal process to re-examine them. — SMcCandlish ☏ ¢ 😼 05:49, 23 December 2024 (UTC)
or because you've somehow confused yourself into thinking it needs to be done
I and many others here agree and stand behind the very reasoning that has "confused" such candidates, at least for WTT. Aaron Liu (talk) 15:37, 23 December 2024 (UTC)
- Option 2. I see no legitimate reason why we should be changing the status quo. Sure, some former admins might find it easier to go through BN, and it might save community time, and most former admins already choose the easier option. However, if a candidate last ran for adminship several years ago, or if issues were raised during their tenure as admin, then it may be helpful for them to ask for community feedback, anyway. There is no "wasted" community time in such a case. I really don't get the claims that this violates WP:POINT, because it really doesn't apply when a former admin last ran for adminship 10 or 20 years ago or wants to know if they still have community trust. On the other hand, if an editor thinks a re-RFA is a waste of community time, they can simply choose not to participate in that RFA. Opposing individual candidates' re-RFAs based solely on opposition to re-RFAs in general is a violation of WP:POINT. – Epicgenius (talk) 14:46, 22 December 2024 (UTC)
- But this isn't the status quo? We've never done a re-RfA before now. The question is whether this previously unconsidered process, which appeared as an emergent behavior, is a feature or a bug. CaptainEek Edits Ho Cap'n!⚓ 23:01, 22 December 2024 (UTC)
- There have been lots of re-RFAs, historically. They were more common in the 2000s. Evercat in 2003 is the earliest I can find, back before the re-sysopping system had been worked out fully. Croat Canuck back in 2007 was snow-closed after one day, because the nominator and applicant didn't know that they could have gone to the bureaucrats' noticeboard. For more modern examples, HJ Mitchell (2011) is relatively similar to the recent re-RFAs in the sense that the admin resigned uncontroversially but chose to re-RFA before getting the tools back. Immediately following and inspired by HJ Mitchell's, there was the slightly more controversial SarekOfVulcan. That ended successful re-RFAs until 2019's Floquenbeam, which crat-chatted. Since then, there have been none that I remember. There have been several re-RFAs from admins who were de-sysopped or at serious risk of de-sysopping, and a few interesting edge cases such as the potentially optional yet no-consensus SarekOfVulcan 3 in 2014 and the Rich Farmbrough case in 2015, but those are very different than what we're talking about today. GreenLipstickLesbian (talk) 00:01, 23 December 2024 (UTC)
- To add on to that, Wikipedia:Requests for adminship/Harrias 2 was technically a reconfirmation RFA, which in a sense can be treated as a re-RFA. My point is, there is some precedent for re-RFAs, but the current guidelines are ambiguous as to when re-RFAs are or aren't allowed. – Epicgenius (talk) 16:34, 23 December 2024 (UTC)
- Well thank you both, I've learned something new today. It turns out I was working on a false assumption. It has just been so long since a re-RfA that I assumed it was a truly new phenomenon, especially since there were two in short succession. I still can't say I'm thrilled by the process and think it should be used sparingly, but perhaps I was a bit over concerned. CaptainEek Edits Ho Cap'n!⚓ 16:47, 23 December 2024 (UTC)
- Option 2 or 3 per Gnoming and CaptainEek. Such RfAs only require at most 30 seconds for one to decide whether or not to spend their time on examination. Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Voluntary reconfirmation RfAs are socially discouraged, so there is usually a very good reason for someone to go back there, such as accountability for past statements in the case of WTT or large disputes during adminship in the case of Hog Farm. I don't think we should outright deny these, and there is no disruption incurred if we don't. Aaron Liu (talk) 15:44, 23 December 2024 (UTC)
- Option 2 but for largely the reasons presented by CaptainEek. KevinL (aka L235 · t · c) 21:58, 23 December 2024 (UTC)
- Option 2 (fine with better labeling) These don't seem harmful to me and, if I don't have time, I'll skip one and trust the judgment of my fellow editors. No objection to better labeling them though, as discussed above. RevelationDirect (talk) 22:36, 23 December 2024 (UTC)
- Option 1 because it's just a waste of time to go through and !vote on candidates who just want the mop restored when he or she or they could get it restored at BN with no problems. But I can also see option 2 being good for a former mod not in good standing. Therapyisgood (talk) 23:05, 23 December 2024 (UTC)
- If you think it is a waste of time to !vote on a candidate, just don't vote on that candidate and none of your time has been wasted. Thryduulf (talk) 23:28, 23 December 2024 (UTC)
- Option 2 per QoH (or me? who knows...) Kline • talk • contribs 04:24, 27 December 2024 (UTC)
- Option 2 Just because someone may be entitled to get the bit back doesn't mean they necessarily should. Look at my RFA3. I did not resign under a cloud, so I could have gotten the bit back by request. However, the RFA established that I did not have the community support at that point, so it was a good thing that I chose that path. I don't particularly support option 3, but I could deal with it. --SarekOfVulcan (talk) 16:05, 27 December 2024 (UTC)
- Option 1 Asking hundreds of people to vet a candidate who has already passed a RfA and is eligible to get the tools back at BN is a waste of the community's time. -- Pawnkingthree (talk) 16:21, 27 December 2024 (UTC)
- Option 2 Abolishing RFA in favour of BN may need to be considered, but I am unconvinced by arguments about RFA being a waste of time. Hawkeye7 (discuss) 19:21, 27 December 2024 (UTC)
- Option 2 I really don't think there's a problem that needs to be fixed here. I am grateful at least a couple administrators have asked for the support of the community recently. SportingFlyer T·C 00:12, 29 December 2024 (UTC)
- Option 2. Keep the status quo of "any editor is free to re-request the tools through the requests for adminship process". Voluntary RfAs are rare enough not to be a problem; it's not as though we are overburdened with RfAs. And it's my time to waste. --Malcolmxl5 (talk) 17:58, 7 January 2025 (UTC)
- Option 2 or Option 3. These are unlikely to happen anyway, it's not like they're going to become a trend. I'm already wasting my time here instead of other more important activities anyway, so what's a little more time spent giving an easy support? fanfanboy (blocktalk) 16:39, 10 January 2025 (UTC)
- Option 1 Agree with Daniel Quinlan that for the problematic editors eligible for re-sysop at BN despite unpopularity, we should rely on our new process of admin recall, rather than pre-emptive RRFAs. I'll add the novel argument that when goliaths like Hog Farm unnecessarily showcase their achievements at RFA, it scares off nonetheless qualified candidates. ViridianPenguin 🐧 ( 💬 ) 17:39, 14 January 2025 (UTC)
- Option 2 per Gnoming/CaptainEek. Bluethricecreamman (talk) 20:04, 14 January 2025 (UTC)
- Option 2 or Option 3 - if you regard a re-RfA as a waste of your time, just don't waste it by participating; it's not mandatory. BastunĖġáḍβáś₮ŭŃ! 12:13, 15 January 2025 (UTC)
Discussion
- @Voorts: If option 2 gets consensus, how would this RfC change the wording "Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process."? Or is this an attempt to see if that option no longer has consensus? If so, why wasn't alternative wording proposed? As I noted above, this feels premature in multiple ways. Best, Barkeep49 (talk) 21:43, 15 December 2024 (UTC)
- That is not actually true. ArbCom can (and has) forbidden some editors from re-requesting the tools through RFA. Hawkeye7 (discuss) 19:21, 27 December 2024 (UTC)
- I've re-opened this per a request on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. voorts (talk/contributions) 21:53, 15 December 2024 (UTC)
- The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — Jkudlick ⚓ (talk) 22:01, 15 December 2024 (UTC)
- @Voorts, can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – bradv 22:02, 15 December 2024 (UTC)
- Question: May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. Aaron Liu (talk) 22:35, 15 December 2024 (UTC)
- The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week-long, publicly advertised back-patting to be a productive use of volunteer time. -- Ajraddatz (talk) 23:33, 15 December 2024 (UTC)
- But no volunteer is obligated to pat such candidates on the back. Aaron Liu (talk) 00:33, 16 December 2024 (UTC)
- Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- Ajraddatz (talk) 01:52, 16 December 2024 (UTC)
- Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Aaron Liu (talk) 02:31, 16 December 2024 (UTC)
- Except someone who has no need for advanced tools and is not going to use them in any useful fashion, would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. Only in death does duty end (talk) 09:05, 16 December 2024 (UTC)
- I’m confused. Adminship requires continued use of the tools. If you think they’re suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. Aaron Liu (talk) 13:02, 16 December 2024 (UTC)
- I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining administrative privileges, or even initiate a recall petition. I think as discussed on Barkeep49's talk page, we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower cost ways of determining this. isaacl (talk) 00:32, 16 December 2024 (UTC)
- Regarding option 3, establishing a consensus view takes patience. The intent of having a reconfirmation request for administrative privileges is counteracted by closing it swiftly. It provides incentive for rapid voting that may not provide the desired considered feedback. isaacl (talk) 17:44, 17 December 2024 (UTC)
- In re the idea that RfAs use up a lot of community time: I first started editing Wikipedia in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us at 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. Caeciliusinhorto-public (talk) 10:26, 16 December 2024 (UTC)
- What about an option 4 encouraging eligible candidates to go through BN? At the end of the Procedure section, add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Discouraging RFAs is the second last thing we should be doing (after prohibiting them), rather per my comments here and in the VPI discussion we should be encouraging former administrators to demonstrate that they still have the approval of the community. Thryduulf (talk) 12:16, 16 December 2024 (UTC)
- I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. Gnomingstuff (talk) 15:04, 16 December 2024 (UTC)
- If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. Thryduulf (talk) 15:30, 16 December 2024 (UTC)
- Also a solid option, the question is whether people will actually do it. Gnomingstuff (talk) 22:55, 16 December 2024 (UTC)
- The simplest way would be to just quickly hat/remove all such comments. Pretty soon people will stop making them. Thryduulf (talk) 23:20, 16 December 2024 (UTC)
- This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal solves any problem we actually have. UninvitedCompany 21:56, 16 December 2024 (UTC)
- This apparent negative feeling evoked at an RFA for a former sysop everyone agrees is fully qualified and trusted certainly will put a bad taste in the mouths of other former admins who might consider a reconfirmation RFA without first visiting BN. This comes in the wake of Worm That Turned's similar rerun. BusterD (talk) 23:29, 16 December 2024 (UTC)
- Nobody should ever be discouraged from seeking community consensus for significant changes. Adminship is a significant change. Thryduulf (talk) 23:32, 16 December 2024 (UTC)
- No argument from me. I was a big Hog Farm backer way back when he was merely one of Wikipedia's best content contributors. BusterD (talk) 12:10, 17 December 2024 (UTC)
- All these mentions of editor time make me have to mention The Grand Unified Theory of Editor Time (TLDR: our understanding of how editor time works is dreadfully incomplete). CaptainEek Edits Ho Cap'n!⚓ 02:44, 17 December 2024 (UTC)
- I went looking for @Tamzin's comment because I know they had hung up the tools and came back, and I was interested in their perspective. But they've given me a different epiphany. I suddenly realize why people are doing confirmation RfAs: it's because of RECALL, and the one year immunity a successful RfA gives you. Maybe everyone else already figured that one out and is thinking "well duh Eek," but I guess I hadn't :) I'm not exactly sure what to do with that epiphany, besides note the emergent behavior that policy change can create. We managed to generate an entirely new process without writing a single word about it, and that's honestly impressive :P CaptainEek Edits Ho Cap'n!⚓ 18:18, 17 December 2024 (UTC)
- Worm That Turned followed through on a pledge he made in January 2024, before the 2024 review of the request for adminship process began. I don't think a pattern can be extrapolated from a sample size of one (or even two). That being said, it's probably a good thing if admins occasionally take stock of whether or not they continue to hold the trust of the community. As I previously commented, it would be great if these admins would use a lower cost way of sampling the community's opinion. isaacl (talk) 18:31, 17 December 2024 (UTC)
- @CaptainEek: You are correct that a year's "immunity" results from a successful RRFA, but I see no evidence that this has been the reason for the RRFAs. Regards, Newyorkbrad (talk) 00:14, 22 December 2024 (UTC)
- If people decide to go through a community vote to get a one-year immunity from a process that only might lead to a community vote (which would then have a lower threshold than the one they decided to go through, and also give a year's immunity), then good for them. CMD (talk) 01:05, 22 December 2024 (UTC)
- @CaptainEek I'm mildly bothered by this comment, mildly because I assume it's lighthearted and non-serious. But just in case anyone does feel this way - I was very clear about my reasons for RRFA, I've written a lot about it, anyone is welcome to use my personal recall process without prejudice, and just to be super clear - I waive my "1 year immunity" - if someone wants to start a petition in the next year, do not use my RRfA as a reason not to. I'll update my userpage accordingly. I can't speak for Hog Farm, but his reasoning seems similar to mine, and immunity isn't it. WormTT(talk) 10:28, 23 December 2024 (UTC)
- @Worm That Turned my quickly written comment was perhaps not as clear as it could have been :) I'm sorry, I didn't mean to suggest that y'all had run for dubious reasons. As I said in my !vote, "Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here". I guess what I really meant was that the reason that we're having this somewhat spirited conversation seems to be the sense that re-RfA could provide a protection from recall. If not for recall and the one year immunity period, I doubt we'd have cared so much as to suddenly run two discussions about this. CaptainEek Edits Ho Cap'n!⚓ 16:59, 23 December 2024 (UTC)
- I don't agree. No one else has raised a concern about someone seeking a one-year respite from a recall petition. Personally, I think essentially self-initiating the recall process doesn't really fit the profile of someone who wants to avoid the recall process. (I could invent some nefarious hypothetical situation, but since opening an arbitration case is still a possibility, I don't think it would work out as planned.) isaacl (talk) 05:19, 24 December 2024 (UTC)
- I really don't think this is the reason behind WTT's and HF's reconfirmation RFAs. I don't think their RFAs had much utility and could have been avoided, but I have no doubt that their motivations were nothing other than trying to provide transparency and accountability for the community. BugGhost 🦗👻 12:04, 23 December 2024 (UTC)
- I don't really care enough about reconf RFAs to think they should be restricted, but what about a lighter ORCP-like process (maybe even in the same place) where fewer editors can indicate, "yeah OK, there aren't really any concerns here, it would probably save a bit of time if you just asked at BN". Alpha3031 (t • c) 12:40, 19 December 2024 (UTC)
- Can someone accurately describe for me what the status quo is? I reread this RfC twice now and am having a hard time figuring out what the current state of affairs is, and how the proposed alternatives will change them. Duly signed, ⛵ WaltClipper -(talk) 14:42, 13 January 2025 (UTC)
- Option 2 is the status quo. The goal of the RFC is to see if the community wants to prohibit reconfirmation RFAs (option 1). The idea is that reconfirmation RFAs take up a lot more community time than a BN request so are unnecessary. There were 2 reconfirmation RFAs recently after a long dry spell. –Novem Linguae (talk) 20:49, 13 January 2025 (UTC)
- The status quo, documented at Wikipedia:Administrators#Restoration of admin tools, is that admins who resigned without being under controversy can seek readminship through either BN (where it's usually given at the discretion of an arbitrary bureaucrat according to the section I linked) or RfA (where all normal RfA procedures apply, and you see a bunch of people saying "the candidate's wasting the community's time and could've uncontroversially gotten adminship back at BN instead"). Aaron Liu (talk) 12:27, 14 January 2025 (UTC)
Guideline against use of AI images in BLPs and medical articles?
I have recently seen AI-generated images being added to illustrate both BLPs (e.g. Laurence Boccolini, now removed) and medical articles (e.g. Legionella#Mechanism). While we don't have any clear-cut policy or guideline about these yet, they appear to be problematic. Illustrating a living person with an AI-generated image might misinform as to what that person actually looks like, while using AI in medical diagrams can lead to anatomical inaccuracies (such as the lung structure in the second image, where the pleura becomes a bronchiole twisting over the primary bronchi), or even medical misinformation. While a guideline against AI-generated images in general might be more debatable, do we at least have a consensus for a guideline against these two specific use cases?
To clarify, I am not including potentially relevant AI-generated images that only happen to include a living person (such as in Springfield pet-eating hoax), but exclusively those used to illustrate a living person in a WP:BLP context. Chaotic Enby (talk · contribs) 12:11, 30 December 2024 (UTC)
- What about all biographies, including those of dead people? The lead image shouldn't be AI-generated for any biography. - Sebbog13 (talk) 12:17, 30 December 2024 (UTC)
- Same with animals, organisms etc. - Sebbog13 (talk) 12:20, 30 December 2024 (UTC)
- I personally am strongly against using AI in biographies and medical articles - as you highlighted above, AI is absolutely not reliable in generating accurate imagery and may contribute to medical or general misinformation. I would 100% support a proposal banning AI imagery from these kinds of articles - and a recommendation to not use such imagery other than in specific scenarios. jolielover♥talk 12:28, 30 December 2024 (UTC)
- I'd prefer a guideline prohibiting the use of AI images full stop. There are too many potential issues with accuracy, honesty, copyright, etc. Has this already been proposed or discussed somewhere? – Joe (talk) 12:38, 30 December 2024 (UTC)
- There hasn't been a full discussion yet, and we have a list of uses at Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts, but it could be good to deal with clear-cut cases like this (which are already a problem) first, as the wider discussion is less certain to reach the same level of consensus. Chaotic Enby (talk · contribs) 12:44, 30 December 2024 (UTC)
- Discussions are going on at Wikipedia_talk:Biographies_of_living_persons#Proposed_addition_to_BLP_guidelines and somewhat at Wikipedia_talk:No_original_research#Editor-created_images_based_on_text_descriptions. I recommend workshopping an RfC question (or questions) then starting an RfC. Some1 (talk) 13:03, 30 December 2024 (UTC)
- Oh, didn't catch the previous discussions! I'll take a look at them, thanks! Chaotic Enby (talk · contribs) 14:45, 30 December 2024 (UTC)
- There is one very specific exception I would put to a very sensible blanket prohibition on using AI images to illustrate people, especially BLPs. That is where the person themselves is known to use that image, which I have encountered in Simon Ekpa. CMD (talk) 15:00, 30 December 2024 (UTC)
- While the Ekpa portrait is just an upscale (and I'm not sure what positive value that has for us over its source; upscaling does not add accuracy, nor is it an artistic interpretation meant to reveal something about the source), this would be hard to translate to the general case. Many AI portraits would have copyright concerns, not just from the individual (who may have announced some appropriate release for it), but due to the fact that AI portraits can lean heavily on uncredited individual sources. --Nat Gertler (talk) 16:04, 30 December 2024 (UTC)
- For the purposes of discussing whether to allow AI images at all, we should always assume that, for the purposes of (potential) policies and guidelines, there exist AI images we can legally use to illustrate every topic. We cannot use those that are not legal (including, but not limited to, copyright violations) so they are irrelevant. An image generator trained exclusively on public domain and cc0 images (and any other licenses that explicitly allow derivative works without requiring attribution) would not be subject to any copyright restrictions (other than possibly by the prompter and/or generator's license terms, which are both easy to determine). Similarly we should not base policy on the current state of the technology, but assume that the quality of its output will improve to the point it is equal to that of a skilled human artist. Thryduulf (talk) 17:45, 30 December 2024 (UTC)
- The issue is, either there are public domain/CC0 images of the person (in which case they can be used directly) or there aren't, in which case the AI is making up how a person looks. Chaotic Enby (talk · contribs) 20:00, 30 December 2024 (UTC)
- We tend to use art representations either where no photographs are available (in which case, AI will also not have access to photographs) or where what we are showing is an artist's insight on how this person is perceived, which is not something that AI can give us. In any case, we don't have to build policy now around some theoretical AI in the future; we can deal with the current reality, and policy can be adjusted if things change in the future. And even that theoretical AI does make it more difficult to detect copyvio -- Nat Gertler (talk) 20:54, 30 December 2024 (UTC)
- I wouldn't call it an upscale, given that whatever was done appears to have removed detail, but we use that image because it is specifically the edited image which was sent to VRT. CMD (talk) 10:15, 31 December 2024 (UTC)
- Is there any clarification on using purely AI-generated images vs. using AI to edit or alter images? AI tools have been implemented in a lot of photo editing software, such as to identify objects and remove them, or generate missing content. The generative expand feature would appear to be unreliable (and it is), but I use it to fill in gaps of cloudless sky produced from stitching together photos for a panorama (I don't use it if there are clouds, or for starry skies, as it produces non-existent stars or unrealistic clouds). Photos of Japan (talk) 18:18, 30 December 2024 (UTC)
- Yes, my proposal is only about AI-generated images, not AI-altered ones. That could in fact be a useful distinction to make if we want to workshop a RfC on the matter. Chaotic Enby (talk · contribs) 20:04, 30 December 2024 (UTC)
- I'm not sure if we need a clear cut policy or guideline against them... I think we treat them the same way as we would treat an editor's kitchen table sketch of the same figure. Horse Eye's Back (talk) 18:40, 30 December 2024 (UTC)
- For those wanting to ban AI images full stop, well, you are too late. Most professional image editing software, including the software in one's smartphone as well as desktop, uses AI somewhere. Noise reduction software uses AI to figure out what might be noise and what might be texture. Sharpening software uses AI to figure out what should be smooth and what might have a sharp detail it can invent. For example, a bird photo not sharp enough to capture feather detail will have feather texture imagined onto it. Same for hair. Or grass. Any image that has been cleaned up to remove litter or dust or spots will have the cleaned area AI generated based on its surroundings. The sky might be extended with AI. These examples are a bit different from a 100% imagined image created from a prompt. But probably not in a way that is useful as a rule.
- I think we should treat AI generated images the same as any user-generated image. It might be a great diagram or it might be terrible. Remove it from the article if the latter, not because someone used AI. If the image claims to photographically represent something, we may judge whether the creator has manipulated the image too much to be acceptable. For example, using AI to remove a person in the background of an image taken of the BLP subject might be perfectly fine. People did that with traditional Photoshop/Lightroom techniques for years. Using AI to generate what claims to be a photo of a notable person is on dodgy ground wrt copyright. -- Colin°Talk 19:12, 30 December 2024 (UTC)
- I'm talking about the case of using AI to generate a depiction of a living person, not using AI to alter details in the background. That is why I only talk about AI-generated images, not AI-altered images. Chaotic Enby (talk · contribs) 20:03, 30 December 2024 (UTC)
- Regarding some sort of brightline ban on the use of any such image in any medical-related article: absolutely not. For example, if someone wanted to use AI tools as opposed to other tools to make an image such as this one (as used in the "medical" article Fluconazole) I don't see a problem, so long as it is accurate. Accurate models and illustrations are useful, and that someone used AI assistance as opposed to a chisel and a rock is of no concern. — xaosflux Talk 19:26, 30 December 2024 (UTC)
- I believe that the appropriateness of AI images depends on how they are used. In BLP and medical articles such images are inappropriate, but it would be inappropriate to ban them completely across the site. By the same logic, if you want a full ban of AI, you are banning fire just because people can get burned, without considering cooking. JekyllTheFabulous (talk) 13:33, 31 December 2024 (UTC)
- I agree that AI-generated images should not be used in most cases. They essentially serve as misinformation. I also don't think that they're really comparable to drawings or sketches because AI-generation uses a level of photorealism that can easily trick the untrained eye into thinking it is real. Di (they-them) (talk) 20:46, 30 December 2024 (UTC)
- AI doesn't need to be photorealistic though. I see two potential issues with AI. The first is images that might deceive the viewer into thinking they are photos, when they are not. The second is potential copyright issues. Outside of the copyright issues I don't see any unique concerns for an AI-generated image (that doesn't appear photorealistic). Any accuracy issues can be handled the same way a user who manually drew an image could be handled. Photos of Japan (talk) 21:46, 30 December 2024 (UTC)
- AI-generated depictions of BLP subjects are often more "illustrative" than drawings/sketches of BLP subjects made by 'regular' editors like you and me. For example, compare the AI-generated image of Pope Francis and the user-created cartoon of Brigette Lundy-Paine. Neither image belongs on their respective bios, of course, but the AI-generated image is no more "misinformation" than the drawing. Some1 (talk) 00:05, 31 December 2024 (UTC)
- I would argue the opposite: neither are made up, but the first one, because of its realism, might mislead readers into thinking that it is an actual photograph, while the second one is clearly a drawing. Which makes the first one less illustrative, as it carries potential for misinformation, despite being technically more detailed. Chaotic Enby (talk · contribs) 00:31, 31 December 2024 (UTC)
- AI-generated images should always say "AI-generated image of [X]" in the image caption. No misleading readers that way. Some1 (talk) 00:36, 31 December 2024 (UTC)
- Yes, and they don't always do it, and we don't have a guideline about this either. The issue is, many people have many different proposals on how to deal with AI content, meaning we always end up with "no consensus" and no guidelines on use at all, even if most people are against it. Chaotic Enby (talk · contribs) 00:40, 31 December 2024 (UTC)
- "always end up with 'no consensus' and no guidelines on use at all, even if most people are against it" Agreed. Even a simple proposal to have image captions note whether an image is AI-generated will have editors wikilawyer over the definition of 'AI-generated.' I take back my recommendation of starting an RfC; we can already predict how that RfC will end. Some1 (talk) 02:28, 31 December 2024 (UTC)
- Of interest perhaps is this 2023 NOR noticeboard discussion on the use of drawn cartoon images in BLPs. Zaathras (talk) 22:38, 30 December 2024 (UTC)
- We should absolutely not be including any AI images in anything that is meant to convey facts (with the obvious exception of an AI image illustrating the concept of an AI image). I also don't think we should be encouraging AI-altered images -- the line between "regular" photo enhancement and what we'd call "AI alteration" is blurry, but we shouldn't want AI edits for the same reason we wouldn't want fake Photoshop composites.
- That said, I would assume good faith here: some of these images are probably being sourced from Commons, and Commons is dealing with a lot of undisclosed AI images. Gnomingstuff (talk) 23:31, 30 December 2024 (UTC)
- Why wouldn't we want "fake Photoshop composites"? A Composite photo can be very useful. I'd be sad if we banned c:Category:Chronophotographic photomontages. WhatamIdoing (talk) 06:40, 31 December 2024 (UTC)
- Sorry, should have been more clear -- composites that present themselves as the real thing, basically what people would use deepfakes for now. Gnomingstuff (talk) 20:20, 31 December 2024 (UTC)
- Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop through techniques like compositing. That line is that the diffusion model is reverse-engineering an image to match a text prompt from a pattern of semi-random static associated with similar text prompts. As such it's just automated glurge, at best it's only as good as the ability of the software to parse a text prompt and the ability of a prompter to draft sufficiently specific language. And absolutely none of that does anything to solve the "hallucination" problem. On the other hand, in photoshop, if I put in two layers both containing a bird on a transparent background, what I, the human making the image, sees is what the software outputs. Simonm223 (talk) 18:03, 15 January 2025 (UTC)
- "Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop"; others do not. If you want to ban or restrict one but not the other then you need to explain how the difference can be reliably determined, and how one is materially different to the other in ways other than your personal opinion. Thryduulf (talk) 18:45, 15 January 2025 (UTC)
- I don't think any guideline, let alone policy, would be beneficial and indeed on balance is more likely to be harmful. There are always only two questions that matter when determining whether we should use an image, and both are completely independent of whether the image is AI-generated or not:
- Can we use this image in this article? This depends on matters like copyright, fair use, whether the image depicts content that is legal for an organisation based in the United States to host, etc. Obviously if the answer is "no", then everything else is irrelevant, but as the law and WMF, Commons and en.wp policies stand today there exist some images in both categories we can use, and some images in both categories we cannot use.
- Does using this image in this article improve the article? This is relative to other options, one of which is always not using any image, but in many cases also involves considering alternative images that we can use. In the case of depictions of specific, non-hypothetical people or objects, one criterion we use to judge whether the image improves the article is whether it is an accurate representation of the subject. If it is not an accurate representation then it doesn't improve the article and thus should not be used, regardless of why it is inaccurate. If it is an accurate representation, then its use in the article will not be misrepresentative or misleading, regardless of whether it is or is not AI generated. It may or may not be the best option available, but if it is then it should be used regardless of whether it is or is not AI generated.
- The potential harm I mentioned above is twofold. Firstly, Wikipedia is, by definition, harmed when an image exists that we could use to improve an article but we do not use it in that article. A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article. The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.
- Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware. Thryduulf (talk) 00:52, 31 December 2024 (UTC)
- I agree with almost the entirety of your post with a caveat on whether something "is an accurate representation". We can tell whether non-photorealistic images are accurate by assessing whether the image accurately conveys the idea of what it is depicting. Photos do more than convey an idea, they convey the actual look of something. With AI generated images that are photorealistic it is difficult to assess whether they accurately convey the look of something (the shading might be illogical in subtle ways, there could be an extra finger that goes unnoticed, a mole gets erased), but readers might be deceived by the photo-like presentation into thinking they are looking at an actual photographic depiction of the subject which could differ significantly from the actual subject in ways that go unnoticed. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)
- "A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article." That's why I'm suggesting a guideline, not a policy. Guidelines are by design more flexible, and WP:IAR still does (and should) apply in edge cases.
"The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been." In that case, there is a licensing problem. AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
"Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware." In that case, it's mostly because of the ambiguity in wording: AI-edited images are very common, and are sometimes called "AI-generated", but here we should focus on actual prompt outputs, of the style "I asked a model to generate me an image of a BLP". Chaotic Enby (talk · contribs) 11:13, 31 December 2024 (UTC)
- Simply not having a completely unnecessary policy or guideline is infinitely better than relying on IAR - especially as this would have to be ignored every time it is relevant. When the AI image is not the best option (which obviously includes all the times it's unsuitable or inaccurate), existing policies, guidelines, practice and frankly common sense mean it won't be used. This means the only time the guideline would be relevant is when an AI image is the best option, and as we obviously should be using the best option in all cases we would need to ignore the guideline against using AI images.
- "AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated." The key words here are "supposed to be" and "shouldn't": editors absolutely will speculate that images are AI-generated and that the Commons labelling is incorrect. We are supposed to assume good faith, but this very discussion shows that when it comes to AI some editors simply do not do that.
- Regarding your final point, that might be what you are meaning but it is not what all other commenters mean when they want to exclude all AI images. Thryduulf (talk) 11:43, 31 December 2024 (UTC)
- For your first point, the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image), but the model likely doesn't have any available image either and most likely just made it up. As my proposal is essentially limited to that (I don't include AI-edited images, only those that are purely generated by a model), I don't think there will be many cases where IAR would be needed. Regarding your two other points, you are entirely correct, and while I am hoping for nuance on the AI issue, it is clear that some editors might not do that. For the record, I strongly disagree with a blanket ban of "AI images" (which includes both blatant "prompt in model" creations and a wide range of more subtle AI retouching tools) or anything like that. Chaotic Enby (talk · contribs) 11:49, 31 December 2024 (UTC)
- "the guideline is mostly to take care of the 'prompt fed in model' BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image)". There are only two possible scenarios regarding verifiability:
- The image is an accurate representation and we can verify that (e.g. by reference to non-free photos).
- Verifiability is no barrier to using the image, whether it is AI generated or not.
- If it is the best image available, and editors agree using it is better than not having an image, then it should be used whether it is AI generated or not.
- The image is either not an accurate representation, or we cannot verify whether it is or is not an accurate representation
- The only reasons we should ever use the image are:
- It has been the subject of notable commentary and we are presenting it in that context.
- The subject verifiably uses it as a representation of themselves (e.g. as an avatar or logo)
- This is already policy, whether the image is AI generated or not is completely irrelevant.
- You will note that in no circumstance is it relevant whether the image is AI generated or not. Thryduulf (talk) 13:27, 31 December 2024 (UTC)
- In your first scenario, there is the issue of an accurate AI-generated image misleading people into thinking it is an actual photograph of the person, especially as they are most often photorealistic. Even besides that, a mostly accurate representation can still introduce spurious details, and this can mislead readers as they do not know to what level it is actually accurate. This scenario doesn't really happen with drawings (which are clearly not photographs), and is very much a consequence of AI-generated photorealistic pictures being a thing. In the second scenario, if we cannot verify that it is not an accurate representation, it can be hard to remove the image with policy-based reasons, which is why a guideline will again be helpful. Having a single guideline against fully AI-generated images takes care of all of these scenarios, instead of having to make new specific guidelines for each case that emerges because of them. Chaotic Enby (talk · contribs) 13:52, 31 December 2024 (UTC)
- If the image is misleading or unverifiable it should not be used, regardless of why it is misleading or unverifiable. This is existing policy and we don't need anything specifically regarding AI to apply it - we just need consensus that the image is misleading or unverifiable. Whether it is or is not AI generated is completely irrelevant. Thryduulf (talk) 15:04, 31 December 2024 (UTC)
- "AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated." I mean... yes, we should? At the very least Commons should go hunting for mislabeled images -- that's the whole point of license review. The thing is that things are absolutely swamped over there and there are hundreds of thousands of images waiting for review of some kind. Gnomingstuff (talk) 20:35, 31 December 2024 (UTC)
- Yes, but that's a Commons thing. A guideline on English Wikipedia shouldn't decide what is to be done on Commons. Chaotic Enby (talk · contribs) 20:37, 31 December 2024 (UTC)
- I just mean that given the reality of the backlogs, there are going to be mislabeled images, and there are almost certainly going to be more of them over time. That's just how it is. We don't have control over that, but we do have control over what images go into articles, and if someone has legitimate concerns about an image being AI-generated, then they should be raising those. Gnomingstuff (talk) 20:45, 31 December 2024 (UTC)
- Support blanket ban on AI-generated images on Wikipedia. As others have highlighted above, this is not just a slippery slope but an outright downward spiral. We don't use AI-generated text and we shouldn't use AI-generated images: these aren't reliable and they're also WP:OR scraped from who knows what and where. Use only reliable material from reliable sources. As for the argument that 'software now has AI features', we all know that there's a huge difference between someone using a smoothing feature and someone generating an image from a prompt. :bloodofox: (talk) 03:12, 31 December 2024 (UTC)
- Reply, the section of WP:OR concerning images is WP:OI which states "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments". Using AI to generate an image only violates WP:OR if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images may be too simple to be copyrightable, for example AI-generated images of chemicals or mathematical structures potentially. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)
- Prompt generated images are unquestionably violation of WP:OR and WP:SYNTH: Type in your description and you get an image scraping who knows what and from who knows where, often Wikipedia. Wikipedia isn't an WP:RS. Get real. :bloodofox: (talk) 23:35, 1 January 2025 (UTC)
- "Unquestionably"? Let me question that, @Bloodofox.
;-)
- If an editor were to use an AI-based image-generating service and the prompt is something like this:
- "I want a stacked bar chart that shows the number of games won and lost by FC Bayern Munich each year. Use the team colors, which are red #DC052D, blue #0066B2, and black #000000. The data is:
- 2014–15: played 34 games, won 25, tied 4, lost 5
- 2015–16: played 34 games, won 28, tied 4, lost 2
- 2016–17: played 34 games, won 25, tied 7, lost 2
- 2017–18: played 34 games, won 27, tied 3, lost 4
- 2018–19: played 34 games, won 24, tied 6, lost 4
- 2019–20: played 34 games, won 26, tied 4, lost 4
- 2020–21: played 34 games, won 24, tied 6, lost 4
- 2021–22: played 34 games, won 24, tied 5, lost 5
- 2022–23: played 34 games, won 21, tied 8, lost 5
- 2023–24: played 34 games, won 23, tied 3, lost 8"
- I would expect it to produce something that is not a violation of either OR in general or OR's SYNTH section specifically. What would you expect, and why do you think it would be okay for me to put that data into a spreadsheet and upload a screenshot of the resulting bar chart, but you don't think it would be okay for me to put that same data into a image generator, get the same thing, and upload that?
- We must not mistake the tools for the output. Hand-crafted bad output is bad. AI-generated good output is good. WhatamIdoing (talk) 01:58, 2 January 2025 (UTC)
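(For comparison, the spreadsheet route described in the comment above can be scripted directly from the same data. This is a minimal sketch, assuming Python with matplotlib; the output filename and figure size are illustrative choices, not part of the prompt quoted above:)
```python
# Minimal sketch: the FC Bayern data from the prompt above rendered as a
# stacked bar chart with matplotlib, using the stated team colors.
import matplotlib.pyplot as plt

seasons = ["2014-15", "2015-16", "2016-17", "2017-18", "2018-19",
           "2019-20", "2020-21", "2021-22", "2022-23", "2023-24"]
won  = [25, 28, 25, 27, 24, 26, 24, 24, 21, 23]
tied = [4, 4, 7, 3, 6, 4, 6, 5, 8, 3]
lost = [5, 2, 2, 4, 4, 4, 4, 5, 5, 8]

fig, ax = plt.subplots(figsize=(10, 5))
# Stack the three outcome counts per season.
ax.bar(seasons, won, color="#DC052D", label="Won")
ax.bar(seasons, tied, bottom=won, color="#0066B2", label="Tied")
ax.bar(seasons, lost, bottom=[w + t for w, t in zip(won, tied)],
       color="#000000", label="Lost")
ax.set_ylabel("Games")
ax.set_title("FC Bayern Munich league results by season")
ax.legend()
fig.tight_layout()
fig.savefig("bayern_results.png")  # illustrative filename
```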
- Assuming you'd even get what you requested from the model without fiddling with the prompt for a while, this sort of 'but we can use it for graphs and charts' devil's advocate scenario isn't helpful. We're discussing generating images of people, places, and objects here, and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH. As for the charts and graphs, there are any number of ways to produce these. :bloodofox: (talk) 03:07, 2 January 2025 (UTC)
- "We're discussing generating images of people, places, and objects here" The proposal contains no such limitation. "and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH." Do you have a citation for that? Other people have explained better than I can that it is not necessarily true, and certainly not unquestionable. Thryduulf (talk) 03:14, 2 January 2025 (UTC)
- As you're well aware, these images are produced by scraping and synthesizing material from who knows what and where: it's ultimately pure WP:OR to produce these fake images and they're a straightforward product of synthesis of multiple sources (WP:SYNTH). Worse yet, these sources are unknown because training data is by no means transparent. Personally, I'm strongly for a total ban on generative AI on the site outside of articles on the topic of generative AI. Not only do I find this incredibly unethical, I believe it is intensely detrimental to Wikipedia, which is already a flailing and shrinking project. :bloodofox: (talk) 03:23, 2 January 2025 (UTC)
- So you think the lead image at Gisèle Pelicot is a SYNTH violation? Its (human) creator explicitly says "This is not done from one specific photo. As I usually do when I draw portraits of people that I can't see in person, I look at a lot of photos of them and then create my own rendition" in the image description, which sounds like "the product of synthesis of multiple sources" to me, and "these sources are unknown because" the images the artist looked at are not disclosed.
- A lot of my concern about blanket statements is the principle that what's sauce for the goose is sauce for the gander, too. If it's okay for a human to do something by hand, then it should be okay for a human using a semi-automated tool to do it, too.
- (Just in case you hadn't heard, the rumors that the editor base is shrinking have been false for over a decade now. Compared to when you created your account in mid-2005, we have about twice as many high-volume editors.) WhatamIdoing (talk) 06:47, 2 January 2025 (UTC)
- Review WP:SYNTH; your attempts at downplaying a prompt-generated image as "semi-automated" show the root of the problem: if you can't detect the difference between a human sketching from a reference and a machine scraping who-knows-what on the internet, you shouldn't be involved in this discussion. As for editor retention, this remains a serious problem on the site: while the site continues to grow (and becomes core fodder for AI-scraping) and becomes increasingly visible, editor retention continues to drop. :bloodofox: (talk) 09:33, 2 January 2025 (UTC)
- Please scroll down below SYNTH to the next section, titled "What is not original research", which begins with WP:OI, our policy on how images relate to OR. OR (including SYNTH) only applies to images with regard to whether they illustrate "unpublished ideas or arguments". It does not matter, for instance, if you synthesize an original depiction of something, so long as the idea of that thing is not original. Photos of Japan (talk) 09:55, 2 January 2025 (UTC)
- Yes, which explicitly states:
- It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Wikipedia:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.
- Using a machine to generate a fake image of someone is far beyond "manipulation" and it is certainly "false". Clearly we need explicit policies on AI-generated images of people or we wouldn't be having this discussion, but as it stands this clearly also falls under WP:SYNTH: there is zero question that this is a result of "synthesis of published material", even if the AI won't list what it used. Ultimately it's just a synthesis of a bunch of published composite images of who-knows-what (or who-knows-who?) that the AI has scraped together to produce a fake image of a person. :bloodofox: (talk) 10:07, 2 January 2025 (UTC)
- The latter images you describe should be SVG regardless. If there are models that can generate that, that seems totally fine since it can be semantically altered by hand. Any generation with photographic or "painterly" characteristics (e.g. generating something in the style of a painting or any other convention of visual art that communicates aesthetic particulars and not merely abstract visual particulars) seems totally unacceptable. Remsense ‥ 论 07:00, 31 December 2024 (UTC)
- @Bloodofox, here's an image I created. It illustrates the concept of 1% in an article. I made this myself, by typing 100 emojis and taking a screenshot. Do you really mean to say that if I'd done this with an image-generating AI tool, using a prompt like "Give me 100 dots in a 10 by 10 grid. Make 99 a dark color and 1, randomly placed, look like a baseball" that it would be hopelessly tainted, because AI is always bad? Or does your strongly worded statement mean something more moderate?
- I'd worry about photos of people (including dead people). I'd worry about photos of specific or unique objects that have to be accurate or they're worse than worthless (e.g., artwork, landmarks, maps). But I'm not worried about simple graphs and charts like this one, and I'm not worried about ordinary, everyday objects. If you want to use AI to generate a photorealistic image of a cookie, or a spoon, and the output you get genuinely looks like those objects, I'm not actually going to worry about it. WhatamIdoing (talk) 06:57, 31 December 2024 (UTC)
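As an aside on feasibility: an image like the one described is also trivial to produce deterministically. A minimal sketch, again assuming Python with matplotlib, with a plain highlighted dot standing in for the baseball:

```python
# Minimal sketch: a 10 x 10 grid of 100 dots, 99 dark and 1 highlighted,
# illustrating "1%" without any generative tooling.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 4))
highlight = (3, 6)  # arbitrary position for the one distinct dot
for x in range(10):
    for y in range(10):
        color = "#CC6600" if (x, y) == highlight else "#333333"
        ax.plot(x, y, "o", markersize=12, color=color)
ax.set_axis_off()
plt.savefig("one_percent.png")
```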
- As you know, Wikipedia has the unique factor of being entirely volunteer-run. Wikipedia has fewer and fewer editors and, long-term, we're seeing plummeting birth rates in the areas where most Wikipedia editors live. I wouldn't expect a wave of new ones aimed at keeping the site free of bullshit in the near future.
- In addition, the Wikimedia Foundation's harebrained continued effort to turn the site into its political cash machine is no doubt not helping either, harming the site's public perception and leading to fewer new editors.
- Over the course of decades (I've been here for around 20 years), it seems clear that the site will be negatively impacted by all this, especially in the face of generative AI.
- As a long-time editor who has frequently stumbled upon intense WP:PROFRINGE content, fended off armies of outside actors looking to shape the site into their ideological image (and who have sent me more than a few death threats), and who has identified large amounts of politically-motivated nonsense explicitly designed to fool non-experts in areas I know intimately well (such as folklore and historical linguistics topics), I think it needs to be said that the use of generative AI for content is especially dangerous because of its capability of fooling Wikipedia readers and Wikipedia editors alike.
- Wikipedia is written by people for people. We need to draw a line in the sand to keep from being flooded by increasingly accessible hoax-machines.
- A blanket ban on generative AI resolves this issue or at least hands us another tool with which to attempt to fight back. We don't need what few editors we have here wasting what little time they can give the project checking over an ocean of AI-generated slop: we need more material from reliable sources and better tools to fend off bad actors usable by our shrinking editor base (anyone at the Wikimedia Foundation listening?), not more waves of generative AI garbage. :bloodofox: (talk) 07:40, 31 December 2024 (UTC)
- A blanket ban doesn't actually resolve most of the issues, though, and introduces new ones. Bad usages of AI can already be dealt with by existing policy, and malicious users will ignore a blanket ban anyway. Meanwhile, a blanket ban would harm many legitimate usages of AI. For instance, the majority of professional translators (at least Japanese to English) incorporate AI (or similar tools) into their workflow to speed up translations. Just imagine a professional translator who uses AI to help generate rough drafts of foreign language Wikipedia articles before reviewing and correcting them, and another editor learning of this and mass-reverting them for breaking the blanket ban, ultimately causing them to leave. Many authors (particularly those with carpal tunnel) now use AI to control their voice-to-text (you can train the AI on how you want character names spelled, the formatting of dialogue and other text, etc.). A Wikipedia editor could train an AI to convert their voice into Wikipedia-formatted text. AI is subtly incorporated now into spell-checkers, grammar-checkers, photo editors, etc., in ways many people are not aware of. A blanket AI ban has the potential to cause many issues for a lot of people, without actually being that effective at dealing with malicious users. Photos of Japan (talk) 08:26, 31 December 2024 (UTC)
- I think this is the least convincing argument I've seen here yet: it contains the ol' 'there are AI features in programs now' while also attempting to invoke accessibility and a little bit of 'we must have machines to translate!'.
- As a translator myself, I can only say: oh, please. Generative AI is notoriously terrible at translating, and that's not likely to change, ever, beyond a very, very basic level. Due to the complexities of communication and little matters like nuance, all machine-translated material must be thoroughly checked and modified by, yes, human translators, who often encounter it spitting out complete bullshit scraped from who-knows-where (often Wikipedia itself).
- I get that this topic attracts a lot of 'but what if generative AI is better than humans?' from the utopian tech crowd but the reality is that anyone who needs a machine to invent text and visuals for whatever reason simply shouldn't be using it on Wikipedia.
- Either you, a human being, can contribute to the project or you can't. Slapping up a bunch of machine-generated (generative AI) visuals and text (much of it ultimately coming from Wikipedia in the first place!) isn't some kind of human substitute; it's just machine-regurgitated slop and is not helping the project.
- If people can't be confident that Wikipedia is made by humans, for humans, the project is finally on its way out. :bloodofox: (talk) 09:55, 31 December 2024 (UTC)
- I don't know how up to date you are on the current state of translation, but:
- In a previous State of the industry report for freelance translators, the word on TMs and CAT tools was to take them as "a given." A high percentage of translators use at least one CAT tool, and reports on the increased productivity and efficiency that can accompany their use are solid enough to indicate that, unless the kind of translation work you do by its very nature excludes the use of a CAT tool, you should be using one.
- Over three thousand full-time professional translators from around the world responded to the surveys, which were broken into a survey for CAT tool users and one for those who do not use any CAT tool at all.
- 88% of respondents use at least one CAT tool for at least some of their translation tasks.
- Of those using CAT tools, 83% use a CAT tool for most or all of their translation work.
- Mind you, traditionally CAT tools didn't use AI, but many do now, which only adds to potential sources of confusion in a blanket ban of AI. Photos of Japan (talk) 17:26, 31 December 2024 (UTC)
- You're barking up the wrong tree with the pro-generative-AI propaganda in response to me. I think we're all quite aware that generative AI tool integration is now common and that there's also a big effort to replace human translators — and anything that can be "written" — with machine-generated text. I'm also keenly aware that generative AI is absolutely horrible at translation and all of it must be thoroughly checked by humans, as you would know if you were a translator yourself. :bloodofox: (talk) 22:20, 31 December 2024 (UTC)
- "all machine translated material must be thoroughly checked and modified by, yes, human translators"
- You are just agreeing with me here.
- There are translators (particularly with non-creative works) who are using these tools to shift more towards reviewing. It should be up to them to decide what they think is the most efficient method for them. Photos of Japan (talk) 06:48, 1 January 2025 (UTC)
- And any translator who wants to use generative AI to attempt to translate can do so off the site. We're not here to check it for them. I strongly support a total ban on any generative AI used on the site exterior to articles on generative AI. :bloodofox: (talk) 11:09, 1 January 2025 (UTC)
- I wonder what you mean by "on the site". The question here is "Is it okay for an editor to go to a completely different website, generate an image all by themselves, upload it to Commons, and put it in a Wikipedia article?" The question here is not "Shall we put AI-generating buttons on Wikipedia's own website?" WhatamIdoing (talk) 02:27, 2 January 2025 (UTC)
- I'm talking about users slapping machine-translated and/or machine-generated nonsense all over the site, only for us to have to go behind them and not only check it but correct it. It takes users minutes to do this and it's already happening. It's the same for images. There are very few of us who volunteer here and our numbers are dwindling. We need to be spending our time improving the site rather than opening the gate as wide as possible to a flood of AI-generated/rendered garbage. The site has enough problems that compound every day without having to fend off users armed with hoax machines at every corner. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)
- Sure, we're all opposed to "nonsense", but my question is: What about when the machine happens to generate something that is not "nonsense"?
- I have some worries about AI content. I worry, for example, that they'll corrupt our sources. I worry that List of scholarly publishing stings will get dramatically longer, and also that even more undetected, unconfessed, unretracted papers will get published and believed to be true and trustworthy. I worry that academia will go back to a model in which personal connections are more important, because you really can't trust what's published. I worry that scientific journals will start refusing to publish research unless it comes from someone employed by a trusted institution, that is willing to put its reputation on the line by saying they have directly verified that the work described in the paper was actually performed to their standards, thus scuttling the citizen science movement and excluding people whose institutions are upset with them for other reasons (Oh, you thought you'd take a job elsewhere? Well, we refuse to certify the work you did for the last three years...).
- But I'm not worried about a Wikipedia editor saying "Hey AI, give me a diagram of a swingset" or "Make a chart for me out of the data I'm going to give you". In fact, if someone wants to pull the numbers out of Template:Wikipedia editor graph (100 per month), feed it to an AI, and replace the template's contents with an AI-generated image (until they finally fix the Graphs extension), I'd consider that helpful. WhatamIdoing (talk) 07:09, 2 January 2025 (UTC)
- Translators are not using generative AI for translation; the applicability of LLMs to regular translation is still in its infancy, and regardless it will not be implementing any generative faculties in its output, since that is the exact opposite of what translation is supposed to do. JoelleJay (talk) 02:57, 2 January 2025 (UTC)
- "Translators are not using generative AI for translation" - this entirely depends on what you mean by "generative". There are at least three contradictory understandings of the term in this one thread alone. Thryduulf (talk) 03:06, 2 January 2025 (UTC)
- Please, you can just go through the entire process with a simple prompt command now. The results are typically shit, but you can generate a ton of it quickly, which is perfect for flooding a site like this one — especially without a strong policy against it. I've found myself cleaning up tons of AI-generated (and, yes, AI-rendered) crap here and elsewhere, and now I'm even seeing AI-generated responses to my own comments. It's beyond ridiculous. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)
- Ban AI-generated images from all articles, and AI-generated anything from BLP and medical articles. This is the position that seems like it would permit all instances where there are plausible defenses that AI use does not fabricate or destroy facts intended to be communicated in the context of the article. That scrutiny is stricter with BLP and medical articles in general, and the restriction should be stricter to match. Remsense ‥ 论 06:53, 31 December 2024 (UTC)
- @Remsense, please see my comment immediately above. (We had an edit conflict.) Do you really mean "anything" and everything? Even a simple chart? WhatamIdoing (talk) 07:00, 31 December 2024 (UTC)
- I think my previous comment is operative: almost anything we can see AI used programmatically to generate should be SVG, not raster—even if it means we are embedding raster images in SVG to generate examples like the above. I do not know if there are models that can generate SVG, but if there are I happily state I have no problem with that. I think I'm at risk of seeming downright paranoid—but understanding how errors can propagate and go unnoticed in practice, if we're to trust a black box, we need to at least be able to check what the black box has done on a direct structural level. Remsense ‥ 论 07:02, 31 December 2024 (UTC)
- A quick web search indicates that there are generative AI programs that create SVG files. WhatamIdoing (talk) 07:16, 31 December 2024 (UTC)
- Makes perfect sense that there would be. Again, maybe I come off like a paranoid lunatic, but I really need either the ability to check what the thing is doing, or the ability to check and correct exactly what a black box has done. (In my estimation, if you want to know what procedures a person has done, theoretically you can ask them to get a fairly satisfactory result, and the pre-AI algorithms used in image manipulation are canonical and more or less transparent. Acknowledging human error etc., with AI there is not even the theoretical promise that one can be given a truthful account of how it decided to do what it did.) Remsense ‥ 论 07:18, 31 December 2024 (UTC)
- Like everyone said, there should be a de facto ban on using AI images in Wikipedia articles. They are effectively fake images pretending to be real, so they are out of step with the values of Wikipedia.--♦IanMacM♦ (talk to me) 08:20, 31 December 2024 (UTC)
- Except, not everybody has said that, because the majority of those of us who have refrained from hyperbole have pointed out that not all AI images are "fake images pretending to be real" (and those few that are can already be removed under existing policy). You might like to try actually reading the discussion before commenting further. Thryduulf (talk) 10:24, 31 December 2024 (UTC)
- @Remsense, exactly how much "ability to check what the thing is doing" do you need to be able to do, when the image shows 99 dots and 1 baseball, to illustrate the concept of 1%? If the image above said {{pd-algorithm}} instead of {{cc-by-sa-4.0}}, would you remove it from the article, because you just can't be sure that it shows 1%? WhatamIdoing (talk) 02:33, 2 January 2025 (UTC)
- The above is a useful example to an extent, but it is a toy example. I really do think it is required in general when we aren't dealing with media we ourselves are generating. Remsense ‥ 论 04:43, 2 January 2025 (UTC)
- How do we differentiate in policy between a "toy example" (that really would be used in an article) and "real" examples? Is it just that if I upload it, then you know me, and assume I've been responsible? WhatamIdoing (talk) 07:13, 2 January 2025 (UTC)
- There definitely exist generative AI for SVG files. Here's an example: I used generative AI in Adobe Illustrator to generate the SVG gear in File:Pinwheel scheduling.svg (from Pinwheel scheduling) before drawing by hand the more informative parts of the image. The gear drawing is not great (a real gear would have uniform tooth shape) but maybe the shading is better than I would have done by hand, giving an appearance of dimensionality and surface material while remaining deliberately stylized. Is that the sort of thing everyone here is trying to forbid?
- I can definitely see a case for forbidding AI-generated photorealistic images, especially of BLPs, but that's different from human oversight of AI in the generation of schematic images such as this one. —David Eppstein (talk) 01:15, 1 January 2025 (UTC)
- I'd include BDPs, too. I had to get a few AI-generated images of allegedly Haitian presidents deleted a while ago. The "paintings" were 100% fake, right down to the deformed medals on their military uniforms. An AI-generated "generic person" would be okay for some purposes. For a few purposes (e.g., illustrations of Obesity) it could even be preferable to have a fake "person" than a real one. But for individual/named people, it would be best not to have anything unless it definitely looks like the named person. WhatamIdoing (talk) 07:35, 2 January 2025 (UTC)
- I put it to you that our decision on this requires nuance. It's obviously insane to allow AI-generated images of, for example, Donald Trump, and it's obviously insane to ban AI-generated images from, for example, artificial intelligence art or Théâtre D'opéra Spatial.—S Marshall T/C 11:21, 31 December 2024 (UTC)
- Of course, that's why I'm only looking at specific cases and refrain from proposing a blanket ban on generative AI. Regarding Donald Trump, we do have one AI-generated image of him that is reasonable to allow (in Springfield pet-eating hoax), as the image itself was the subject of relevant commentary. Of course, this is different from using an AI-generated image to illustrate Donald Trump himself, which is what my proposal would recommend against. Chaotic Enby (talk · contribs) 11:32, 31 December 2024 (UTC)
- That's certainly true, but others are adopting much more extreme positions than you are, and it was the more extreme views that I wished to challenge.—S Marshall T/C 11:34, 31 December 2024 (UTC)
- Thanks for the (very reasoned) addition, I just wanted to make my original proposal clear. Chaotic Enby (talk · contribs) 11:43, 31 December 2024 (UTC)
- Going off WAID's example above, perhaps we should be trying to restrict the use of AI where image accuracy/precision is essential, as it would be for BLP and medical info, among other cases, but in cases where we are talking about generic or abstract concepts, like the 1% image, its use is reasonable. I would still say we should strongly prefer an image made by a human with high control of the output, but when accuracy is not as important as just the visualization, it's reasonable to turn to AI to help. Masem (t) 15:12, 31 December 2024 (UTC)
- Support total ban of AI imagery - There are probable copyright problems and veracity problems with anything coming out of a machine. In a world of manipulated reality, Wikipedia will be increasingly respected for holding a hard line against synthetic imagery. Carrite (talk) 15:39, 31 December 2024 (UTC)
- For both issues, AI vs not-AI is irrelevant. For copyright, if the image is a copyvio we can't use it regardless of whether it is AI or not; if it's not a copyvio then that's not a reason to use or not use the image. If the image is not verifiably accurate then we already can (and should) exclude it, regardless of whether it is AI or not. For more detail see the extensive discussion above, which you've either not read or ignored. Thryduulf (talk) 16:34, 31 December 2024 (UTC)
- Yes, we absolutely should ban the use of AI-generated images in these subjects (and beyond, but that's outside the scope of this discussion). AI should not be used to make up a simulation of a living person. It does not actually depict the person and may introduce errors or flaws that don't actually exist. The picture does not depict the real person because it is quite simply fake.
- Even worse would be using AI to develop medical images in articles in any way. The possibility for error there is unacceptable. Yes, humans make errors too, but there is a) someone with the responsibility to fix it and b) someone conscious who actually made the picture, rather than a black box that spat it out after looking at similar training data. Cremastra 🎄 u — c 🎄 20:08, 31 December 2024 (UTC)
- It's incredibly disheartening to see multiple otherwise intelligent editors who have apparently not read and/or not understood what has been said in the discussion but rather responding with what appear to be knee-jerk reactions to anti-AI scaremongering. The sky will not fall in, Wikipedia is not going to be taken over by AI, AI is not out to subvert Wikipedia, and we already can (and do) remove (and more commonly not add in the first place) false and misleading information/images. Thryduulf (talk) 20:31, 31 December 2024 (UTC)
- So what benefit does allowing AI images bring? We shouldn't be forced to decide these on a case-by-case basis.
- I'm sorry to dishearten you, but I still respectfully disagree with you. And I don't think this is "scaremongering" (although I admit that if it was, I would of course claim it wasn't). Cremastra 🎄 u — c 🎄 21:02, 31 December 2024 (UTC)
- Determining what benefits any image brings to Wikipedia can only be done on a case-by-case basis. It is literally impossible to know whether any image improves the encyclopaedia without knowing the context of which portion of what article it would illustrate, and what alternative images are and are not available for that same spot.
- The benefit of allowing AI images is that when an AI image is the best option for a given article we use it. We gain absolutely nothing by prohibiting using the best image available, indeed doing so would actively harm the project without bringing any benefits. AI images that are misleading, inaccurate or any of the other negative things any image can be are never the best option and so are never used - we don't need any policies or guidelines to tell us that. Thryduulf (talk) 21:43, 31 December 2024 (UTC)
- Support blanket ban on AI-generated text or images in articles, except in contexts where the AI-generated content is itself the subject of discussion (in a specific or general sense). Generative AI is fundamentally at odds with Wikipedia's mission of providing reliable information, because of its propensity to distort reality or make up information out of whole cloth. It has no place in our encyclopedia. —pythoncoder (talk | contribs) 21:34, 31 December 2024 (UTC)
- Support blanket ban on AI-generated images except in ABOUTSELF contexts. This is especially a problem given the preeminence Google gives to Wikipedia images in its image search. JoelleJay (talk) 22:49, 31 December 2024 (UTC)
- Ban across the board, except in articles which are actually about AI-generated imagery or the tools used to create them, or the image itself is the subject of substantial commentary within the article for some reason. Even in those cases, clearly indicating that the image is AI-generated should be required. Seraphimblade Talk to me 00:29, 1 January 2025 (UTC)
- Oppose blanket bans that would forbid the use of AI assistance in creating diagrams or other deliberately stylized content. Also oppose blanket bans that would forbid AI illustrations in articles about AI illustrations. I am not opposed to banning photorealistic AI-generated images in non-AI-generation contexts or banning AI-generated images from BLPs unless the image itself is specifically relevant to the subject of the BLP. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)
- Oppose blanket bans AI is just a new buzzword; for example, Apple phones now include "Apple Intelligence" as a standard feature. Does this mean that photographs taken using Apple phones will be inadmissible? That would be silly, because legacy technologies are already rife with issues of accuracy and verification. For example, there's an image on the main page right now. This purports to be a particular person ("The Father of Australia") but, if you check the image description, you find that it may have been his brother, and even the attribution to the artist is uncertain. AI features may help in exposing such existing weaknesses in our image use, and so we should be free to use them in an intelligent way. Andrew🐉(talk) 08:03, 1 January 2025 (UTC)
- So, you expect the AI, notoriously trained on Wikipedia (and whatever else is floating around on the internet), to correct Wikipedia where humans have failed... using the data it scraped from Wikipedia (and who knows where else)? :bloodofox: (talk) 11:12, 1 January 2025 (UTC)
- I tried using the Deep Research option of Gemini to assess the attribution of the Macquarie portrait. Its stated methodology seemed quite respectable and sensible.
The Opie Portrait of Lachlan Macquarie: An Examination of its Attribution: Methodology
To thoroughly investigate the attribution of the Opie portrait of Lachlan Macquarie, a comprehensive research process was undertaken. This involved several key steps: [...]
- It was quite transparent in listing and citing the sources that it used for its analysis. These included the Wikipedia image but if one didn't want that included, it would be easy to exclude it.
- So, AIs don't have to be inscrutable black boxes. They can have programmatic parameters like the existing bots and scripts that we use routinely on Wikipedia. Such power tools seem needed to deal with the large image backlogs that we have on Commons. Perhaps they could help by providing captions and categories where these don't exist.
- Andrew🐉(talk) 09:09, 2 January 2025 (UTC)
- They don't have to be black boxes, but they are by design: they exist in a legally dubious area and thus hide what they're scraping to avoid further legal problems. That's no secret. We know, for example, that Wikipedia is a core data set for likely most AIs today. They also notoriously and quite confidently spit out lies ("hallucinate") and frequently spit out total nonsense. Add to that that they're restricted to whatever is floating around on the internet or whatever other data set they've been fed (usually just more internet), and many specialist topics, like texts on ancient history and even standard reference works, are not accessible on the internet (despite Google's efforts). :bloodofox: (talk) 09:39, 2 January 2025 (UTC)
- While its stated methodology seems sensible, there's no evidence that it actually followed that methodology. The bullet points are pretty vague, and are pretty much the default methodologies used to examine actual historical works. Chaotic Enby (talk · contribs) 17:40, 2 January 2025 (UTC)
- Yes, there's evidence. As I stated above, the analysis is transparent and cites the sources that it used. And these all seem to check out rather than being invented. So, this level of AI goes beyond the first generation of LLM and addresses some of their weaknesses. I suppose that image generation is likewise being developed and improved and so we shouldn't rush to judgement while the technology is undergoing rapid development. Andrew🐉(talk) 17:28, 4 January 2025 (UTC)
- Oppose blanket ban: best of luck to editors here who hope to be able to ban an entirely undefined and largely undetectable procedure. The term 'AI' as commonly used is no more than a buzzword - what exactly would be banned? And how does it improve the encyclopedia to encourage editors to object to images not simply because they are inaccurate, or inappropriate for the article, but because they subjectively look too good? Will the image creator be quizzed on Commons about the tools they used? Will creators who are transparent about what they have created have their images deleted while those who keep silent don’t? Honestly, this whole discussion is going to seem hopelessly outdated within a year at most. It’s like when early calculators were banned in exams because they were ‘cheating’, forcing students to use slide rules. MichaelMaggs (talk) 12:52, 1 January 2025 (UTC)
- I am genuinely confused as to why this has turned into a discussion about a blanket ban, even though the original proposal exclusively focused on AI-generated images (the kind that is generated by an AI model from a prompt, which are already tagged on Commons, not regular images with AI enhancement or tools being used) and only in specific contexts. Not sure where the "subjectively look too good" thing even comes from, honestly. Chaotic Enby (talk · contribs) 12:58, 1 January 2025 (UTC)
- That just shows how ill-defined the whole area is. It seems you restrict the term 'AI-generated' to mean "images generated solely(?) from a text prompt". The question posed above has no such restriction. What a buzzword means is largely in the mind of the reader, of course, but to me and I think to many, 'AI-generated' means generated by AI. MichaelMaggs (talk) 13:15, 1 January 2025 (UTC)
- I used the text prompt example because that is the most common way to have an AI model generate an image, but I recognize that I should've clarified it better. There is definitely a distinction between an image being generated by AI (like the Laurence Boccolini example below) and an image being altered or retouched by AI (which includes many features integrated in smartphones today). I don't think it's a "buzzword" to say that there is a meaningful difference between an image being made up by an AI model and a preexisting image being altered in some way, and I am surprised that many people understand "AI-generated" as including the latter. Chaotic Enby (talk · contribs) 15:24, 1 January 2025 (UTC)
- Oppose as unenforceable. I just want you to imagine enforcing this policy against people who have not violated it. All this will do is allow Wikipedians who primarily contribute via text to accuse artists of using AI because they don't like the results to get their contributions taken down. I understand the impulse to oppose AI on principle, but the labor and aesthetic issues don't actually have anything to do with Wikipedia. If there is not actually a problem with the content conveyed by the image—for example, if the illustrator intentionally corrected any hallucinations—then someone objecting over AI is not discussing page content. If the image was not even made with AI, they are hallucinating based on prejudices that are irrelevant to the image. The bottom line is that images should be judged on their content, not how they were made. Besides all the policy-driven stuff, if Wikipedia's response to the creation of AI imaging tools is to crack down on all artistic contributions to Wikipedia (which seems to be the inevitable direction of these discussions), what does that say? Categorical bans of this kind are ill-advised and anti-illustrator. lethargilistic (talk) 15:41, 1 January 2025 (UTC)
- And the same applies to photography, of course. If in my photo of a garden I notice there is a distracting piece of paper on the lawn, nobody would worry if I used the old-style clone-stamp tool to remove it in Photoshop, adding new grass in its place (I'm assuming here that I don't change details of the actual landscape in any way). Now, though, Photoshop uses AI to achieve essentially the same result while making it simpler for the user. A large proportion of all processed photos will have at least some similar but essentially undetectable "generated AI" content, even if only a small area of grass. There is simply no way to enforce the proposed policy, short of banning all high-quality photography – which requires post-processing by design, and in which similar encyclopedically non-problematic edits are commonplace. MichaelMaggs (talk) 17:39, 1 January 2025 (UTC)
- Before anyone objects that my example is not "an image generated from a text prompt", note that there's no mention of such a restriction in the proposal we are discussing. Even if there were, it makes no difference. Photoshop can already generate photo-realistic areas from a text prompt. If such use is non-misleading and essentially undetectable, it's fine; if it changes the image in such a way as to make it misleading, inaccurate or non-encyclopedic in any way, it can be challenged on that basis. MichaelMaggs (talk) 17:58, 1 January 2025 (UTC)
- As I said previously, the text prompt is just an example, not a restriction of the proposal. The point is that you talk about editing an existing image (which is what you talk about, as you say "if it changes the image"), while I am talking about creating an image ex nihilo, which is what "generating" means. Chaotic Enby (talk · contribs) 18:05, 1 January 2025 (UTC)
- I'm talking about a photograph with AI-generated areas within it. This is commonplace, and is targeted by the proposal. Categorical bans of the type suggested are indeed ill-advised. MichaelMaggs (talk) 18:16, 1 January 2025 (UTC)
- Even if the ban is unenforceable, there are many editors who will choose to use AI images if they are allowed and just as cheerfully skip them if they are not allowed. That would mean the only people posting AI images are those who choose to break the rule and/or don't know about it. That would probably add up to many AI images not used. Darkfrog24 (talk) 22:51, 3 January 2025 (UTC)
- Support blanket ban because "AI" is a fundamentally unethical technology based on the exploitation of labor, the wanton destruction of the planetary environment, and the subversion of every value that an encyclopedia should stand for. ABOUTSELF-type exceptions for "AI" output that has already been generated might be permissible, in order to document the cursed time in which we live, but those exceptions are going to be rare. How many examples of Shrimp Jesus slop do we need? XOR'easter (talk) 23:30, 1 January 2025 (UTC)
- Support blanket ban - Primarily because of the "poisoning the well"/"dead internet" issues created by it. FOARP (talk) 14:30, 2 January 2025 (UTC)
- Support a blanket ban to assure some control over AI-creep in Wikipedia. And per discussion. Randy Kryn (talk) 10:50, 3 January 2025 (UTC)
- Support that WP:POLICY applies to images: images should be verifiable, neutral, and absent of original research. AI is just the latest quickest way to produce images that are original, unverifiable, and potentially biased. Is anyone in their right mind saying that we allow people to game our rules on WP:OR and WP:V by using images instead of text? Shooterwalker (talk) 17:04, 3 January 2025 (UTC)
- As an aside on this: in some cases Commons is being treated as a way of side-stepping WP:NOR and other restrictions. Stuff that would get deleted if it were written content on WP gets into WP as images posted on Commons. The worst examples are those conflict maps that are created from a bunch of Twitter posts (e.g. the Syrian civil war one). AI-generated imagery is another field where that appears to be happening. FOARP (talk) 10:43, 4 January 2025 (UTC)
- Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. I support an exception for when the article is about the image itself and that image is notable, such as the photograph of the black-and-blue/gold-and-white dress in The Dress, and/or examples of AI images in articles in which they are relevant. E.g. "here is what a hallucination is: count the fingers." Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)
- First, I think any guidance should avoid referring to specific technology, as that changes rapidly and is used for many different purposes. Second, assuming that the image in question has a suitable copyright status for use on Wikipedia, the key question is whether or not the reliability of the image has been established. If the intent of the image is to display 100 dots with 99 having the same appearance and 1 with a different appearance, then ordinary math skills are sufficient and so any Wikipedia editor can evaluate the reliability without performing original research. If the intent is to depict a likeness of a specific person, then there needs to be reliable sources indicating that the image is sufficiently accurate. This is the same for actual photographs, re-touched ones, drawings, hedcuts, and so forth. Typically this can be established by a reliable source using that image with a corresponding description or context. isaacl (talk) 17:59, 4 January 2025 (UTC)
- Support Blanket Ban on AI generated imagery per most of the discussion above. It's a very slippery slope. I might consider a very narrow exception for an AI generated image of a person that was specifically authorized or commissioned by the subject. -Ad Orientem (talk) 02:45, 5 January 2025 (UTC)
- Oppose blanket ban It is far too early to take an absolutist position, particularly when the potential is enormous. Wikipedia is already an image desert, and to reject something that is only at the cusp of development is unwise. scope_creepTalk 20:11, 5 January 2025 (UTC)
- Support blanket ban on AI-generated images except in ABOUTSELF contexts. An encyclopedia should not be using fake images. I do not believe that further nuance is necessary. LEPRICAVARK (talk) 22:44, 5 January 2025 (UTC)
- Support blanket ban as the general guideline, as accuracy, personal rights, and intellectual rights issues are very weighty here (as is disclosure to the reader). (I could see perhaps supporting adoption of a sub-guideline for ways to come to a broad consensus in individual use cases (carve-outs, except for BLPs) which addresses all the weighty issues on an individual use basis -- but that needs to be drafted and agreed to, and there is no good reason to wait to adopt the general ban in the meantime.) Alanscottwalker (talk) 15:32, 8 January 2025 (UTC)
- Support indefinite blanket ban except ABOUTSELF and simple abstract examples (such as the image of 99 dots above). In addition to all the issues raised above, including copyvio and creator consent issues, in cases of photorealistic images it may never be obvious to all readers exactly which elements of the image are guesswork. The cormorant picture at the head of the section reminded me of the first video of a horse in gallop, in 1878. Had AI been trained on paintings of horses instead of actual videos and used to "improve" said videos, we would've ended up with serious delusions about the horse's gait. We don't know what questions -- scientific or otherwise -- photography will be used to settle in the coming years, but we do know that consumer-grade photo AI has already been trained to intentionally fake detail to draw sales, such as on photos of the Moon[1][2]. I think it's unrealistic to require contributors to take photos with expensive cameras or specially-made apps, but Wikipedia should act to limit its exposure to this kind of technology as far as is feasible. Daß Wölf 20:57, 9 January 2025 (UTC)
- Support at least some sort of recommendation against the use of AI-generated imagery in non-AI contexts, except obviously where the topic of the article is specifically related to AI-generated imagery (Generative artificial intelligence, Springfield pet-eating hoax, AI slop, etc.). At the very least the consensus below about BLPs should be extended to all historical biographies, as all the examples I've seen (see WP:AIIMAGE) fail WP:IMAGERELEVANCE (failing to add anything to the sourced text) and serve only to mislead the reader. We include images for a reason, not just for decoration. I'm also reminded of the essay WP:PORTRAIT, and the distinction it makes between notable depictions of historical people (which can be useful to illustrate articles) and non-notable fictional portraits which in its (imo well argued) view "have no legitimate encyclopedic function whatsoever". Cakelot1 ☞️ talk 14:36, 14 January 2025 (UTC)
- Anything that fails WP:IMAGERELEVANCE can be, should be, and is, excluded from use already, likewise any images which "have no legitimate encyclopedic function whatsoever". This applies to AI and non-AI images equally and identically. Just as we don't have or need a policy or guideline specifically saying don't use irrelevant or otherwise non-encyclopaedic watercolour images in articles, we don't need any policy or guideline specifically calling out AI - because it would (as you demonstrate) need to carve out exceptions for when its use is relevant. Thryduulf (talk) 14:45, 14 January 2025 (UTC)
- That would be an easy change; just add a sentence like "AI-generated images of individual people are primarily decorative and should not be used". We should probably do that no matter what else is decided. WhatamIdoing (talk) 23:24, 14 January 2025 (UTC)
- Except that is both not true and irrelevant. Some AI-generated images of individual people are primarily decorative, but not all of them. If an image is purely decorative it shouldn't be used, regardless of whether it is AI-generated or not. Thryduulf (talk) 13:43, 15 January 2025 (UTC)
- Can you give an example of an AI-generated image of an individual person that is (a) not primarily decorative and also (b) not copied from the person's social media/own publications, and that (c) at least some editors think would be a good idea?
- "Hey, AI, please give me a realistic-looking photo of this person who died in the 12th century" is not it. "Hey, AI, we have no freely licensed photos of this celebrity, so please give me a line-art caricature" is not it. What is? WhatamIdoing (talk) 17:50, 15 January 2025 (UTC)
- Criteria (b) and (c) were not part of the statement I was responding to, and make it a very significantly different assertion. I will assume that you are not making motte-and-bailey arguments in bad faith, but the frequent fallacious argumentation in these AI discussions is getting tiresome.
- Even with the additional criteria it is still irrelevant - if no editor thinks an image is a good idea, then it won't be used in an article regardless of why they don't think it's a good idea. If some editors think an individual image is a good idea then it's obviously potentially encyclopaedic and needs to be judged on its merits (whether it is AI-generated is completely irrelevant to its encyclopaedic value). An image that the subject uses on their social media/own publications to identify themselves (for example as an avatar) is the perfect example of the type of image which is frequently used in articles about that individual. Thryduulf (talk) 18:56, 15 January 2025 (UTC)
BLPs
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Are AI-generated images (generated via text prompts; see also: text-to-image model) okay to use to depict BLP subjects? The Laurence Boccolini example was mentioned in the opening paragraph. The image was created using Grok / Aurora, "a text-to-image model developed by xAI, to generate images... As with other text-to-image models, Aurora generates images from natural language descriptions, called prompts." Some1 (talk) 12:34, 31 December 2024 (UTC)
03:58, January 3, 2025: Note that these images can either be photorealistic in style (such as the Laurence Boccolini example) or non-photorealistic in style (see the Germán Larrea Mota-Velasco example, which was generated using DALL-E, another text-to-image model). Some1 (talk) 11:10, 3 January 2025 (UTC)
Notified: Wikipedia talk:Biographies of living persons, Wikipedia talk:No original research, Wikipedia talk:Manual of Style/Images, Template:Centralized discussion -- Some1 (talk) 11:27, 2 January 2025 (UTC)
- No. I don't think they are at all, as, despite looking photorealistic, they are essentially just speculation about what the person might look like. A photorealistic image conveys the look of something up to the details, and giving a false impression of what the person looks like (or, at best, just guesswork) is actively counterproductive. (Edit 21:59, 31 December 2024 (UTC): clarified bolded !vote since everyone else did it) Chaotic Enby (talk · contribs) 12:46, 31 December 2024 (UTC)
- That AI generated image looks like Dick Cheney wearing a Laurence Boccolini suit. ScottishFinnishRadish (talk) 12:50, 31 December 2024 (UTC)
- There are plenty of non-free images of Laurence Boccolini with which this image can be compared. Assuming at least most of those are accurate representations of them (I've never heard of them before and have no other frame of reference) the image above is similar to but not an accurate representation of them (most obviously but probably least significantly, in none of the available images are they wearing that design of glasses). This means the image should not be used to identify them unless they use it to identify themselves. It should not be used elsewhere in the article unless it has been the subject of notable commentary. That it is an AI image makes absolutely no difference to any of this. Thryduulf (talk) 16:45, 31 December 2024 (UTC)
- No. Well, that was easy. They are fake images; they do not actually depict the person. They depict an AI-generated simulation of a person that may be inaccurate. Cremastra 🎄 u — c 🎄 20:00, 31 December 2024 (UTC)
- Even if the subject uses the image to identify themselves, the image is still fake. Cremastra (u — c) 19:17, 2 January 2025 (UTC)
- No, with the caveat that it's mostly on the grounds that we don't have enough information and when it comes to BLP we are required to exercise caution. If at some point in the future AI-generated photorealistic simulacrums of living people become mainstream with major newspapers and academic publishers it would be fair to revisit any restrictions, but on this I strongly believe that we should follow, not lead. Horse Eye's Back (talk) 20:37, 31 December 2024 (UTC)
- No. The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. —pythoncoder (talk | contribs) 21:30, 31 December 2024 (UTC)
- No except perhaps, maybe, if the subject explicitly is already using that image to represent themselves. But mostly no. -Kj cheetham (talk) 21:32, 31 December 2024 (UTC)
- Yes, when that image is an accurate representation and better than any available alternative, used by the subject to represent themselves, or the subject of notable commentary. However, as these are the exact requirements to use any image to represent a BLP subject this is already policy. Thryduulf (talk) 21:46, 31 December 2024 (UTC)
- How well can we determine how accurate a representation it is? Looking at the example above, I'd argue that the real Laurence Boccolini has a somewhat rounder/pointier chin, a wider mouth, and possibly different eye wrinkles, although the latter probably depends quite a lot on the facial expression.
- How accurate a representation a photorealistic AI image is is ultimately a matter of editor opinion. Cremastra 🎄 u — c 🎄 21:54, 31 December 2024 (UTC)
How well can we determine how accurate a representation it is?
in exactly the same way that we can determine whether a human-crafted image is an accurate representation. How accurate a representation any image is is ultimately a matter of editor opinion. Whether an image is AI or not is irrelevant. I agree the example image above is not sufficiently accurate, but we wouldn't ban photoshopped images because one example was not deemed accurate enough, because we are rational people who understand that one example is not representative of an entire class of images - at least when the subject is something other than AI. Thryduulf (talk) 23:54, 31 December 2024 (UTC)
- I think except in a few exceptional circumstances of actual complex restorations, human photoshopping is not going to change or distort a person's appearance in the same way an AI image would. Modifications done by a person who is paying attention to what they are doing and merely enhancing an image, by a person who is aware, while they are making changes, that they might be distorting the image and is, I can only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. Cremastra 🎄 u — c 🎄 00:14, 1 January 2025 (UTC)
- I'm guessing your filter bubble doesn't include Facetune and their notorious Filter (social media)#Beauty filter problems. WhatamIdoing (talk) 02:46, 2 January 2025 (UTC)
- A photo of a person can be connected to a specific time, place, and subject that existed. It can be compared to other images sharing one or more of those properties. A photo that was PhotoShopped is still either a generally faithful reproduction of a scene that existed, or has significant alterations that can still be attributed to a human or at least to a specific algorithm, e.g. filters. The artistic license of a painting can still be attributed to a human and doesn't run much risk of being misidentified as real, unless it's by Chuck Close et al. An AI-generated image cannot be connected to a particular scene that ever existed and cannot be attributable to a human's artistic license (and there is legal precedent that such images are not copyrightable to the prompter specifically because of this). Individual errors in a human-generated artwork are far more predictable, understandable, identifiable, traceable... than those in AI-generated images. We have innate assumptions when we encounter real images or artwork that are just not transferable. These are meaningful differences to the vast majority of people: according to a Getty poll, 87% of respondents want AI-generated art to at least be transparent, and 98% consider authentic images "pivotal in establishing trust". And even if you disagree with all that, can you not see the larger problem of AI images on Wikipedia getting propagated into generative AI corpora? JoelleJay (talk) 04:20, 2 January 2025 (UTC)
- I agree that our old assumptions don't hold true. I think the world will need new assumptions. We will probably have those in place in another decade or so.
- I think we're Wikipedia:Here to build an encyclopedia, not here to protect AI engines from ingesting AI-generated artwork. Figuring out what they should ingest is their problem, not mine. WhatamIdoing (talk) 07:40, 2 January 2025 (UTC)
- Absolutely no fake/AI images of people, photorealistic or otherwise. How is this even a question? These images are fake. Readers need to be able to trust Wikipedia, not navigate around whatever junk someone has created with a prompt and presented as somehow representative. This includes text. :bloodofox: (talk) 22:24, 31 December 2024 (UTC)
- No except for edge cases (mostly, if the image itself is notable enough to go into the article). Gnomingstuff (talk) 22:31, 31 December 2024 (UTC)
- Absolutely not, except for ABOUTSELF. "They're fine if they're accurate enough" is an obscenely naive stance. JoelleJay (talk) 23:06, 31 December 2024 (UTC)
- No with no exceptions. Carrite (talk) 23:54, 31 December 2024 (UTC)
- No. We don't permit falsifications in BLPs. Seraphimblade Talk to me 00:30, 1 January 2025 (UTC)
- For the requested clarification by Some1, no AI-generated images (except when the image itself is specifically discussed in the article, and even then it should not be the lead image and it should be clearly indicated that the image is AI-generated), no drawings, no nothing of that sort. Actual photographs of the subject, nothing else. Articles are not required to have images at all; no image whatsoever is preferable to something which is not an image of the person. Seraphimblade Talk to me 05:42, 3 January 2025 (UTC)
- No, but with exceptions. I could imagine a case where a specific AI-generated image has some direct relevance to the notability of the subject of a BLP. In such cases, it should be included, if it could be properly licensed. But I do oppose AI-generated images as portraits of BLP subjects. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)
- Since I was pinged on this point: when I wrote "I do oppose AI-generated images as portraits", I meant exactly that, including all AI-generated images, such as those in a sketchy or artistic style, not just the photorealistic ones. I am not opposed to certain uses of AI-generated images in BLPs when they are not the main portrait of the subject, for instance in diagrams (not depicting the subject) to illustrate some concept pioneered by the subject, or in case someone becomes famous for being the subject of an AI-generated image. —David Eppstein (talk) 05:41, 3 January 2025 (UTC)
- No, and no exceptions or do-overs. Better to have no images (or Stone-Age style cave paintings) than Frankenstein images, no matter how accurate or artistic. Akin to shopped manipulated photographs, they should have no room (or room service) at the WikiInn. Randy Kryn (talk) 01:34, 1 January 2025 (UTC)
- Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. Photograph manipulation) or excluding those that are not misleading or inaccurate. AI images are no different. Thryduulf (talk) 02:57, 1 January 2025 (UTC)
- Assuming we know. Assuming it's material. The infobox image in – and the only extant photo of – Blind Lemon Jefferson was "photoshopped" by a marketing team, maybe half a century before Adobe Photoshop was created. They wanted to show him wearing a necktie. I don't think that this level of manipulation is actually a problem. WhatamIdoing (talk) 07:44, 2 January 2025 (UTC)
- Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. Photograph manipulation) or excluding those that are not misleading or inaccurate. AI images are no different. Thryduulf (talk) 02:57, 1 January 2025 (UTC)
- Yes, so long as it is an accurate representation. Hawkeye7 (discuss) 03:40, 1 January 2025 (UTC)
- No not for BLPs. Traumnovelle (talk) 04:15, 1 January 2025 (UTC)
- No. Not at all relevant for pictures of people, as the accuracy is not enough and can misrepresent. Also (and I'm shocked that it seems no one has mentioned this), what about copyright issues? Who holds the copyright for an AI-generated image? The user who wrote the prompt? The creator(s) of the AI model? The creator(s) of the images in the database that the AI used to create the images? It sounds to me like such a clusterfuck of copyright issues that I don't understand how this is even a discussion. --SuperJew (talk) 07:10, 1 January 2025 (UTC)
- Under US law / the Copyright Office, machine-generated images, including those by AI, cannot be copyrighted. That also means that AI images aren't treated as derivative works.
What is still under legal concern is whether the use of bodies of copyrighted works, without any approval or license from the copyright holders, to train AI models is fair use or not. There are multiple court cases where this is the primary challenge, and none have reached a decision yet. Assuming the courts rule that there was no fair use, that would either require the entity that owns the AI to pay fines and ongoing licensing costs, or delete their trained model to start afresh with freely licensed works, but in either case, that would not impact how we'd use any resulting AI image from a copyright standpoint. — Masem (t) 14:29, 1 January 2025 (UTC)
- No, I'm in agreeance with Seraphimblade here. Whether we like it or not, the usage of a portrait on an article implies that it's just that, a portrait. It's incredibly disingenuous to users to represent an AI generated photo as truth. Doawk7 (talk) 09:32, 1 January 2025 (UTC)
- So you just said a portrait can be used because wikipedia tells you it's a portrait, and thus not a real photo. Can't AI be exactly the same? As long as we tell readers it is an AI representation? Heck, most AI looks closer to the real thing than any portrait. Fyunck(click) (talk) 10:07, 2 January 2025 (UTC)
- To clarify, I didn't mean "portrait" as in "painting," I meant it as "photo of person."
- However, I really want to stick to what you say at the end there:
Heck, most AI looks closer to the real thing than any portrait.
- That's exactly the problem: by looking close to the "real thing" it misleads users into believing a non-existent source of truth.
- Per the wording of the RfC of "depict BLP subjects," I don't think there would be any valid case to utilize AI images. I hold a strong No. Doawk7 (talk) 04:15, 3 January 2025 (UTC)
- No. We should not use AI-generated images for situations like this, they are basically just guesswork by a machine as Quark said and they can misinform readers as to what a person looks like. Plus, there's a big grey area regarding copyright. For an AI generator to know what somebody looks like, it has to have photos of that person in its dataset, so it's very possible that they can be considered derivative works or copyright violations. Using an AI image (derivative work) to get around the fact that we have no free images is just fair use with extra steps. Di (they-them) (talk) 19:33, 1 January 2025 (UTC)
- Maybe There was a prominent BLP image which we displayed on the main page recently. (right) This made me uneasy because it was an artistic impression created from photographs rather than life. And it was "colored digitally". Functionally, this seems to be exactly the same sort of thing as the Laurence Boccolini composite. The issue should not be whether there's a particular technology label involved but whether such creative composites and artists' impressions are acceptable as better than nothing. Andrew🐉(talk) 08:30, 1 January 2025 (UTC)
- Except it is clear to everyone that the illustration to the right is a sketch, a human rendition, while in the photorealistic image above, it is less clear. Cremastra (u — c) 14:18, 1 January 2025 (UTC)
- Except it says right below it "AI-generated image of Laurence Boccolini." How much more clear can it be when it say point-blank "AI-generated image." Fyunck(click) (talk) 10:12, 2 January 2025 (UTC)
- Commons descriptions do not appear on our articles. CMD (talk) 10:28, 2 January 2025 (UTC)
- People taking a quick glance at an infobox image that looks pretty like a photograph are not going to scrutinize commons tagging. Cremastra (u — c) 14:15, 2 January 2025 (UTC)
- Keep in mind that many AIs can produce works that match various styles, not just photographic quality. It is still possible for AI to produce something that looks like a watercolor or sketched drawing. — Masem (t) 14:33, 1 January 2025 (UTC)
- Yes, you're absolutely right. But so far photorealistic images have been the most common to illustrate articles (see Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts for some examples). Cremastra (u — c) 14:37, 1 January 2025 (UTC)
- Then push to ban photorealistic images, rather than pushing for a blanket ban that would also apply to obvious sketches. —David Eppstein (talk) 20:06, 1 January 2025 (UTC)
- Same thing I wrote above, but for "photoshopping" read "drawing": (Bold added for emphasis)
...human [illustration] is not going to change or distort a person's appearance in the same way an AI image would. [Drawings] done by a [competent] person who is paying attention to what they are doing [...] by person who is aware, while they are making [the drawing], that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator.
Cremastra (u — c) 20:56, 1 January 2025 (UTC)
- @Cremastra then why are you advocating for a ban on AI images rather than a ban on distorted images? Remember that with careful modifications by someone who is aware of what they are doing, AI images can be made more accurate. Why are you assuming that a human artist is trying to minimise the distortions but someone working with AI is not? Thryduulf (talk) 22:12, 1 January 2025 (UTC)
- I believe that AI-generated images are fundamentally misleading because they are a simulation by a machine rather than a drawing by a human. To quote pythoncoder above:
The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person.
Cremastra (u — c) 00:16, 2 January 2025 (UTC)
- Once again, your actual problem is not with AI but with misleading images, which can be, and are, already a violation of policy. Thryduulf (talk) 01:17, 2 January 2025 (UTC)
- I think all AI-generated images, except simple diagrams as WhatamIdoing points out above, are misleading. So yes, my problem is with misleading images, which includes all photorealistic images generated by AI, which is why I support this proposal for a blanket ban in BLPs and medical articles. Cremastra (u — c) 02:30, 2 January 2025 (UTC)
- To clarify, I'm willing to make an exception in this proposal for very simple geometric diagrams. Cremastra (u — c) 02:38, 2 January 2025 (UTC)
- Despite the fact that not all AI-generated images are misleading, not all misleading images are AI-generated and it is not always possible to tell whether an image is or is not AI-generated? Thryduulf (talk) 02:58, 2 January 2025 (UTC)
- Enforcement is a separate issue. Whether or not all (or the vast majority) of AI images are misleading is the subject of this dispute.
- I'm not going to mistreat the horse further, as we've each made our points and understand where the other stands. Cremastra (u — c) 15:30, 2 January 2025 (UTC)
- Even "simple diagrams" are not clear-cut. The process of AI-generating any image, no matter how simple, is still very complex and can easily follow any number of different paths to meet the prompt constraints. These paths through embedding space are black boxes and the likelihood they converge on the same output is going to vary wildly depending on the degrees of freedom in the prompt, the dimensionality of the embedding space, token corpus size, etc. The only thing the user can really change, other than switching between models, is the prompt, and at some point constructing a prompt that is guaranteed to yield the same result 100% of the time becomes a Borgesian exercise. This is in contrast with non-generative AI diagram-rendering software that follow very fixed, reproducible, known paths. JoelleJay (talk) 04:44, 2 January 2025 (UTC)
- Why does the path matter? If the output is correct it is correct no matter what route was taken to get there. If the output is incorrect it is incorrect no matter what route was taken to get there. If it is unknown or unknowable whether the output is correct or not that is true no matter what route was taken to get there. Thryduulf (talk) 04:48, 2 January 2025 (UTC)
- If I use BioRender or GraphPad to generate a figure, I can be confident that the output does not have errors that would misrepresent the underlying data. I don't have to verify that all 18,000 data points in a scatter plot exist in the correct XYZ positions because I know the method for rendering them is published and empirically validated. Other people can also be certain that the process of getting from my input to the product is accurate and reproducible, and could in theory reconstruct my raw data from it. AI-generated figures have no prescribed method of transforming input beyond what the prompt entails; therefore I additionally have to be confident in how precise my prompt is and confident that the training corpus for this procedure is so accurate that no error-producing paths exist (not to mention absolutely certain that there is no embedded contamination from prior prompts). Other people have all those concerns, and on top of that likely don't have access to the prompt or the raw data to validate the output, nor do they necessarily know how fastidious I am about my generative AI use. At least with a hand-drawn diagram viewers can directly transfer their trust in the author's knowledge and reliability to their presumptions about the diagram's accuracy. JoelleJay (talk) 05:40, 2 January 2025 (UTC)
- If you've got 18,000 data points, we are beyond the realm of "simple geometric diagrams". WhatamIdoing (talk) 07:47, 2 January 2025 (UTC)
- The original "simple geometric diagrams" comment was referring to your 100 dots image. I don't think increasing the dots materially changes the discussion beyond increasing the laboriousness of verifying the accuracy of the image. Photos of Japan (talk) 07:56, 2 January 2025 (UTC)
- Yes, but since "the laboriousness of verifying the accuracy of the image" is exactly what she doesn't want to undertake for 18,000 dots, then I think that's very relevant. WhatamIdoing (talk) 07:58, 2 January 2025 (UTC)
- And where is that cutoff supposed to be? 1000 dots? A single straight line? An atomic diagram? What is "simple" to someone unfamiliar with a topic may be more complex. And I don't want to count 100 dots either! JoelleJay (talk) 17:43, 2 January 2025 (UTC)
- Maybe you don't. But I know for certain that you can count 10 across, 10 down, and multiply those two numbers to get 100. That's what I did when I made the image, after all. WhatamIdoing (talk) 07:44, 3 January 2025 (UTC)
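To make the reproducibility contrast above concrete, here is a minimal sketch (illustrative only; the "denoising" loop is a hypothetical stand-in for a real diffusion sampler, not any editor's actual tooling). A fixed rendering step maps the same data to the same bytes every time, so anyone can re-derive and verify the output; a seed-dependent sampling step does not:

```python
import hashlib
import numpy as np

data = np.linspace(0, 1, 100)  # the "raw data" behind a simple diagram

def deterministic_render(points):
    # Fixed, documented transformation: same input -> same output, always.
    return points.round(6).tobytes()

def generative_sample(seed, steps=50):
    # Stand-in for a diffusion-style sampler: the output depends on the
    # random path taken through a latent space, not just on the "prompt".
    rng = np.random.default_rng(seed)
    latent = rng.standard_normal(100)
    for _ in range(steps):
        latent = latent + 0.1 * rng.standard_normal(100)  # one "denoising" step
    return latent.round(6).tobytes()

# Rendering the same data twice yields byte-identical output...
print(hashlib.sha256(deterministic_render(data)).hexdigest() ==
      hashlib.sha256(deterministic_render(data)).hexdigest())  # True

# ...while the same "prompt" under two different seeds yields different output.
print(generative_sample(1) == generative_sample(2))  # False
```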
- Comment: when you Google search someone (at least from the Chrome browser), often the link to the Wikipedia article includes a thumbnail of the lead photo as a preview. Even if the photo is labelled as an AI image in the article, people looking at the thumbnail from Google would be misled (if the image is chosen for the preview). Photos of Japan (talk) 09:39, 1 January 2025 (UTC)
- This is why we should not use inaccurate images, regardless of how the image was created. It has absolutely nothing to do with AI. Thryduulf (talk) 11:39, 1 January 2025 (UTC)
- Already opposed a blanket ban: It's unclear to me why we have a separate BLP subsection, as BLPs are already included in the main section above. Anyway, I expressed my views there. MichaelMaggs (talk)
- Some editors might oppose a blanket ban on all AI-generated images while at the same time being against using AI-generated images (created by using text prompts/text-to-image models) to depict living people. Some1 (talk) 14:32, 1 January 2025 (UTC)
- No. For now at least, let's not let the problems of AI intrude into BLP articles, which need to have the highest level of scrutiny to protect the person represented. Other areas on WP may benefit from AI image use, but let's keep it far out of BLP at this point. --Masem (t) 14:35, 1 January 2025 (UTC)
- I am not a fan of “banning” AI images completely… but I agree that BLPs require special handling. I look at AI imagery as being akin to a computer generated painting. In a BLP, we allow paintings of the subject, but we prefer photos over paintings (if available). So… we should prefer photos over AI imagery. That said, AI imagery is getting good enough that it can be mistaken for a photo… so… If an AI generated image is the only option (ie there is no photo available), then the caption should clearly indicate that we are using an AI generated image. And that image should be replaced as soon as possible with an actual photograph. Blueboar (talk) 14:56, 1 January 2025 (UTC)
- The issue with the latter is that Wikipedia images get picked up by Google and other search engines, where the caption isn't there anymore to add the context that a photorealistic image was AI-generated. Chaotic Enby (talk · contribs) 15:27, 1 January 2025 (UTC)
- We're here to build an encyclopedia, not to protect commercial search engine companies.
- I think my view aligns with Blueboar's (except that I find no firm preference for photos over classical portrait paintings): We shouldn't have inaccurate AI images of people (living or dead). But the day appears to be coming when AI will generate accurate ones, or at least ones that are close enough to accurate that we can't tell the difference unless the uploader voluntarily discloses that information. Once we can no longer tell the difference, what's the point in banning them? Images need to look like the thing being depicted. When we put a photorealistic image in an article, we could be said to be implicitly claiming that the image looks like whatever's being depicted. We are not necessarily warranting that the image was created through a specific process, but the image really does need to look like the subject. WhatamIdoing (talk) 03:12, 2 January 2025 (UTC)
- You are presuming that sufficient accuracy will prevent us from knowing whether someone is uploading an AI photo, but that is not the case. For instance, if someone uploads large amounts of "photos" of famous people, and can't account for how they got them (e.g. can't give a source where they scraped them from, or dates or any Exif metadata at all for when they were taken), then it will still be obvious that they are likely using AI. Photos of Japan (talk) 17:38, 3 January 2025 (UTC)
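On the metadata point, one quick screening signal is an upload's Exif block: camera photos normally carry capture metadata (timestamp, camera make/model), while AI renders typically carry none. A minimal sketch, assuming the Pillow library and a hypothetical file name:

```python
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("upload.jpg")  # hypothetical upload being screened
exif = img.getexif()

if not exif:
    # No Exif at all: consistent with (though not proof of) an AI render,
    # since re-encoding or privacy tools can also strip metadata.
    print("No Exif metadata found.")
else:
    for tag_id, value in exif.items():
        # e.g. DateTime, Make, Model for a genuine camera photo
        print(TAGS.get(tag_id, tag_id), value)
```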
- As another editor pointed out in their comment, there's the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet, especially on a site such as Wikipedia and especially on their own biography. WP:BLP says the bios
must be written conservatively and with regard for the subject's privacy.
Some1 (talk) 18:37, 3 January 2025 (UTC)
- Once we can no longer tell the difference, what's the point in banning them?
Sounds like a wolf in sheep's clothing to me. Just because the surface appeal of fake pictures gets better doesn't mean we should let the horse in. Cremastra (u — c) 18:47, 3 January 2025 (UTC)
- If there are no appropriately-licensed images of a person, then by definition any AI-generated image of them will be either a copyright infringement or a complete fantasy. JoelleJay (talk) 04:48, 2 January 2025 (UTC)
- Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant: If an image is a copyvio we can't use it and it is irrelevant why it is a copyvio. If an image is a "complete fantasy" then it is exactly as unusable as a complete fantasy generated by non-AI means, so again AI is irrelevant. I've had to explain this multiple times in this discussion, so read that for more detail and note the lack of refutation. Thryduulf (talk) 04:52, 2 January 2025 (UTC)
- But we can assume good faith that a human isn't blatantly copying something. We can't assume that from a generative model like Stability AI's, which has been shown to even copy the watermark from Getty's images. Photos of Japan (talk) 05:50, 2 January 2025 (UTC)
- Ooooh, I'm not sure that we can assume that humans aren't blatantly copying something. We can assume that they meant to be helpful, but that's not quite the same thing. WhatamIdoing (talk) 07:48, 2 January 2025 (UTC)
Oppose Yes. I echo my comments from the other day regarding BLP illustrations:
What this conversation is really circling around is banning entire skillsets from contributing to Wikipedia merely because some of us are afraid of AI images and some others of us want to engineer a convenient, half-baked, policy-level "consensus" to point to when they delete quality images from Wikipedia. [...] Every time someone generates text based on a source, they are doing some acceptable level of interpretation to extract facts or rephrase it around copyright law, and I don't think illustrations should be considered so severely differently as to justify a categorical ban. For instance, the Gisele Pelicot portrait is based on non-free photos of her. Once the illustration exists, it is trivial to compare it to non-free images to determine if it is an appropriate likeness, which it is. That's no different than judging contributed text's compliance with fact and copyright by referring to the source. It shouldn't be treated differently just because most Wikipedians contribute via text.
Additionally, [when I say "entire skillsets," I am not] referring to interpretive skillsets that synthesize new information like, random example, statistical analysis. Excluding those from Wikipedia is current practice and not controversial. Meanwhile, I think the ability to create images is more fundamental than that. It's not (inherently) synthesizing new information. A portrait of a person (alongside the other examples in this thread) contains verifiable information. It is current practice to allow them to fill the gaps where non-free photos can't. That should continue. Honestly, it should expand. lethargilistic (talk) 15:41, 1 January 2025 (UTC)
- Additionally, in direct response to "these images are fake": All illustrations of a subject could be called "fake" because they are not photographs. (Which can also be faked.) The standard for the inclusion of an illustration on Wikipedia has never been photorealism, medium, or previous publication in a RS. The standard is how adequately it reflects the facts which it claims to depict. If there is a better image that can be imported to Wikipedia via fair use or a license, then an image can be easily replaced. Until such a better image has been sourced, it is absolutely bewildering to me that we would even discuss removing images of people from their articles. What a person looked like is one of the most basic things that people want to know when they look someone up on Wikipedia. Including an image of almost any quality (yes, even a cartoon) is practically by definition an improvement to the article and addressing an important need. We should be encouraging artists to continue filling the gaps that non-free images cannot fill, not creating policies that will inevitably expand into more general prejudices against all new illustrations on Wikipedia. lethargilistic (talk) 15:59, 1 January 2025 (UTC)
- By "Oppose", I'm assuming your answer to the RfC question is "Yes". And this RfC is about using AI-generated images (generated via text prompts, see also: text-to-image model) to depict BLP subjects, not regarding human-created drawings/cartoons/sketches, etc. of BLPs. Some1 (talk) 16:09, 1 January 2025 (UTC)
- I've changed it to "yes" to reflect the reversed question. I think all of this is related because there is no coherent distinguishing point; AI can be used to create images in a variety of styles. These discussions have shown that a policy of banning AI images will be used against non-AI images of all kinds, so I think it's important to say these kinds of things now. lethargilistic (talk) 16:29, 1 January 2025 (UTC)
- Photorealistic images scraped from who knows where from who knows what sources are without question simply fake photographs and also clear WP:OR and outright WP:SYNTH. There's no two ways about it. Articles do not require images: An article with some Frankenstein-ed image scraped from who knows what, where, and when that you "created" from a prompt is not an improvement over having no image at all. If we can't provide a quality image (like something you didn't cook up from a prompt) then people can find quality, non-fake images elsewhere. :bloodofox: (talk) 23:39, 1 January 2025 (UTC)
- I really encourage you to read the discussion I linked before because it is on the WP:NOR talk page. Images like these do not inherently include either OR or SYNTH, and the arguments that they do cannot be distinguished from any other user-generated image content. But, briefly, I never said articles required images, and this is not about what articles require. It is about improvements to the articles. Including a relevant picture where none exists is almost always an improvement, especially for subjects like people. Your disdain for the method the person used to make an image is irrelevant to whether the content of the image is actually verifiable, and the only thing we ought to care about is the content. lethargilistic (talk) 03:21, 2 January 2025 (UTC)
- Images like these are absolutely nothing more than synthesis in the purest sense of the word and are clearly a violation of WP:SYNTH: Again, you have no idea what data was used to generate these images and you're going to have a very hard time convincing anyone to describe them as anything other than outright fakes.
- A reminder that WP:SYNTH shuts down attempts at manipulation of images ("It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Wikipedia:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.") and generating a photorealistic image (from who knows what!) is far beyond that.
- Fake images of people do not improve our articles in any way and only erode reader trust. What's next, an argument for the fake sources LLMs also love to "hallucinate"? :bloodofox: (talk) 03:37, 2 January 2025 (UTC)
- So, if you review the first sentence of SYNTH, you'll see it has no special relevance to this discussion:
Do not combine material from multiple sources to state or imply a conclusion not explicitly stated by any of the sources.
My primary example has been a picture of a person; what a person looks like is verifiable by comparing the image to non-free images that cannot be used on Wikipedia. If the image resembles the person, it is not SYNTH. An illustration of a person created and intended to look like that person is not a manipulation. The training data used to make the AI is irrelevant to whether the image in fact resembles the person. You should also review WP:NOTSYNTH because SYNTH is not a policy; NOR is the policy: "If a putative SYNTH doesn't constitute original research, then it doesn't constitute SYNTH."
Additionally, not all synthesis is even SYNTH. A categorical rule against AI cannot be justified by SYNTH because it does not categorically apply to all use cases of AI. To do so would be illogical on top of ill-advised. lethargilistic (talk) 08:08, 2 January 2025 (UTC)
- "training data used to make the AI is irrelevant" — spoken like a true AI evangelist! Sorry, 'good enough' photorealism is still just synthetic slop, a fake image presented as real of a human being. A fake image of someone generated from who-knows-what that 'resembles' an article's subject is about as WP:SYNTH as it gets. Yikes. As for the attempts to pass off prompt-generated photorealistic fakes of people as somehow the same as someone's illustration, you're completely wasting your time. :bloodofox: (talk) 09:44, 2 January 2025 (UTC)
- NOR is a content policy and SYNTH is content guidance within NOR. Because you have admitted that this is not about the content for you, NOR and SYNTH are irrelevant to your argument, which boils down to WP:IDONTLIKEIT and, now, inaccurate personal attacks. Continuing this discussion between us would be pointless. lethargilistic (talk) 09:52, 2 January 2025 (UTC)
- This is in fact entirely about content (why the hell else would I bother?) but it is true that I also dismissed your pro-AI 'it's just like a human drawing a picture!' as outright nonsense a while back. Good luck convincing anyone else with that line - it didn't work here. :bloodofox: (talk) 09:59, 2 January 2025 (UTC)
- Maybe: there is an implicit assumption with this RFC that an AI generated image would be photorealistic. There hasn't been any discussion of an AI generated sketch. If you asked an AI to generate a sketch (that clearly looked like a sketch, similar to the Gisèle Pelicot example) then I would potentially be ok with it. Photos of Japan (talk) 18:14, 1 January 2025 (UTC)
- That's an interesting thought to consider. At the same time, I worry about (well-intentioned) editors inundating image-less BLP articles with AI-generated images in the style of cartoons/sketches (if only photorealistic ones are prohibited) etc. At least requiring a human to draw/paint/whatever creates a barrier to entry; these AI-generated images can be created in under a minute using these text-to-image models. Editors are already wary about human-created cartoon portraits (see the NORN discussion), now they'll be tasked with dealing with AI-generated ones in BLP articles. Some1 (talk) 20:28, 1 January 2025 (UTC)
- It sounds like your problem is not with AI but with cartoon/sketch images in BLP articles, so AI is once again completely irrelevant. Thryduulf (talk) 22:14, 1 January 2025 (UTC)
- That is a good concern you brought up. There is a possibility of the spamming of low-quality AI-generated images which would be laborious to discuss on a case-by-case basis but easy to generate. At the same time, though, that is a possibility, not yet an actuality, and WP:CREEP states that new policies should address current problems rather than hypothetical concerns. Photos of Japan (talk) 22:16, 1 January 2025 (UTC)
- Easy no for me. I am not against the use of AI images wholesale, but I do think that using AI to represent an existent thing such as a person or a place is too far. Even a tag wouldn't be enough for me. Cessaune [talk] 19:05, 1 January 2025 (UTC)
- No obviously, per previous discussions about cartoonish drawn images in BLPs. Same issue here as there, it is essentially original research and misrepresentation of a living person's likeness. Zaathras (talk) 22:19, 1 January 2025 (UTC)
- No to photorealistic, no to cartoonish... this is not a hard choice. The idea that "this has nothing to do with AI" when "AI" magnifies the problem to stupendous proportions is just not tenable. XOR'easter (talk) 23:36, 1 January 2025 (UTC)
- While AI might "amplify" the thing you dislike, that does not make AI the problem. The problem is whatever underlying thing is being amplified. Thryduulf (talk) 01:16, 2 January 2025 (UTC)
- The thing that amplifies the problem is necessarily a problem. XOR'easter (talk) 02:57, 2 January 2025 (UTC)
- That is arguable, but banning the amplifier does not do anything to solve the problem. In this case, banning the amplifier would cause multiple other problems that nobody supporting this proposal has even attempted to address, let alone mitigate. Thryduulf (talk) 03:04, 2 January 2025 (UTC)
- No for all people, per Chaotic Enby. Nikkimaria (talk) 03:23, 2 January 2025 (UTC) Add: no to any AI-generated images, whether photorealistic or not. Nikkimaria (talk) 04:00, 3 January 2025 (UTC)
- No - We should not be hosting faked images (except as notable fakes). We should also not be hosting copyvios (
"Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant"
is just totally wrong - we should be steering clear of copyvios, and if the issue is unsettled then we shouldn't use them until it is).
- If people upload faked images to WP or Commons the response should be as it is now. The fact that fakes are becoming harder to detect simply from looking at them hardly affects this - we simply confirm when the picture was supposed to have been taken and examine the plausibility of it from there. FOARP (talk) 14:39, 2 January 2025 (UTC)
we should be steering clear of copyvio
we do - if an image is a copyright violation it gets deleted, regardless of why it is a copyright violation. What we do not do is ban using images that are not copyright violations because they are copyright violations. Currently the WMF lawyers and all the people on Commons who know more about copyright than I do say that at least some AI images are legally acceptable for us to host and use. If you want to argue that, then go ahead, but it is not relevant to this discussion.
if people upload faked images [...] the response should be as it is now
in other words you are saying that the problem is faked images not AI, and that current policies are entirely adequate to deal with the problem of faked images. So we don't need any specific rules for AI images - especially given that not all AI images are fakes. Thryduulf (talk) 15:14, 2 January 2025 (UTC)
- The idea that
current policies are entirely adequate
is like saying that a lab shouldn't have specific rules about wearing eye protection when it already has a poster hanging on the wall that says "don't hurt yourself". XOR'easter (talk) 18:36, 2 January 2025 (UTC)
- I rely on one of those rotating shaft warnings up in my workshop at home. I figure if that doesn't keep me safe, nothing will. ScottishFinnishRadish (talk) 18:41, 2 January 2025 (UTC)
- "
in other words you are saying that the problem is faked images not AI
" - AI generated images *are* fakes. This is merely confirming that for the avoidance of doubt. - "
at least some AI images are legally acceptable for us
" - Until they decide which ones that isn't much help. FOARP (talk) 19:05, 2 January 2025 (UTC)- Yes – what FOARP said. AI-generated images are fakes and are misleading. Cremastra (u — c) 19:15, 2 January 2025 (UTC)
- "
- Those specific rules exist because generic warnings have proven not to be sufficient. Nobody has presented any evidence that the current policies are not sufficient, indeed quite the contrary. Thryduulf (talk) 19:05, 2 January 2025 (UTC)
- No! This would be a massive can of worms; perhaps, however, we wish to cause problems in the new year. JuxtaposedJacob (talk) | :) | he/him | 15:00, 2 January 2025 (UTC)
- Noting that I think that no AI-generated images are acceptable in BLP articles, regardless of whether they are photorealistic or not. JuxtaposedJacob (talk) | :) | he/him | 15:40, 3 January 2025 (UTC)
- No, unless the AI image has encyclopedic significance beyond "depicts a notable person". AI images, if created by editors for the purpose of inclusion in Wikipedia, convey little reliable information about the person they depict, and the ways in which the model works are opaque enough to most people as to raise verifiability concerns. ModernDayTrilobite (talk • contribs) 15:25, 2 January 2025 (UTC)
- To clarify, do you object to uses of an AI image in a BLP when the subject uses that image for self-identification? I presume that AI images that have been the subject of notable discussion are an example of "significance beyond depict[ing] a notable person"? Thryduulf (talk) 15:54, 2 January 2025 (UTC)
- If the subject uses the image for self-identification, I'd be fine with it - I think that'd be analogous to situations such as "cartoonist represented by a stylized self-portrait", which definitely has some precedent in articles like Al Capp. I agree with your second sentence as well; if there's notable discussion around a particular AI image, I think it would be reasonable to include that image on Wikipedia. ModernDayTrilobite (talk • contribs) 19:13, 2 January 2025 (UTC)
- To clarify, do you object to uses of an AI image in a BLP when the subject uses that image for self-identification? I presume that AI images that have been the subject of notable discussion are an example of "significance beyond depict[ing] a notable person"? Thryduulf (talk) 15:54, 2 January 2025 (UTC)
- No, with obvious exceptions, including if the subject themself uses the image as their representation, or if the image is notable itself. Not including the lack of a free alternative; if there is no free alternative... where did the AI find data to build an image... non-free too. Not including images generated by WP editors (that's kind of original research...) - Nabla (talk) 18:02, 2 January 2025 (UTC)
- Maybe. I think the question is unfair as it is illustrated with what appears to be a photo of the subject but isn't. People are then getting upset that they've been misled. As others note, there are copyright concerns with AI reproducing copyrighted works that in turn make an image that is potentially legally unusable. But that is more a matter for Commons than for Wikipedia. As many have noted, a sketch or painting never claims to be an accurate depiction of a person, and I don't care if that sketch or painting was done by hand or an AI prompt. I strongly ask Some1 to abort the RFC. You've asked people to give a yes/no vote to what is a more complex issue. A further problem with the example used is the unfortunate prejudice on Wikipedia against user-generated content. While the text-generated AI of today is crude and random, there will come a point where many professionally published photos illustrating subjects, including people, are AI generated. Even today, your smartphone can create a groupshot where everyone is smiling and looking at the camera. It was "trained" on the 50 images it quickly took and responded to the built-in "text prompt" of "create a montage of these photos such that everyone is smiling and looking at the camera". This vote is a knee-jerk reaction to content that is best addressed by some other measure (such as that it is a misleading image). And a good example of asking people to vote way too early, when the issues haven't been thought through. -- Colin°Talk 18:17, 2 January 2025 (UTC)
- No This would very likely set a dangerous precedent. The only exception I think should be if the image itself is notable. If we move forward with AI images, especially for BLPs, it would only open up a whole slew of regulations and RfCs to keep them in check. Better no image than some digital multiverse version of someone that is "basically" them but not really. Not to mention the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet. Tepkunset (talk) 18:31, 2 January 2025 (UTC)
- No. LLMs don't generate answers, they generate things that look like answers, but aren't; a lot of the time, that's good enough, but sometimes it very much isn't. It's the same issue for text-to-image models: they don't generate photos of people, they generate things that look like photos. Using them on BLPs is unacceptable. DS (talk) 19:30, 2 January 2025 (UTC)
- No. I would be pissed if the top picture of me on Google was AI-generated. I just don't think it's moral for living people. The exceptions given above by others are okay, such as if the subject uses the picture themselves or if the picture is notable (with context given). win8x (talk) 19:56, 2 January 2025 (UTC)
- No. Uploading alone, although mostly a Commons issue, would already be a problem for me and may have personality rights issues. Illustrating an article with a fake photo (or drawing) of a living person, even if it is labeled as such, would not be acceptable. For example, it could end up being shown by search engines or when hovering over a Wikipedia link, without the disclaimer. ~ ToBeFree (talk) 23:54, 2 January 2025 (UTC)
- I was going to say no... but we allow paintings as portraits in BLPs. What's so different between an AI generated image, and a painting? Arguments above say the depiction may not be accurate, but the same is true of some paintings, right? (and conversely, not true of other paintings) ProcrastinatingReader (talk) 00:48, 3 January 2025 (UTC)
- A painting is clearly a painting; as such, the viewer knows that it is not an accurate representation of a particular reality. An AI-generated image made to look exactly like a photo looks like a photo but is not. DS (talk) 02:44, 3 January 2025 (UTC)
- Not all paintings are clearly paintings. Not all AI-generated images are made to look like photographs. Not all AI-generated images made to look like photos do actually look like photos. This proposal makes no distinction. Thryduulf (talk) 02:55, 3 January 2025 (UTC)
- Not to mention, hyper-realism is a style an artist may use in virtually any medium. Colored pencils can be used to make extremely realistic portraits. If Wikipedia would accept an analog substitute like a painting, there's no reason Wikipedia shouldn't accept an equivalent painting made with digital tools, and there's no reason Wikipedia shouldn't accept an equivalent painting made with AI. That is, one where any obvious defects have been edited out and what remains is a straightforward picture of the subject. lethargilistic (talk) 03:45, 3 January 2025 (UTC)
- For the record (and for any media watching), while I personally find it fascinating that a few editors here are spending a substantial amount of time (in the face of an overwhelming 'absolutely not' consensus no less) attempting to convince others that computer-generated (that is, faked) photos of human article subjects are somehow a good thing, I also find it interesting that these editors seem to express absolutely no concern for the intensely negative reaction they're already seeing from their fellow editors and seem totally unconcerned about the inevitable trust drop we'd experience from Wikipedia readers when they would encounter fake photos on our BLP articles especially. :bloodofox: (talk) 03:54, 3 January 2025 (UTC)
- Wikipedia's reputation would not be affected positively or negatively by expanding the current-albeit-sparse use of illustrations to depict subjects that do not have available pictures. In all my writing about this over the last few days, you are the only one who has said anything negative about me as a person or, really, my arguments themselves. As loath as I am to cite it, WP:AGF means assuming that people you disagree with are not trying to hurt Wikipedia. Thryduulf, I, and others have explained in detail why we think our ultimate ideas are explicit benefits to Wikipedia and why our opposition to these immediate proposals comes from a desire to prevent harm to Wikipedia. I suggest taking a break to reflect on that, matey. lethargilistic (talk) 04:09, 3 January 2025 (UTC)
- Look, I don't know if you've been living under a rock or what for the past few years but the reality is that people hate AI images and dumping a ton of AI/fake images on Wikipedia, a place people go for real information and often trust, inevitably leads to a huge trust issue, something Wikipedia is increasingly suffering from already. This is especially a problem when they're intended to represent living people (!). I'll leave it to you to dig up the bazillion controversies that have arisen and continue to arise since companies worldwide have discovered that they can now replace human artists with 'AI art' produced by "prompt engineers" but you can't possibly expect us to ignore that reality when discussing these matters. :bloodofox: (talk) 04:55, 3 January 2025 (UTC)
- Those trust issues are born from the publication of hallucinated information. I have only said that it should be OK to use an image on Wikipedia when it contains only verifiable information, which is the same standard we apply to text. That standard is and ought to be applied independently of the way the initial version of an image was created. lethargilistic (talk) 06:10, 3 January 2025 (UTC)
- To my eye, the distinction between AI images and paintings here is less a question of medium and more of verifiability: the paintings we use (or at least the ones I can remember) are significant paintings that have been acknowledged in sources as being reasonable representations of a given person. By contrast, a purpose-generated AI image would be more akin to me painting a portrait of somebody here and now and trying to stick that on their article. The image could be a faithful representation (unlikely, given my lack of painting skills, but let's not get lost in the metaphor), but if my painting hasn't been discussed anywhere besides Wikipedia, then it's potentially OR or UNDUE to enshrine it in mainspace as an encyclopedic image. ModernDayTrilobite (talk • contribs) 05:57, 3 January 2025 (UTC)
- An image contains a collection of facts, and those facts need to be verifiable just like any other information posted on Wikipedia. An image that verifiably resembles a subject as it is depicted in reliable sources is categorically not OR. Discussion in other sources is not universally relevant; we don't restrict ourselves to only previously-published images. If we did that, Wikipedia would have very few images. lethargilistic (talk) 06:18, 3 January 2025 (UTC)
- Verifiable how? Only by the editor themselves comparing to a real photo (which was probably used by the LLM to create the image…).
- These things are fakes. The analysis stops there. FOARP (talk) 10:48, 4 January 2025 (UTC)
- Verifiable by comparing them to a reliable source. Exactly the same as what we do with text. There is no coherent reason to treat user-generated images differently than user-generated text, and the universalist tenor of this discussion has damaging implications for all user-generated images regardless of whether they were created with AI. Honestly, I rarely make arguments like this one, but I think it could show some intuition from another perspective: Imagine it's 2002 and Wikipedia is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures. The text editors say the artists cannot contribute ANYTHING to Wikipedia because their images that have not been previously published are not verifiable. That is a double-standard that privileges the contributions of text-editors simply because most users are text-editors and they are used to verifying text; that is not a principled reason to treat text and images differently. Moreover, that is simply not what happened—the opposite happened, and images are treated as verifiable based on their contents just like text because that's a common-sense reading of the rule. It would have been madness if images had been treated differently. And yet that is essentially the fundamentalist position of people who are extending their opposition to AI with arguments that apply to all images. If they are arguing verifiability seriously at all, they are pretending that the sort of degenerate situation I just described already exists when the opposite consensus has been reached consistently for years. In the related NOR thread, they even tried to say Wikipedians had "turned a blind eye" to these image issues as if negatively characterizing those decisions would invalidate the fact that those decisions were consensus. The motivated reasoning of these discussions has been as blatant as that.
At the bottom of this dispute, I take issue with trying to alter the rules in a way that creates a new double-standard within verifiability that applies to all images but not text. That's especially upsetting when (despite my and others' best efforts) so many of us are still focusing SOLELY on their hatred for AI rather than considering the obvious second-order consequences for user-generated images as a whole.
Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake." The issue has always been verifiability, not provenance or falsity. Sometimes, IMO, that has led to disaster and Wikipedia saying things I know to be factually untrue despite the contents of reliable sources. But that is the policy. We compare the contents of Wikipedia to reliable sources, and the contents of Wikipedia are considered verifiable if they cohere.
I ask again: If Wikipedia's response to the creation of AI imaging tools is to crack down on all artistic contributions to Wikipedia (which seems to be the inevitable direction of these discussions), what does that say? If our negative response to AI tools is to limit what humans can do on Wikipedia, what does that say? Are we taking a stand for human achievements, or is this a very heated discussion of cutting off our nose to save our face? lethargilistic (talk) 23:31, 4 January 2025 (UTC)
- "Verifiable by comparing them to a reliable source" – comparing two images and saying that one looks like the other is not "verifying" anything. The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing. "Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake."" – Try presenting a paraphrasing as a quotation and see what happens. "Imagine it's 2002 and Wikipedia is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures..." – This basically happened, and is the origin of WP:NOTGALLERY. Wikipedia is not a host for original works. FOARP (talk) 22:01, 6 January 2025 (UTC)
- "Comparing two images and saying that one looks like the other is not "verifying" anything." Comparing text to text in a reliable source is literally the same thing. "The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing." No it isn't. The text equivalent is writing a sentence in an article and putting a ref tag on it. Perhaps there is room for improving the referencing of images in the sense that they should offer example comparisons to make. But an image created by a person is not unverifiable simply because it is user-generated. It is not somehow more unverifiable simply because it is created in a lifelike style. "Try presenting a paraphrasing as a quotation and see what happens." Besides what I just said, nobody is even presenting these images as equatable to quotations. People in this thread have simply been calling them "fake" of their own initiative; the uploaders have not asserted that these are literal photographs to my knowledge. The uploaders of illustrations obviously did not make that claim either. (And, if the contents of the image is a copyvio, that is a separate issue entirely.) "This basically happened, and is the origin of WP:NOTGALLERY." That is not the same thing. User-generated images that illustrate the subject are not prohibited by WP:NOTGALLERY. Wikipedia is a host of encyclopedic content, and user-generated images can have encyclopedic content. lethargilistic (talk) 02:41, 7 January 2025 (UTC)
- Images are way more complex than text. Trying to compare them in the same way is a very dangerous simplification. Cremastra (u — c) 02:44, 7 January 2025 (UTC)
- Assume only non-free images exist of a person. An illustrator refers to those non-free images and produces a painting. From that painting, you see a person who looks like the person in the non-free photographs. The image is verified as resembling the person. That is a simplification, but to call it "dangerous" is disingenuous at best. The process for challenging the image is clear. Someone who wants to challenge the veracity of the image would just need to point to details that do not align. For instance, "he does not typically have blue hair" or "he does not have a scar." That is what we already do, and it does not come up much because it would be weird to deliberately draw an image that looks nothing like the person. Additionally, someone who does not like the image for aesthetic reasons rather than encyclopedic ones always has the option of sourcing a photograph some other way like permission, fair use, or taking a new one themself. This is not an intractable problem. lethargilistic (talk) 02:57, 7 January 2025 (UTC)
- So a photorealistic AI-generated image would be considered acceptable until someone identifies a "big enough" difference? How is that anything close to ethical? A portrait that's got an extra mole or slightly wider nose bridge or lacks a scar is still not an image of the person regardless of whether random Wikipedia editors notice. And while I don't think user-generated non-photorealistic images should ever be used on biographies either, at least those can be traced back to a human who is ultimately responsible for the depiction, who can point to the particular non-free images they used as references, and isn't liable to average out details across all time periods of the subject. And that's not even taking into account the copyright issues. JoelleJay (talk) 22:52, 7 January 2025 (UTC)
- +1 to what JoelleJay said. The problem is that AI-generated images are simulations trying to match existing images, sometimes, yes, with an impressive degree of accuracy. But they will always be inferior to a human-drawn painting that's trying to depict the person. We're a human encyclopedia, and we're built by humans doing human things and sometimes with human errors. Cremastra (u — c) 23:18, 7 January 2025 (UTC)
- You can't just raise this to an "ethical" issue by saying the word "ethical." You also can't just invoke copyright without articulating an actual copyright issue; we are not discussing copyvio. Everyone agrees that a photo with an actual copyvio in it is subject to that policy.
- But to address your actual point: Any image—any photo—beneath the resolution necessary to depict the mole would be missing the mole. Even with photography, we are never talking about science-fiction images that perfectly depict every facet of a person in an objective sense. We are talking about equipment that creates an approximation of reality. The same is true of illustrations and AI imagery.
- Finally, a human being is responsible for the contents of the image because a human is selecting it and is responsible for correcting any errors. The result is an image that someone is choosing to use because they believe it is an appropriate likeness. We should acknowledge that human decision and evaluate it naturally—Is it an appropriate likeness? lethargilistic (talk) 10:20, 8 January 2025 (UTC)
- (Second comment because I'm on my phone.) I realize I should also respond to this in terms of additive information. What people look like is not static in the way your comment implies. Is it inappropriate to use a photo because they had a zit on the day it was taken? Not necessarily. Is an image inappropriate because it is taken at a bad angle that makes them look fat? Judging by the prolific ComicCon photographs (where people seem to make a game of choosing the worst-looking options; seriously, it's really bad), not necessarily. Scars and bruises exist and then often heal over time. The standard for whether an image with "extra" details is acceptable would still be based on whether it comports acceptably with other images; we literally do what you have capriciously described as "unethical" and supplement it with our compassionate desire to not deliberately embarrass BLPs. (The ComicCon images aside, I guess.) So, no, I would not be a fan of using images that add prominent scars where the subject is not generally known to have one, but that is just an unverifiable fact that does not belong in a Wikipedia image. Simple as. lethargilistic (talk) 10:32, 8 January 2025 (UTC)
- We don't evaluate the reliability of a source solely by comparing it to other sources. For example, there is an ongoing discussion at the baseball WikiProject talk page about the reliability of a certain web site. It lists no authors nor any information on its editorial control policy, so we're not able to evaluate its reliability. The reliability of all content being used as a source, including images, needs to be considered in terms of its provenance. isaacl (talk) 23:11, 7 January 2025 (UTC)
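(For illustration: the "compare the image to a reference photo" step debated above can be partially mechanised, though it cannot settle the editorial questions of provenance raised here. This is a minimal sketch assuming the third-party Pillow and ImageHash packages; the file names and the distance threshold are hypothetical. A small Hamming distance between perceptual hashes suggests visual similarity, nothing more.)
```python
from PIL import Image  # Pillow
import imagehash       # third-party ImageHash package

reference = imagehash.phash(Image.open("reliable_source_photo.jpg"))
candidate = imagehash.phash(Image.open("user_generated_portrait.png"))

# Subtracting two ImageHash objects yields the Hamming distance
# between the 64-bit perceptual hashes (0 = identical).
distance = reference - candidate
print(f"perceptual-hash distance: {distance}")
if distance <= 10:  # illustrative threshold, not a policy
    print("images are visually similar")
```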
- Can you note in your !vote whether AI-generated images (generated via text prompts/text-to-image models) that are not photorealistic / hyper-realistic in style are okay to use to depict BLP subjects? For example, see the image to the right, which was added and then removed from his article. Pinging people who !voted No above: User:Chaotic Enby, User:Cremastra, User:Horse Eye's Back, User:Pythoncoder, User:Kj cheetham, User:Bloodofox, User:Gnomingstuff, User:JoelleJay, User:Carrite, User:Seraphimblade, User:David Eppstein, User:Randy Kryn, User:Traumnovelle, User:SuperJew, User:Doawk7, User:Di (they-them), User:Masem, User:Cessaune, User:Zaathras, User:XOR'easter, User:Nikkimaria, User:FOARP, User:JuxtaposedJacob, User:ModernDayTrilobite, User:Nabla, User:Tepkunset, User:DragonflySixtyseven, User:Win8x, User:ToBeFree --- Some1 (talk) 03:55, 3 January 2025 (UTC)
- Still no, I thought I was clear on that but we should not be using AI-generated images in articles for anything besides representing the concept of AI-generated images, or if an AI-generated image is notable or irreplaceable in its own right -- e.g., a musician uses AI to make an album cover. (This isn't even a good example; it looks more like Steve Bannon.) Gnomingstuff (talk) 04:07, 3 January 2025 (UTC)
- Was I unclear? No to all of them. XOR'easter (talk) 04:13, 3 January 2025 (UTC)
- Still no, because carving out that type of exception will just lead to arguments down the line about whether a given image is too realistic. —pythoncoder (talk | contribs) 04:24, 3 January 2025 (UTC)
- I still think no. My opposition isn't just to the fact that AI images are misinformation, but also that they essentially serve as a loophole for getting around Enwiki's image use policy. To know what somebody looks like, an AI generator needs to have images of that person in its dataset, and it draws on those images to generate a derivative work. If we have no free images of somebody and we use AI to make one, that's just using a fair use copyrighted image but removed by one step. The image use policy prohibits us from using fair use images for BLPs so I don't think we should entertain this loophole. If we do end up allowing AI images in BLPs, that just disqualifies the rationale of not allowing fair use in the first place. Di (they-them) (talk) 04:40, 3 January 2025 (UTC)
- No those are not okay, as this will just cause arguments from people saying a picture is obviously AI-generated, and that it is therefore appropriate. As I mentioned above, there are some exceptions to this, which Gnomingstuff perfectly describes. Fake sketches/cartoons are not appropriate and provide little encyclopedic value. win8x (talk) 05:27, 3 January 2025 (UTC)
- No to this as well, with the same carveout for individual images that have received notable discussion. Non-photorealistic AI images are going to be no more verifiable than photorealistic ones, and on top of that will often be lower-quality as images. ModernDayTrilobite (talk • contribs) 05:44, 3 January 2025 (UTC)
- Thanks for the ping, yes I can, the answer is no. ~ ToBeFree (talk) 07:31, 3 January 2025 (UTC)
- No, and that image should be deleted before anyone places it into a mainspace article. Changing the RfC intro long after its inception seems a second bite at an apple that's not aged well. Randy Kryn (talk) 09:28, 3 January 2025 (UTC)
- The RfC question has not been changed; another editor was complaining that the RfC question did not make a distinction between photorealistic/non-photorealistic AI-generated images, so I had to add a note to the intro and ping the editors who'd !voted No to clarify things. It has only been 3 days; there's still 27 more days to go. Some1 (talk) 11:18, 3 January 2025 (UTC)
- Also answering No to this one per all the arguments above. "It has only been 3 days" is not a good reason to change the RfC question, especially since many people have already !voted and the "30 days" is mostly indicative rather than an actual deadline for a RfC. Chaotic Enby (talk · contribs) 14:52, 3 January 2025 (UTC)
- The RfC question hasn't been changed; see my response to Zaathras below. Some1 (talk) 15:42, 3 January 2025 (UTC)
- No, that's even a worse possible approach. — Masem (t) 13:24, 3 January 2025 (UTC)
- No. We're the human encyclopedia. We should have images drawn or taken by real humans who are trying to depict the subject, not by machines trying to simulate an image. Besides, the given example is horribly drawn. Cremastra (u — c) 15:03, 3 January 2025 (UTC)
- I like these even less than the photorealistic ones... This falls into the same basket for me: if we wouldn't let a random editor who drew this at home using conventional tools add it to the article, why would we let a random editor who drew this at home using AI tools add it to the article? (and just to be clear, the AI-generated image of Germán Larrea Mota-Velasco is not recognizable as such) Horse Eye's Back (talk) 16:06, 3 January 2025 (UTC)
- I said *NO*. FOARP (talk) 10:37, 4 January 2025 (UTC)
- No Having such images, as said above, means the AI had to use copyrighted pictures to create them, and we shouldn't use them. --SuperJew (talk) 01:12, 5 January 2025 (UTC)
- Still no. If for no other reason than that it's a bad precedent. As others have said, if we make one exception, it will just lead to arguments in the future about whether something is "realistic" or not. I also don't see why we would need cartoon/illustrated-looking AI pictures of people in BLPs. Tepkunset (talk) 20:43, 6 January 2025 (UTC)
- Absolutely not. These images are based on whatever the AI could find on the internet, with little to no regard for copyright. Wikipedia is better than this. Retswerb (talk) 10:16, 3 January 2025 (UTC)
- Comment The RfC question should not have been fiddled with, esp. for such a minor argument that the complainant could have simply included in their own vote. I have no need to re-confirm my own entry. Zaathras (talk) 14:33, 3 January 2025 (UTC)
- The RfC question hasn't been modified; I've only added a Note (at 03:58, January 3, 2025) clarifying that these images can be either photorealistic or non-photorealistic in style. I pinged all the No !voters to make them aware. I could remove the Note if people prefer that I do (but the original RfC question is the exact same [3] as it is now, so I don't think the addition of the Note makes a whole ton of difference). Some1 (talk) 15:29, 3 January 2025 (UTC)
- No At this point it feels redundant, but I'll just add to the horde of responses in the negative. I don't think we can fully appreciate the issues that this would cause. The potential problems and headaches far outweigh whatever little benefit might come from AI images for BLPs. pillowcrow 21:34, 3 January 2025 (UTC)
- Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)
- No. Wikipedia is made by and for humans. I don't want to become Google. Adding an AI-generated image to a page whose topic isn't about generative AI makes me feel insulted. SWinxy (talk) 00:03, 4 January 2025 (UTC)
- No. Generative AI may have its place, and it may even have a place on Wikipedia in some form, but that place isn't in BLPs. There's no reason to use images of someone that do not exist over a real picture, or even something like a sketch, drawing, or painting. Even in the absence of pictures or human-drawn/painted images, I don't support using AI-generated images; they're not really pictures of the person, after all, so I can't support using them on articles of people. Using nothing would genuinely be a better choice than generated images. SmittenGalaxy | talk! 01:07, 4 January 2025 (UTC)
- No due to reasons of copyright (AI harvests copyrighted material) and verifiability. Gamaliel (talk) 18:12, 4 January 2025 (UTC)
- No. Even if you are willing to ignore the inherently fraught nature of using AI-generated anything in relation to BLP subjects, there is simply little to no benefit that could possibly come from trying something like this. There's no guarantee the images will actually look like the person in question, and therefore there's no actual context or information that the image is providing the reader. What a baffling proposal. Ithinkiplaygames (talk) 19:53, 4 January 2025 (UTC)
There's no guarantee the images will actually look like the person in question
there is no guarantee any image will look like the person in question. When an image is not a good likeness, regardless of why, we don't use it. When an image is a good likeness we consider using it. Whether an image is AI-generated or not is completely independent of whether it is a good likeness. There are also reasons other than identification why images are used on BLP articles. Thryduulf (talk) 20:39, 4 January 2025 (UTC)
- Foreseeably there may come a time when people's official portraits are AI-enhanced. That time might not be very far in the future. Do we want an exception for official portraits?—S Marshall T/C 01:17, 5 January 2025 (UTC)
- This subsection is about purely AI-generated works, not about AI-enhanced ones. Chaotic Enby (talk · contribs) 01:23, 5 January 2025 (UTC)
- No. Per Cremastra, "We should have images drawn or taken by real humans who are trying to depict the subject," - User:RossEvans19 (talk) 02:12, 5 January 2025 (UTC)
- Yes, depending on the specific case. One can use drawings by artists, even such as caricature. The latter is an intentional distortion, one could say intentional misinformation. Still, such images are legitimate on many pages. Or consider numerous images of Jesus. How reliable are they? I am not saying we must deliberately use AI images on all pages, but they may be fine in some cases. Now, speaking of "medical articles"... One might actually use AI-generated images of certain biological objects like proteins or organelles. Of course a qualified editorial judgement is always needed to decide if they would improve a specific page (frequently they would not), but making a blanket ban would be unacceptable, in my opinion. For example, the images of protein models generated by AlphaFold would be fine. The AI-generated images of biological membranes I saw? I would say no. It depends. My very best wishes (talk) 02:50, 5 January 2025 (UTC) This is complicated of course. For example, there are tools that make an image of a person that (mis)represents him as someone much better and cleverer than he really is in life. That should be forbidden as an advertisement. This is a whole new world, but I do not think that a blanket rejection would be appropriate. My very best wishes (talk) 03:19, 5 January 2025 (UTC)
- No, I think there's legal and ethical issues here, especially with the current state of AI. Clovermoss🍀 (talk) 03:38, 5 January 2025 (UTC)
- No: Obviously, we shouldn't be using AI images to represent anyone. Lazman321 (talk) 05:31, 5 January 2025 (UTC)
- No Too risky for BLPs. Besides, if people want AI-generated content over editor-made content, we should make it clear they are in the wrong place, and readers should be given no doubt as to our integrity, sincerity and effort to give them our best, not a program's. Alanscottwalker (talk) 14:51, 5 January 2025 (UTC)
- No, as AI's grasp on the Internet takes hold stronger and stronger, it's important Wikipedia, as the online encyclopedia it sets out to be, remains factual and real. Using AI images on Wiki would likely do more harm than good, further thinning the boundaries between what's real and what's not. – zmbro (talk) (cont) 16:52, 5 January 2025 (UTC)
- No, not at the moment. I think it will be hard to avoid portraits that have been enhanced by AI, as that has already been ongoing for a number of years and there is no way to avoid it, but I don't want arbitrarily generated AI portraits of any type. scope_creepTalk 20:19, 5 January 2025 (UTC)
- No for natural images (e.g. photos of people). Generative AI by itself is not a reliable source for facts. In principle, generating images of people and directly sticking them in articles is no different than generating text and directly sticking it in articles. In practice, however, generating images is worse: Text can at least be discussed, edited, and improved afterwards. In contrast, we have significantly less policy and fewer rigorous methods of discussing how AI-generated images of natural objects should be improved (e.g. "make his face slightly more oblong, it's not close enough yet"). Discussion will devolve into hunches and gut feelings about the fidelity of images, all of which essentially fall under WP:OR. spintheer (talk) 20:37, 5 January 2025 (UTC)
- No I'm appalled that even a small minority of editors would support such an idea. We have enough credibility issues already; using AI-generated images to represent real people is not something that a real encyclopedia should even consider. LEPRICAVARK (talk) 22:26, 5 January 2025 (UTC)
- No I understand the comparison to using illustrations in BLP articles, but I've always viewed that as less preferable to no picture in all honesty. Images of a person are typically presented in context, such as a performer on stage, or a politician's official portrait, and I feel like there would be too many edge cases to consider in terms of making it clear that the photo is AI-generated and isn't representative of anything that the person specifically did, but is rather an approximation. Tpdwkouaa (talk) 06:50, 6 January 2025 (UTC)
- No - Too often the images resemble caricatures. Real caricatures may be included in articles if the caricature (e.g., political cartoon) had significant coverage and is attributed to the artist. Otherwise, representations of living persons should be real representations taken with photographic equipment. Robert McClenon (talk) 02:31, 7 January 2025 (UTC)
- So you will be arguing for the removal of the lead images at Banksy, CGP Grey, etc. then? Thryduulf (talk) 06:10, 7 January 2025 (UTC)
- At this point you're making bad-faith "BY YOUR LOGIC" arguments. You're better than that. Don't do it. DS (talk) 19:18, 7 January 2025 (UTC)
- Strong no per bloodofox. —Nythar (💬-🍀) 03:32, 7 January 2025 (UTC)
- No for AI-generated BLP images Mrfoogles (talk) 21:40, 7 January 2025 (UTC)
- No - Not only is this effectively guesswork that usually includes unnatural artefacts, but worse, it is also based on the unattributed work of photographers who didn't release their work into the public domain. I don't care if it is an open legal loophole somewhere; IMO even doing away with the fair use restriction on BLPs would be morally less wrong. I suspect people on whose work the LLMs in question were trained would also take less offense at that option. Daß Wölf 23:25, 7 January 2025 (UTC)
- No – WP:NFC says that
Non-free content should not be used when a freely licensed file that serves the same purpose can reasonably be expected to be uploaded, as is the case for almost all portraits of living people.
While AI images may not be considered copyrightable, it could still be a copyright violation if the output resembles other, copyrighted images, pushing the image towards NFC. At the very least, I feel the use of non-free content to generate AI images violates the spirit of the NFC policy. (I'm assuming copyrighted images of a person are used to generate an AI portrait of them; if free images of that person were used, we should just use those images, and if no images of the person were used, how on Earth would we trust the output?) RunningTiger123 (talk) 02:43, 8 January 2025 (UTC)
- No, AI images should not be permitted on Wikipedia at all. Stifle (talk) 11:27, 8 January 2025 (UTC)
Expiration date?
"AI," as the term is currently used, is very new. It feels like large language models and the type of image generators under discussion just got here in 2024. (Yes, I know it was a little earlier.) The culture hasn't completed its initial response to them yet. Right now, these images do more harm than good, but that may change. Either we'll come up with a better way of spotting hallucinations or the machines will hallucinate less. Their copyright status also seems unstable. I suggest that any ban decided upon here have some expiration date or required rediscussion date. Two years feels about right to me, but the important thing would be that the ban has a number on it. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)
- No need for any end-date. If there comes a point where consensus on this changes, then we can change any ban then. FOARP (talk) 05:27, 5 January 2025 (UTC)
- An end date is a positive suggestion. Consensus systems like Wikipedia's are vulnerable to half-baked precedential decisions being treated as inviolate. With respect, this conversation does not inspire confidence that this policy proposal's consequences are well-understood at this time. If Wikipedia goes in this direction, it should be labeled as primarily reactionary and open to review at a later date. lethargilistic (talk) 10:22, 5 January 2025 (UTC)
- Agree with FOARP, no need for an end date. If something significantly changes (e.g. reliable sources/news outlets such as the New York Times, BBC, AP, etc. start using text-to-image models to generate images of living people for their own articles) then this topic can be revisited later. Editors will have to go through the usual process of starting a new discussion/proposal when that time comes. Some1 (talk) 11:39, 5 January 2025 (UTC)
- Seeing as this discussion has not touched at all on what other organizations may or may not do, it would not be accurate to describe any consensus derived from this conversation in terms of what other organizations may or may not be doing. That is, there has been no consensus that we ought to be looking to the New York Times as an example. Doing so would be inadvisable for several reasons. For one, they have sued an AI company over semi-related issues and they have teams explicitly working on what the future of AI in news ought to look like, so they have some investment in what the future of AI looks like and they are explicitly trying to shape its norms. For another, if they did start to use AI in a way that may be controversial, they would have no positive reason to disclose that and many disincentives. They are not a neutral signal on this issue. Wikipedia should decide for itself, preferably doing so while not disrupting the ability of people to continue creating user-generated images. lethargilistic (talk) 03:07, 6 January 2025 (UTC)
- WP:Consensus can change on an indefinite basis, if something changes. An arbitrary sunset date doesn't seem much use. CMD (talk) 03:15, 6 January 2025 (UTC)
- No need per others. Additionally, if practices change, it doesn't mean editors will decide to follow new practices. As for the technology, it seems the situation has been fairly stable for the past two years: we can detect some fakes and hallucinations immediately, many more in the past, but certainly not all retouched elements and all generated photos available right now, even if there was a readily accessible tool or app that enabled ordinary people to reliably do so.
- Throughout history, art forgeries have been fairly reliably detected, but rarely quickly. Relatedly, I don't see why the situation with AI images would change in the next 24 months or any similar time period. Daß Wölf 22:17, 9 January 2025 (UTC)
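(For illustration: one classic forensic heuristic of the kind alluded to above is error level analysis (ELA): re-save a JPEG and look for regions whose compression error is uneven, which can flag some edits. This is a sketch assuming Pillow and a hypothetical file name; ELA is not a reliable detector of modern AI-generated images.)
```python
from PIL import Image, ImageChops  # Pillow

original = Image.open("suspect.jpg").convert("RGB")
original.save("resaved.jpg", "JPEG", quality=90)   # recompress once
resaved = Image.open("resaved.jpg")

# Pixel-wise difference: uniformly compressed regions tend to show low,
# even error; pasted or regenerated regions often stand out.
ela = ImageChops.difference(original, resaved)
extrema = ela.getextrema()                 # per-channel (min, max)
print("max error level:", max(hi for _, hi in extrema))
ela.save("ela_map.png")                    # inspect bright regions manually
```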
Should WP:Demonstrate good faith include mention of AI-generated comments?
Using AI to write your comments in a discussion makes it difficult for others to assume that you are discussing in good faith, rather than trying to use AI to argue someone into exhaustion (see example of someone using AI in their replies "Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people"). More fundamentally, WP:AGF can't apply to the AI itself as AI lacks intentionality, and it is difficult for editors to assess how much of an AI-generated comment reflects the training of the AI vs. the actual thoughts of the editor.
Should WP:DGF be amended to include that using AI to generate your replies in a discussion runs counter to demonstrating good faith? Photos of Japan (talk) 00:23, 2 January 2025 (UTC)
- Yes, I think this is a good idea. :bloodofox: (talk) 00:39, 2 January 2025 (UTC)
- No. As with all the other concurrent discussions (how many times do we actually need to discuss the exact same FUD and scaremongering?) the problem is not AI, but rather inappropriate use of AI. What we need to do is to (better) explain what we actually want to see in discussions, not vaguely defined bans of swathes of technology that, used properly, can aid communication. Thryduulf (talk) 01:23, 2 January 2025 (UTC)
- Note that this topic is discussing using AI to generate replies, as opposed to using it as an aid (e.g. asking it to edit for grammar, or conciseness). As the above concurrent discussion demonstrates, users are already using AI to generate their replies in AfD, so it isn't scaremongering but an actual issue.
- WP:DGF also does not ban anything ("Showing good faith is not required"), but offers general advice on demonstrating good faith. So it seems like the most relevant place to include mention of the community's concerns regarding AI-generated comments, without outright banning anything. Photos of Japan (talk) 01:32, 2 January 2025 (UTC)
- And as pointed out, multiple times in those discussions, different people understand different things from the phrase "AI-generated". The community's concern is not AI-generated comments, but comments that do not clearly and constructively contribute to a discussion - some such comments are AI-generated, some are not. This proposal would, just as all the other related ones, cause actual harm when editors falsely accuse others of using AI (and this will happen). Thryduulf (talk) 02:34, 2 January 2025 (UTC)
- Nobody signed up to argue with bots here. If you're pasting someone else's comment into a prompt and asking the chatbot to argue against that comment and just posting it in here, that's a real problem and absolutely should not be acceptable. :bloodofox: (talk) 03:31, 2 January 2025 (UTC)
- Thank you for the assumption of bad faith and demonstrating one of my points about the harm caused. Nobody is forcing you to engage with bad-faith comments, but whether something is or is not bad faith needs to be determined by its content not by its method of generation. Simply using an AI demonstrates neither good faith nor bad faith. Thryduulf (talk) 04:36, 2 January 2025 (UTC)
- I don't see why we have any particular reason to suspect a respected and trustworthy editor of using AI. Cremastra (u — c) 14:31, 2 January 2025 (UTC)
- I'm one of those people who clarified the difference between AI-generated vs. edited, and such a difference could be made explicit with a note. Editors are already accusing others of using AI. Could you clarify how you think addressing AI in WP:DGF would cause actual harm? Photos of Japan (talk) 04:29, 2 January 2025 (UTC)
- By encouraging editors to accuse others of using AI, by encouraging editors to dismiss or ignore comments because they suspect that they are AI-generated rather than engaging with them. @Bloodofox has already encouraged others to ignore my arguments in this discussion because they suspect I might be using an LLM and/or be a bot (for the record I'm neither). Thryduulf (talk) 04:33, 2 January 2025 (UTC)
- I think bloodofox's comment was about "you" in the rhetorical sense, not "you" as in Thryduulf. jlwoodwa (talk) 11:06, 2 January 2025 (UTC)
- Given your relentlessly pro-AI comments here, it seems that you'd be A-OK with just chatting with a group of chatbots here — or leaving the discussion to them. However, most of us clearly are not. In fact, I would immediately tell someone to get lost were it confirmed that indeed that is what is happening. I'm a human being and find the notion of wasting my time with chatbots on Wikipedia to be incredibly insulting and offensive. :bloodofox: (talk) 04:38, 2 January 2025 (UTC)
- My comments are neither pro-AI nor anti-AI, indeed it seems that you have not understood pretty much anything I'm saying. Thryduulf (talk) 04:43, 2 January 2025 (UTC)
- Funny, you've done nothing here but argue for more generative AI on the site and now you seem to be arguing to let chatbots run rampant on it while mocking anyone who doesn't want to interface with chatbots on Wikipedia. Hey, why not just sell the site to Meta, am I right? :bloodofox: (talk) 04:53, 2 January 2025 (UTC)
- I haven't been arguing for more generative AI on the site. I've been arguing against banning it on the grounds that such a ban would be unclear, unenforceable, wouldn't solve any problems (largely because whether something is AI or not is completely irrelevant to the matter at hand) but would instead cause harm. Some of the issues identified are actual problems, but AI is not the cause of them and banning AI won't fix them.
- I'm not mocking anybody, nor am I advocating to "let chatbots run rampant". I'm utterly confused why you think I might advocate for selling Wikipedia to Meta (or anyone else for that matter)? Are you actually reading anything I'm writing? You clearly are not understanding it. Thryduulf (talk) 05:01, 2 January 2025 (UTC)
- So we're now in 'everyone else is the problem, not me!' territory now? Perhaps try communicating in a different way because your responses here are looking very much like the typical AI apologetics one can encounter on just about any contemporary LinkedIn thread from your typical FAANG employee. :bloodofox: (talk) 05:13, 2 January 2025 (UTC)
- No, this is not an "everyone else is the problem, not me" issue, because most other people appear to be able to understand my arguments and respond to them appropriately. Not everybody agrees with them, but that's not an issue.
- I'm not familiar with LinkedIn threads (I don't use that platform) nor what a "FAANG employee" is (I've literally never heard the term before now), so I have no idea whether your characterisation is a compliment or a personal attack, but given your comments towards me and others you disagree with elsewhere I suspect it's closer to the latter.
- AI is a tool. Just like any other tool it can be used in good faith or in bad faith, it can be used well and it can be used badly, it can be used in appropriate situations and it can be used in inappropriate situations, the results of using the tool can be good and the results of using the tool can be bad. Banning the tool inevitably bans the good results as well as the bad results but doesn't address the reasons why the results were good or bad and so does not resolve the actual issue that led to the bad outcomes. Thryduulf (talk) 12:09, 2 January 2025 (UTC)
- In the context of generating comments to other users though, AI is much easier to use for bad faith than for good faith. LLMs don't understand Wikipedia's policies and norms, and so are hard to utilize to generate posts that productively address them. By contrast, bad actors can easily use LLMs to make low quality posts to waste people's time or wear them down.
- In the context of generating images, or text for articles, it's easy to see how the vast majority of users using AI for those purposes is acting in good faith as these are generally constructive tasks, and most people making bad faith changes to articles are either obvious vandals who won't bother to use AI because they'll be reverted soon anyways, or trying to be subtle (povpushers) in which case they tend to want to carefully write their own text into the article.
- It's true that AI "is just a tool", but when that tool is much easier to use for bad faith purposes (in the context of discussions) then it raises suspicions about why people are using it. Photos of Japan (talk) 22:44, 2 January 2025 (UTC)
LLMs don't understand Wikipedia's policies and norms
They're not designed to "understand" them since the policies and norms were designed for human cognition. The fact that AI is used rampantly by people acting in bad faith on Wikipedia does not inherently condemn the AI. To me, it shows that it's too easy for vandals to access and do damage on Wikipedia. Unfortunately, the type of vetting required to prevent that at the source would also potentially require eliminating IP-editing, which won't happen. Duly signed, ⛵ WaltClipper -(talk) 14:33, 15 January 2025 (UTC)
- No, this is not a
- You mentioned "FUD". That acronym, "fear, uncertainty and doubt," is used in precisely two contexts: pro-AI propagadizing and persuading people who hold memecoin crypto to continue holding it. Since this discussion is not about memecoin crypto that would suggest you are using it in a pro-AI context. I will note, fear, uncertainty and doubt is not my problem with AI. Rather it's anger, aesthetic disgust and feeling disrespected when somebody makes me talk to their chatbot. Simonm223 (talk) 14:15, 14 January 2025 (UTC)
- "That acronym, "fear, uncertainty and doubt," is used in precisely two contexts" is simply false. FUD both predates AI by many decades (my father introduced me to the term in the context of the phrase "nobody got fired for buying IBM", and the context of that was mainframe computer systems in the 1980s if not earlier). FUD is also used in many, many more contexts than just those two you list, including examples by those opposing the use of AI on Wikipedia in these very discussions. Thryduulf (talk) 14:47, 14 January 2025 (UTC)
- "That acronym, "fear, uncertainty and doubt," is used in precisely two contexts" is factually incorrect. FUD both predates AI by many decades (indeed, if you'd bothered to read the fear, uncertainty and doubt article you'd learn that the concept was first recorded in 1693, the exact formulation dates from at least the 1920s, and its use in technology contexts originated in 1975 in the context of mainframe computer systems). The claim that its use, even in just AI contexts, is limited to pro-AI advocacy is ludicrous (even ignoring things like Roko's basilisk); examples can be found in these sprawling discussions from those opposing AI use on Wikipedia. Thryduulf (talk) 14:52, 14 January 2025 (UTC)
- Funny, you've done nothing here but argue for more generative AI on the site and now you seem to be arguing to let chatbots run rampant on it while mocking anyone who doesn't want to interface with chatbots on Wikipedia. Hey, why not just sell the site to Meta, am I right? :bloodofox: (talk) 04:53, 2 January 2025 (UTC)
- My comments are neither pro-AI nor anti-AI, indeed it seems that you have not understood pretty much anything I'm saying. Thryduulf (talk) 04:43, 2 January 2025 (UTC)
- By encouraging editors to accuse others of using AI, by encouraging editors to dismiss or ignore comments because they suspect that they are AI-generated rather than engaging with them. @Bloodofox has already encouraged others to ignore my arguments in this discussion because they suspect I might be using an LLM and/or be a bot (for the record I'm neither). Thryduulf (talk) 04:33, 2 January 2025 (UTC)
- Nobody signed up to argue with bots here. If you're pasting someone else's comment into a prompt and asking the chatbot to argue against that comment and just posting it here, that's a real problem and absolutely should not be acceptable. :bloodofox: (talk) 03:31, 2 January 2025 (UTC)
- And as pointed out, multiple times in those discussions, different people understand different things from the phrase "AI-generated". The community's concern is not AI-generated comments, but comments that do not clearly and constructively contribute to a discussion - some such comments are AI-generated, some are not. This proposal would, just as all the other related ones, cause actual harm when editors falsely accuse others of using AI (and this will happen). Thryduulf (talk) 02:34, 2 January 2025 (UTC)
- WP:DGF also does not ban anything ("Showing good faith is not required"), but offers general advice on demonstrating good faith. So it seems like the most relevant place to include mention of the community's concerns regarding AI-generated comments, without outright banning anything. Photos of Japan (talk) 01:32, 2 January 2025 (UTC)
- Not really – I agree with Thryduulf's arguments on this one. Using AI to help tweak or summarize or "enhance" replies is of course not bad faith – the person is trying hard. Maybe English is their second language. Even for replies 100% AI-generated the user may be an ESL speaker struggling to remember the right words (I always forget 90% of my French vocabulary when writing anything in French, for example). In this case, I don't think we should make a blanket assumption that using AI to generate comments is not showing good faith. Cremastra (u — c) 02:35, 2 January 2025 (UTC)
- Yes, because generating walls of text is not good faith. People "touching up" their comments is also bad (for starters, if you lack the English competency to write your statements in the first place, you probably lack the competency to tell whether your meaning has been preserved or not). Exactly what AGF should say needs work, but something needs to be said, and DGF is a good place to do it. XOR'easter (talk) 02:56, 2 January 2025 (UTC)
- Not all walls of text are generated by AI, and not all AI-generated comments are walls of text. Not everybody who uses AI to touch up their comments lacks the competencies you describe, and not everybody who does lack those competencies uses AI. It is not always possible to tell which comments have been generated by AI and which have not. This proposal is not particularly relevant to the problems you describe. Thryduulf (talk) 03:01, 2 January 2025 (UTC)
- Someone has to ask: Are you generating all of these pro-AI arguments using ChatGPT? It'd explain a lot. If so, I'll happily ignore any and all of your contributions, and I'd advise anyone else to do the same. We're not here to be flooded with LLM-derived responses. :bloodofox: (talk) 03:27, 2 January 2025 (UTC)
- That you can't tell whether my comments are AI-generated or not is one of the fundamental problems with these proposals. For the record they aren't, nor are they pro-AI - they're simply anti throwing out babies with bathwater. Thryduulf (talk) 04:25, 2 January 2025 (UTC)
- I'd say it also illustrates the serious danger: We can no longer be sure that we're even talking to other people here, which is probably the most notable shift in the history of Wikipedia. :bloodofox: (talk) 04:34, 2 January 2025 (UTC)
- How is that a "serious danger"? If a comment makes a good point, why does it matter whether it was AI-generated or not? If it doesn't make a good point, why does it matter if it was AI-generated or not? How will these proposals resolve that "danger"? How will they be enforceable? Thryduulf (talk) 04:39, 2 January 2025 (UTC)
- Wikipedia is made for people, by people, and I like most people will be incredibly offended to find that we're just playing some kind of LLM pong with a chatbot of your choice. You can't be serious. :bloodofox: (talk) 04:40, 2 January 2025 (UTC)
- You are entitled to that philosophy, but that doesn't actually answer any of my questions. Thryduulf (talk) 04:45, 2 January 2025 (UTC)
- "why does it matter if it was AI generated or not?"
- Because it takes little effort to post a lengthy, low-quality AI-generated post, and a lot of effort for human editors to write up replies debunking it.
- "How will they be enforceable? "
- WP:DGF isn't meant to be enforced. It's meant to explain to people how they can demonstrate good faith. Posting replies to people (who took the time to write them) that are obviously AI-generated harms the ability of those people to assume good faith. Photos of Japan (talk) 05:16, 2 January 2025 (UTC)
- The linked "example of someone using AI in their replies" appears – to me – to be a non-AI-generated comment. I think I preferred the allegedly AI-generated comments from that user (example). The AI was at least superficially polite. WhatamIdoing (talk) 04:27, 2 January 2025 (UTC)
- Obviously the person screaming in all caps that they use AI because they don't want to waste their time arguing is not using AI for that comment. Their first post calls for the article to be deleted for not "offering new insights or advancing scholarly understanding" and "merely" reiterating what other sources have written.
- Yes: after a human had wasted their time explaining all the things wrong with its first post, the bot was able to write a second post which looks OK. Except it only superficially looks OK; it doesn't actually accurately describe the articles. Photos of Japan (talk) 04:59, 2 January 2025 (UTC)
- Multiple humans have demonstrated in these discussions that humans are equally capable of writing posts which superficially look OK but don't actually accurately relate to anything they are responding to. Thryduulf (talk) 05:03, 2 January 2025 (UTC)
- But I can assume that everyone here is acting in good faith. I can't assume good faith in the globally-locked sock puppet spamming AfD discussions with low effort posts, whose bot is just saying whatever it can to argue for the deletion of political pages the editor doesn't like. Photos of Japan (talk) 05:09, 2 January 2025 (UTC)
- True, but I think that has more to do with the "globally-locked sock puppet spamming AfD discussions" part than with the "some of it might be [AI-generated]" part. WhatamIdoing (talk) 07:54, 2 January 2025 (UTC)
- All of which was discovered because of my suspicions about their inhuman and meaningless replies. "Reiteration isn't the problem; redundancy is," maybe sounds pithy in a vacuum, but this was written in reply to me stating that we aren't supposed to be doing OR but reiterating what the sources say.
- "Your criticism feels overly prescriptive, as though you're evaluating this as an academic essay" also sounds good, until you realize that the bot is actually criticizing its own original post.
- The fact that my suspicions about their good faith were ultimately validated only makes it even harder for me to assume good faith in users who sound like ChatGPT. Photos of Japan (talk) 08:33, 2 January 2025 (UTC)
- I wonder if we need some other language here. I can understand feeling like this is a bad interaction. There's no sense that the person cares; there's no feeling like this is a true interaction. A contract lawyer would say that there's no meeting of the minds, and there can't be, because there's no mind in the AI, and the human copying from the AI doesn't seem to be interested in engaging their brain.
- But... do you actually think they're doing this for the purpose of intentionally harming Wikipedia? Or could this be explained by other motivations? Never attribute to malice that which can be adequately explained by stupidity – or to anxiety, insecurity (will they hate me if I get my grammar wrong?), incompetence, negligence, or any number of other "understandable" (but still something WP:SHUN- and even block-worthy) reasons. WhatamIdoing (talk) 08:49, 2 January 2025 (UTC)
- The user's talk page has a header at the top asking people not to template them because it is "impersonal and disrespectful", instead requesting: "please take a moment to write a comment below in your own words".
- Does this look like acting in good faith to you? Requesting other people write personalized responses to them while they respond with an LLM? Because it looks to me like they are trying to waste other people's time. Photos of Japan (talk) 09:35, 2 January 2025 (UTC)
- Wikipedia:Assume good faith means that you assume people aren't deliberately screwing up on purpose. Humans are self-contradictory creatures. I generally do assume that someone who is being hypocritical hasn't noticed their contradictions yet. WhatamIdoing (talk) 07:54, 3 January 2025 (UTC)
- "Being hypocritical" in the abstract isn't the problem, it's the fact that asking people to put effort into their comments, while putting in minimal effort into your own comments appears bad faith, especially when said person says they don't want to waste time writing comments to stupid people. The fact you are arguing AGF for this person is both astounding and disappointing. Photos of Japan (talk) 16:08, 3 January 2025 (UTC)
- It feels like there is a lack of reciprocity in the interaction, even leaving aside the concern that the account is a block-evading sock.
- But I wonder if you have read AGF recently. The first sentence is "Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Wikipedia, even when their actions are harmful."
- So we've got some of this (e.g., harmful actions). But do you really believe this person woke up in the morning and decided "My main goal for today is to deliberately hurt Wikipedia. I might not be successful, but I sure am going to try hard to reach my goal"? WhatamIdoing (talk) 23:17, 4 January 2025 (UTC)
- Trying to hurt Wikipedia doesn't mean they have to literally think "I am trying to hurt Wikipedia", it can mean a range of things, such as "I am trying to troll Wikipedians". A person who thinks a cabal of editors is guarding an article page, and that they need to harass them off the site, may think they are improving Wikipedia, but at the least I wouldn't say that they are acting in good faith. Photos of Japan (talk) 23:27, 4 January 2025 (UTC)
- Sure, I'd count that as a case of "trying to hurt Wikipedia-the-community". WhatamIdoing (talk) 06:10, 5 January 2025 (UTC)
- The issues with AI in discussions are not related to good faith, which is narrowly defined in terms of intent. CMD (talk) 04:45, 2 January 2025 (UTC)
- In my mind, they are related inasmuch as it is much more difficult for me to ascertain good faith if the words are eminently not written by the person I am speaking to in large part, but instead generated based on an unknown prompt in what is likely a small fraction of the expected time. To be frank, in many situations it is difficult to avoid the conclusion that the disparity in effort is being leveraged in something less than good faith. Remsense ‥ 论 05:02, 2 January 2025 (UTC)
- Assume good faith, don't ascertain! LLM use can be deeply unhelpful for discussions and the potential for misuse is large, but in the most recent discussion I've been involved with where I observed an LLM post, it was responded to by an LLM post, and I believe both users were doing this in good faith. CMD (talk) 05:07, 2 January 2025 (UTC)
- All I mean to say is that it should be licit to mention unhelpful LLM use like any other unhelpful rhetorical pattern. Remsense ‥ 论 05:09, 2 January 2025 (UTC)
- Sure, but WP:DGF doesn't mention any unhelpful rhetorical patterns. CMD (talk) 05:32, 2 January 2025 (UTC)
- The fact that everyone (myself included) defending "LLM use" says "use" rather than "generated" is a pretty clear sign that no one really wants to communicate with someone using "LLM generated" comments. We can argue about bans (not being proposed here), how to know if someone is using an LLM, the nuances of "LLM use", etc., but at the very least we should be able to agree that there are concerns with LLM-generated replies, and if we can agree that there are concerns then we should be able to agree that somewhere in policy we should be able to find a place to express those concerns. Photos of Japan (talk) 05:38, 2 January 2025 (UTC)
- ...or they could be saying "use" because "using LLMs" is shorter and more colloquial than "generating text with LLMs"? Gnomingstuff (talk) 06:19, 2 January 2025 (UTC)
- Seems unlikely when people justify their use for editing (which I also support), and not for generating replies on their behalf. Photos of Japan (talk) 06:23, 2 January 2025 (UTC)
- This is just semantics.
- For instance, I am OK with someone using a LLM to post a productive comment on a talk page. I am also OK with someone generating a reply with a LLM that is a productive comment to post to a talk page. I am not OK with someone generating text with an LLM to include in an article, and also not OK with someone using a LLM to contribute to an article.
- The only difference between these four sentences is that two of them are more annoying to type than the other two. Gnomingstuff (talk) 08:08, 2 January 2025 (UTC)
- Most people already assume good faith in those making productive contributions. In situations where good faith is more difficult to assume, would you trust someone who uses an LLM to generate all of their comments as much as someone who doesn't? Photos of Japan (talk) 09:11, 2 January 2025 (UTC)
- Given that LLM-use is completely irrelevant to the faith in which a user contributes, yes. Of course what amount that actually is may be anywhere between completely and none. Thryduulf (talk) 11:59, 2 January 2025 (UTC)
- LLM-use is relevant as it allows bad faith users to disrupt the encyclopedia with minimal effort. Such a user posted in this thread earlier, as well as started a disruptive thread here and posted here, all using AI. I had previously been involved in a debate with another sock puppet of theirs, but at that time they didn't use AI. Now it seems they are switching to using an LLM just to troll with minimal effort. Photos of Japan (talk) 21:44, 2 January 2025 (UTC)
- LLMs are a tool that can be used by good and bad faith users alike. Using an LLM tells you nothing about whether a user is contributing in good or bad faith. If somebody is trolling they can be, and should be, blocked for trolling regardless of the specifics of how they are trolling. Thryduulf (talk) 21:56, 2 January 2025 (UTC)
- A can of spray paint, a kitchen knife, etc., are tools that can be used for good or bad, but if you bring them some place where they have few good uses and many bad uses then people will be suspicious about why you brought them. You can't just assume that a tool in any context is equally harmless. Using AI to generate replies to other editors is more suspicious than using it to generate a picture exemplifying a fashion style, or a description of a physics concept. Photos of Japan (talk) 23:09, 2 January 2025 (UTC)
- I wouldn't trust anything factual the person would have to say, but I wouldn't assume they were malicious, which is the entire point of WP:AGF. Gnomingstuff (talk) 16:47, 2 January 2025 (UTC)
- WP:AGF is not a death pact though. At times you should be suspicious. Do you think that if a user, who you already have suspicions of, is also using an LLM to generate their comments, that that doesn't have any effect on those suspicions? Photos of Japan (talk) 21:44, 2 January 2025 (UTC)
- So… If you suspect that someone is not arguing in good faith… just stop engaging them. If they are creating walls of text but not making policy based arguments, they can be ignored. Resist the urge to respond to every comment… it isn’t necessary to “have the last word”. Blueboar (talk) 21:57, 2 January 2025 (UTC)
- As the person just banned at ANI for persistently using LLMs to communicate demonstrates, you can't "just stop engaging them". When they propose changes to an article and say they will implement them if no one replies then somebody has to engage them in some way. It's not about trying to "have the last word", this is a collaborative project, it generally requires engaging with others to some degree. When someone like the person I linked to above (now banned sock), spams low quality comments across dozens of AfDs, then they are going to waste people's time, and telling others to just not engage with them is dismissive of that. Photos of Japan (talk) 22:57, 2 January 2025 (UTC)
- That they've been banned for disruption indicates we can do everything we need to do to deal with bad faith users of LLMs without assuming that everyone using an LLM is doing so in bad faith. Thryduulf (talk) 00:33, 3 January 2025 (UTC)
- I don't believe we should assume everyone using an LLM is doing so in bad faith, so I'm glad you think my comment indicates what I believe. Photos of Japan (talk) 01:09, 3 January 2025 (UTC)
- No -- whatever you think of LLMs, the reason they are so popular is that the people who use them earnestly believe they are useful. Claiming otherwise is divorced from reality. Even people who add hallucinated bullshit to articles are usually well-intentioned (if wrong). Gnomingstuff (talk) 06:17, 2 January 2025 (UTC)
- Comment I have no opinion on this matter; however, note that we are currently dealing with a real-world application of this at ANI and there's a generalized state of confusion in how to address it. Chetsford (talk) 08:54, 2 January 2025 (UTC)
- Yes. I find it incredibly rude for someone to procedurally generate text and then expect others to engage with it as if they were actually saying something themselves. Simonm223 (talk) 14:34, 2 January 2025 (UTC)
- Yes, mention that use of an LLM should be disclosed and that failure to do so is like not telling someone you are taping the call. Selfstudier (talk) 14:43, 2 January 2025 (UTC)
- I could support general advice that if you're using machine translation or an LLM to help you write your comments, it can be helpful to mention this in the message. The tone to take, though, should be "so people won't be mad at you if it screwed up the comment" instead of "because you're an immoral and possibly criminal person if you do this". WhatamIdoing (talk) 07:57, 3 January 2025 (UTC)
- No. When someone publishes something under their own name, they are incorporating it as their own statement. Plagiarism from an AI or elsewhere is irrelevant to whether they are engaging in good faith. lethargilistic (talk) 17:29, 2 January 2025 (UTC)
- Comment LLMs know a few tricks about logical fallacies and some general ways of arguing (rhetoric), but they are incredibly dumb at understanding the rules of Wikipedia. You can usually tell this because it looks like incredibly slick and professional prose, but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia. I would indef such users for lacking WP:CIR. tgeorgescu (talk) 17:39, 2 January 2025 (UTC)
- That guideline states "Sanctions such as blocks and bans are always considered a last resort where all other avenues of correcting problems have been tried and have failed." Gnomingstuff (talk) 19:44, 2 January 2025 (UTC)
- WP:CIR isn't a guideline, but an essay. Relevantly though it is being cited at this very moment in an ANI thread concerning a user who can't/won't communicate without an LLM. Photos of Japan (talk) 20:49, 2 January 2025 (UTC)
- I blocked that user as NOTHERE a few minutes ago after seeing them (using ChatGPT) make suggestions for text to live pagespace while their previous bad behaviors were under discussion. AGF is not a suicide pact. BusterD (talk) 20:56, 2 January 2025 (UTC)
... but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia
: That problem existed with some humans even prior to LLMs. —Bagumba (talk) 02:53, 20 January 2025 (UTC)
- No - Not a good or bad faith issue. PackMecEng (talk) 21:02, 2 January 2025 (UTC)
- Yes Using a 3rd party service to contribute to the Wikipedia on your behalf is clearly bad-faith, analogous to paying someone to write your article. Zaathras (talk) 14:39, 3 January 2025 (UTC)
- It's a stretch to say that a newbie writing a comment using AI is automatically acting in bad faith and not here to build an encyclopedia. PackMecEng (talk) 16:55, 3 January 2025 (UTC)
- That's true, but this and other comments here show that not a few editors perceive it as bad-faith, rude, etc. I take that as an indication that we should tell people to avoid doing this when they have enough CLUE to read WP:AGF and are making an effort to show they're acting in good faith. Daß Wölf 23:06, 9 January 2025 (UTC)
- Comment Large language model AIs like ChatGPT are in their infancy. The culture hasn't finished its initial reaction to them yet. I suggest that any proposal made here have an automatic expiration/required rediscussion date two years after closing. Darkfrog24 (talk) 22:42, 3 January 2025 (UTC)
- No – It is a matter of how you use AI. I use Google Translate to add trans-title parameters to citations, but I am careful to check that Google's output makes for good English as well as reflecting the foreign title when it is a language I somewhat understand. I like to think that I am careful, and I do not pretend to be fluent in a language I am not familiar with, although I usually don't announce the source of such a translation. If an editor uses AI profligately and without understanding the material generated, then that is the sin, not AI itself. Dhtwiki (talk) 05:04, 5 January 2025 (UTC)
- There's a legal phrase, "when the exception swallows the rule", and I think we might be headed there with the recent LLM/AI discussions.
- We start off by saying "Let's completely ban it!" Then in discussion we add "Oh, except for this very reasonable thing... and that reasonable thing... and nobody actually meant this other reasonable thing..."
- The end result is that it's "completely banned" ...except for an apparent majority of uses. WhatamIdoing (talk) 06:34, 5 January 2025 (UTC)
- Do you want us to reply to you, because you are a human? Or are you just posting the output of an LLM without bothering to read anything yourself? DS (talk) 06:08, 7 January 2025 (UTC)
- Most likely you would reply because someone posted a valid comment and you are assuming they are acting in good faith and taking responsibility for what they post. To assume otherwise is kind of weird and not in line with general Wikipedia values. PackMecEng (talk) 15:19, 8 January 2025 (UTC)
- No The OP seems to misunderstand WP:DGF which is not aimed at weak editors but instead exhorts stronger editors to lead by example. That section already seems to overload the primary point of WP:AGF and adding mention of AI would be quite inappropriate per WP:CREEP. Andrew🐉(talk) 23:11, 5 January 2025 (UTC)
- No. Reading the current text of the section, adding text about AI would feel out-of-place for what the section is about. —pythoncoder (talk | contribs) 05:56, 8 January 2025 (UTC)
- No, this is not about good faith. Adumbrativus (talk) 11:14, 9 January 2025 (UTC)
- Yes. AI use is not a demonstration of bad faith (in any case not every new good-faith editor is familiar with our AI policies), but it is equally not a "demonstration of good faith", which is what the WP:DGF section is about.
- It seems some editors are missing the point and !voting as if every edit is either a demonstration of good faith or bad faith. Most interactions are neutral and so is most AI use, but I find it hard to imagine a situation where AI use would point away from unfamiliarity and incompetence (in the CIR sense), and it often (unintentionally) leads to a presumption of laziness and open disinterest. It makes perfect sense to recommend against it. Daß Wölf 22:56, 9 January 2025 (UTC)
- Indeed most kinds of actions don't inherently demonstrate good or bad. The circumspect and neutral observation that
AI use is not a demonstration of bad faith... but it is equally not a "demonstration of good faith"
does not justify a proposal to one-sidedly say just half. And among all the actions that don't necessarily demonstrate good faith (and don't necessarily demonstrate bad faith either), it is not the purpose of "demonstrate good faith", or of the broader guideline, to single out one kind of action to especially mention negatively. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)
- Yes. Per Dass Wolf, though I would say passing off a completely AI-generated comment as your own anywhere is inherently bad-faith and one doesn't need to know Wiki policies to understand that. JoelleJay (talk) 23:30, 9 January 2025 (UTC)
- Yes. Sure, LLMs may have utility somewhere, and they might be a crutch for people unfamiliar with English, but as I've said above in the other AI RfC, that's a competence issue. This is about comments eating up editor time and energy, and about LLMs easily being used to ram through changes and poke at editors in good standing. I don't see a case wherein a prospective editor's command of policy and language is good enough to discuss with other editors while being bad enough to require LLM use. Iseult Δx talk to me 01:26, 10 January 2025 (UTC)
- Good faith is separate from competence. Trying to do good is separate from having skills and knowledge to achieve good results. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)
- No - anyone using a washing machine to wash their clothes must be evil and inherently lazy. They cannot be trusted. ... Oh, sorry, wrong century. Regards, --Goldsztajn (talk) 01:31, 10 January 2025 (UTC)
- No - As long as a person understands (and knows) what they are talking about, we shouldn't discriminate against folks using generative AI tech for grammar fixes or minor flow improvements. Yes, AI can create walls of text, and make arguments not grounded in policy, but we could do that even without resorting to generative AI. Sohom (talk) 11:24, 13 January 2025 (UTC)
- To expand on my point above. Completely AI generated comments (or articles) are obviously bad, but
using AI
should be thrown into the same cross-hairs as completely AI generated comments. Sohom (talk) 11:35, 13 January 2025 (UTC)
- @Sohom Datta You mean shouldn't be thrown? I think that would make more sense given the context of your original !vote. Duly signed, ⛵ WaltClipper -(talk) 14:08, 14 January 2025 (UTC)
- No. Don't make any changes. It's not a good faith/bad faith issue. The 'yes' arguments are most unconvincing with very bizarre analogies to make their point. Here, I can make one too: "Don't edit with AI; you wouldn't shoot your neighbor's dog with a BB-gun, would you?" Duly signed, ⛵ WaltClipper -(talk) 14:43, 13 January 2025 (UTC)
Extended content
I appreciate your concern about the use of AI in discussions. It is important to be mindful of how AI is used, and to ensure that it is used in a way that is respectful of others.
I don't think that WP:DGF should be amended to specifically mention AI. However, I do think that it is important to be aware of the potential for AI to be used in a way that is not in good faith. When using AI, it is important to be transparent about it. Let others know that you are using AI, and explain how you are using it. This will help to build trust and ensure that others understand that you are not trying to deceive them. It is also important to be mindful of the limitations of AI. AI is not a perfect tool, and it can sometimes generate biased or inaccurate results. Be sure to review and edit any AI-generated content before you post it. Finally, it is important to remember that AI is just a tool. It is up to you to use it in a way that is respectful and ethical.
It's easy to detect for most, can be pointed out as needed. No need to add an extra policy. JayCubby
Allowing non-admin "delete" closures at RfD
At Wikipedia:Deletion review#Clock/calendar, a few editors (Enos733 and Jay, with Robert McClenon and OwenX hinting at it) expressed support for allowing non-administrators to close RfD discussions as "delete". While I don't personally hold strong opinions in this regard, I would like for this idea to be discussed here. JJPMaster (she/they) 13:13, 7 January 2025 (UTC)
- That would not be helpful. -- Tavix (talk) 14:10, 7 January 2025 (UTC)
- While I have no issue with the direction the linked discussion has taken, I agree with almost every contributor there: as a practice, I have zero interest in generally allowing random editors to close outside their permissions. It might make DRV a more chatty board, granted. BusterD (talk) 15:02, 7 January 2025 (UTC)
- Tamzin makes a reasonable case in their comment below. When we have already chosen to trust certain editors with advanced permissions, we might allow those folks to utilize them as fully as accepted practice allows. Those humans already have skin in the game. They are unlikely to act rashly. BusterD (talk) 19:32, 7 January 2025 (UTC)
- To me, non-admin delete closes at any XfD have always seemed inconsistent with what we say about how adminship and discussion closing work. I would be in violation of admin policy if I deleted based on someone else's close without conducting a full review myself, in which case, what was the point of their close? It's entirely redundant to my own work. That said, I can't really articulate a reason that this should be allowed at some XfDs but not others, and it seems to have gone fine at CfD and TfD. I guess call me neutral. What I'd be more open to is allowing page movers to do this. Page movers do have the tools to turn a bluelink red, so it doesn't create the same admin accountability issue if I'm just cleaning up the stray page left over from a page mover's use of a tool that they were duly granted and subject to their own accountability rules for. We could let them move a redirect to some other plausible title (this would violate WP:MOVEREDIRECT as currently written but I think I'd be okay with making this a canonical exception), and/or allow moving to some draftspace or userspace page and tagging for G6, as we do with {{db-moved}}. I'll note that when I was a non-admin pagemover, I did close a few things as delete where some edge case applied that let me effect the deletion using only suppressredirect, and no one ever objected. -- Tamzin[cetacean needed] (they|xe|🤷) 19:07, 7 January 2025 (UTC)
- I see that I was sort of vague, which is consistent with the statement that I hinted at allowing non-admin delete closures. My main concern is that I would like to see our guidelines and our practice made consistent, either by changing the guidelines or changing the practice. It appears that there is a rough consensus emerging that non-admin delete closures should continue to be disallowed in RFD, but that CFD may be a special case. So what I am saying is that if, in practice, we allow non-admin Delete closures at CFD, the guideline should say something vague to that effect.
- I also see that there is a consensus that DRV can endorse irregular non-admin closures, including irregular non-admin Delete closures. Specifically, it isn't necessary for DRV to vacate the closure for an uninvolved admin to close. A consensus at DRV, some of whose editors will be uninvolved admins, is at least as good a close as a normal close by an uninvolved admin.
- Also, maybe we need clearer guidance about non-admin Keep closures of AFDs. I think that if an editor is not sure whether they have sufficient experience to be closing AFDs as Keep, they don't have enough experience. I think that the guidance is clear enough in saying that administrator accountability applies to non-admin closes, but maybe it needs to be further strengthened, because at DRV we sometimes deal with non-admin closes where the closer doesn't respond to inquiries, or is rude in response to them.
- Also, maybe we need clearer guidance about non-admin No Consensus closures of AFDs. In particular, a close of No Consensus is a contentious closure, and should either be left to an admin, or should be Relisted.
- Robert McClenon (talk) 19:20, 7 January 2025 (UTC)
- As for
I can't really articulate a reason that this should be allowed at some XfDs
, the argument is that more work is needed to enact closures at TfD and CfD (namely orphaning templates and emptying/moving/merging categories). Those extra steps aren't present at RfD. At most, there are times when it's appropriate to unlink the redirect or add WP:RCATs, but those are automated steps that WP:XFDC handles. From my limited experience at TfD and CfD though, it does seem that the extra work needed at closure does not compensate for the extra work from needing two people reviewing the closure (especially at CfD, because a bot handles the clean-up). Consistency has come up and I would much rather consistently disallow non-admin delete closures at all XfD venues. I know it's tempting for non-admins to think they're helping by enacting these closures but it's not fair for them to be spinning their wheels. As for moving redirects, that's even messier than deleting them. There's a reason that WP:MOVEREDIRECT advises not to move redirects except for limited cases when preserving history is important. -- Tavix (talk) 20:16, 7 January 2025 (UTC)
- @Tamzin: I do have one objection to this point of redundancy, which you are quite familiar with. Here, an AfD was closed as "transwiki and delete", however, the admin who did the closure does not have the technical ability to transwiki pages to the English Wikibooks, meaning that I, who does, had to determine that the outcome was actually to transwiki rather than blindly accepting a request at b:WB:RFI. Then, I had to mark the pages for G6 deletion, that way an admin, in this case you, could determine that the page was ready to be deleted. Does this mean that that admin who closed the discussion shouldn't have closed it, since they only have the technical ability to delete, not transwiki? Could I have closed it, having the technical ability to transwiki, but not delete? Either way, someone else would have had to review it. Or, should only people who have importing rights on the target wiki and admin rights on the English Wikipedia be allowed to close discussions as "transwiki and delete"? JJPMaster (she/they) 12:04, 8 January 2025 (UTC)
- I do support being explicit when a non-administrator can close a discussion as "delete" and I think that explicitly extending to RfD and CfD is appropriate. First, there can be a backlog in both of these areas and there are often few comments in each discussion (and there is usually not the same passion as in an AfD). Second, the delete close of a non-administrator is reviewed by an administrator before action is taken to delete the link or category (a delete close is a two-step process, the writeup and the delete action, so in theory the administrator's workload is reduced). Third, non-admins do face administrator accountability for their actions, and can be subject to sanction. Fourth, the community has a role in reviewing closing decisions at DRV, so there is already a process in place to check an inexperienced editor or poor close. Finally, with many, if not most, discussions for deletion the outcome is largely straightforward. --Enos733 (talk) 20:01, 7 January 2025 (UTC)
- There is currently no rule against non-admin delete closures as far as I know; the issue is the practical one that you don't have the ability to delete. However, I have made non-admin delete closures at AfD. This occurred when an admin deleted the article under consideration (usually for COPYVIO) without closing the related AfD. The closures were not controversial and there was no DRV. Hawkeye7 (discuss) 20:31, 7 January 2025 (UTC)
- The situation you're referring to is an exception allowed per WP:NACD:
If an administrator has deleted a page (including by speedy deletion) but neglected to close the discussion, anyone with a registered account may close the discussion provided that the administrator's name and deletion summary are included in the closing rationale.
-- Tavix (talk) 20:37, 7 January 2025 (UTC)
- Bad idea to allow; this sort of closure is just busywork that imposes more work on the admin, who then has to review the arguments, close, and then delete. Graeme Bartlett (talk) 22:05, 7 January 2025 (UTC)
- Is this the same as #Non-Admin XFD Close as Delete above? Anomie⚔ 23:04, 7 January 2025 (UTC)
- Yes, User:Anomie. Same issue coming from the same DRV. Robert McClenon (talk) 03:52, 8 January 2025 (UTC)
- (1) As I've also noted in the other discussion, the deletion process guidelines at WP:NACD do say non-admins shouldn't do "delete" closures and do recognize exceptions for CfD and TfD. There isn't a current inconsistency there between guidelines and practice.
(2) In circumstances where we do allow for non-admin "delete" closures, I would hope that the implementing admin isn't fully reviewing the discussion de novo before implementing, but rather giving deference to any reasonable closure. That's how it goes with requested move closers asking for technical help implementing a "moved" closure at WP:RM/TR (as noted at WP:RMNAC, the closure will "generally be respected by the administrator (or page mover)" but can be reverted by an admin if "clearly improper"). SilverLocust 💬 08:41, 9 January 2025 (UTC)
- Comment - A couple things to note about the CFD process: It very much requires work by admins. The non-admin notes info about the close at WT:CFD/Working, and then an admin enters the info on the CFD/Working page (which is protected) so that the bot can perform the various actions. Remember that altering a category is potentially more labour intensive than merely editing or deleting a single page - every page in that category must be edited, and then the category deleted. (There are other technical things involved, like the mess that template transclusion can cause, but let's keep it simple.) So I wouldn't suggest that that process is very useful as a precedent for anything here. It was done at a time when there was a bit of a backlog at CfD, and this was a solution some found to address that. Also - since then, I think at least one of the regular non-admin closers there is now an admin. So there is that as well. - jc37 09:14, 9 January 2025 (UTC)
- If the expectation is that an admin needs to review the deletion discussion to ensure they agree with that outcome before deleting via G6, as multiple people here are suggesting, then I'm not sure this is worthwhile. However, I have had many admins delete pages I've tagged with G6, and I have been assuming that they only check that the discussion was indeed closed as delete, and trust the closer to be responsible for the correctness of it. This approach makes sense to me, because if a non-admin is competent to close and be responsible for any other outcome of a discussion, I don't see any compelling reason they can't be responsible for a delete outcome and close accordingly. —Compassionate727 (T·C) 19:51, 9 January 2025 (UTC)
- Some closers, and you're among them, have closing accuracy similar to many sysops. But the sysop can't/shouldn't "trust" that your close is accurate. Trustworthy though you are, the sysop must, at very minimum, check firstly that the close with your signature on it was actually made by you (signatures are easily copied), secondly that the close wasn't manifestly unreasonable, and thirdly that the CSD is correct. WP:DRV holds the deleting sysop responsible for checking that the CSD were correctly applied. G6 is for uncontroversial deletions, and if there's been an XFD, then it's only "uncontroversial" if the XFD was unanimous or nearly so. We do have sysops who'll G6 without checking carefully, but they shouldn't. Basically, non-admin closing XFDs doesn't save very much sysop time. I think that if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC.—S Marshall T/C 11:28, 12 January 2025 (UTC)
if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC
alternatively you should consider becoming an administrator yourself. Thryduulf (talk) 13:20, 12 January 2025 (UTC)
- If you're willing to tolerate the RFA process.—S Marshall T/C 15:24, 12 January 2025 (UTC)
- In all the cases I have dealt with, the admin's reason for deletion (usually copyvio) was completely different to the issues being debated in the AfD (usually notability). The closing statement was therefore something like "Discussion is now moot due to article being deleted for <reason> by <admin>". Hawkeye7 (discuss) 20:10, 14 January 2025 (UTC)
- I think most all the time, experienced closers will do a great job and that will save admin time because they will not have to construct and explain the close from scratch, but there will be some that are bad and that will be costly in time not just for the admin but for the project's goal of completing these issues and avoiding disruption. I think that lost time is still too costly, so I would oppose non-admin delete closes. (Now if there were a proposal for a process to make a "delete-only admin permission" that would be good -- such motivated specialists would likely be more efficient.) Alanscottwalker (talk) 16:44, 12 January 2025 (UTC)
- As I said at the "Non-Admin XFD Close as Delete" section, I support non-admins closing RfDs as Delete. If TfDs have been made an exception, RfDs can be too, especially considering RfD backlogs. Closing a heavily discussed nomination at RfD is more about the reading, analysis and thought process at arriving at the outcome, and less about the technicality of the subsequent page actions. I don't see a significant difference between non-admins closing discussions as Delete vs non-Delete. It will help making non-admins mentally prepared to advance to admin roles. Jay 💬 14:53, 14 January 2025 (UTC)
- The backlog at RFD is mostly a lack of participation, not a lack of admins making closures. This would only be exacerbated if non-admins are given a reason not to !vote on discussions trending toward deletion so they can get the opportunity to close. RFD isn't as technical as CFD and TFD. In any case, any admin doing the deletion would still have to review the RFD. Except in the most obviously trivial cases, this will lead to duplicate work, and even where it doesn't (e.g. multiple !votes all in one direction), the value-add is minimal.
Modifying the first sentence of BLPSPS
A discussion has been started at WT:BLP re: modifying the text of BLPSPS. FactOrOpinion (talk) 14:23, 13 January 2025 (UTC)
Upgrade MOS:ALBUM to an official guideline
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
Wikipedia:WikiProject_Albums/Album_article_style_advice is an essay. I've been editing since 2010, and for that entire duration this essay has been referred to and used extensively; it has even guided discussions on whether sources are reliable. I propose that it be formally upgraded to a status as an MOS guideline parallel to MOS:MUSIC.--3family6 (Talk to me | See what I have done) 14:28, 13 January 2025 (UTC)
- I'm broadly in favor of this proposal—I looked over the essay and most of it is aligned with what seems standard in album articles—but there are a few aspects that feel less aligned with current practice, which I'd want to reexamine before we move forward with promoting this:
- The section Recording, production suggests
What other works of art is this producer known for?
as one of the categories of information to include in a recording/production section. This can be appropriate in some cases (e.g., the Nevermind article discusses how Butch Vig's work with Killdozer inspired Nirvana to try and work with him), but recommending it outright seems like it'd risk encouraging people to WP:COATRACK. My preference would be to cut the sentence I quoted and the one immediately following it.
- The section Track listing suggests that the numbered list be the preferred format for track listings, with other formats like {{Track listing}} being alternative choices for "more complicated" cases. However, in my experience, using {{Track listing}} rather than a numbered list tends to be the standard. All of the formatting options currently listed in the essay should continue to be mentioned, but I think portraying {{Track listing}} as the primary style would be more reflective of current practice.
- The advice in the External links section seems partially outdated. In my experience, review aggregators like Metacritic are these days conventionally discussed in the "Critical reception" section instead, and I'm uncertain to what extent we still link to databases like Discogs even in ELs.
- (As a disclaimer, my familiarity with album articles comes mostly from popular-music genres, rock and hip-hop in particular. I don't know if typical practice is different in areas like classical or jazz.) Overall, while I dedicated most of my comment volume to critiques, these are a fairly minor set of issues in what seems like otherwise quite sound guidance. If they're addressed, it's my opinion that this essay would be ready for prime time. ModernDayTrilobite (talk • contribs) 15:19, 13 January 2025 (UTC)
- I'd agree with all of this, given my experience. The jazz and classical that I've seen is mostly the same.--3family6 (Talk to me | See what I have done) 16:57, 13 January 2025 (UTC)
- Me too, though sometime last year, I unexpectedly had some (inexplicably strong) pushback on the tracklist part from an editor or two. In my experience, using the track list template is the standard, and I can't recall anyone giving me any pushback for it, but some editors apparently prefer just using numbers. I guess we can wait and see if there's any current pushback on it. Sergecross73 msg me 17:01, 13 January 2025 (UTC)
- Was it pushback for how you had rendered the tracklist, or an existing tracklist being re-formatted by you or them?--3family6 (Talk to me | See what I have done) 18:13, 13 January 2025 (UTC)
- They came to WT:ALBUMS upset that another editor was changing track lists from "numbered" to "template" formats. My main reaction was surprise, because in my 15+ years of article creations and rewrites, I have almost exclusively used the tracklist template, and had never once received any pushback.
- So basically, I personally agree with you and MDT above, I'm merely saying I've heard someone disagree. I'll try to dig up the discussion. Sergecross73 msg me 17:50, 14 January 2025 (UTC)
- I found this one from about a year ago, though this was more about sticking to the current wording as is than it was about opposition against changing it. Not sure if there was another one or not. Sergecross73 msg me 18:14, 14 January 2025 (UTC)
- I remember one editor being strongly against the template, but they are now community banned. Everyone else I've seen so far uses the template. AstonishingTunesAdmirer 連絡 22:25, 13 January 2025 (UTC)
- I can see the numbered-list format being used for very special cases like Guitar Songs, which was released with only two songs, and had the same co-writers and producer. But I imagine we have extremely few articles that are like that, so I believe the template should be the standard. Elias 🦗🐜 [Chat, they chattin', they chat] 12:23, 14 January 2025 (UTC)
- ModernDayTrilobite, regarding linking to Discogs, some recent discussions I was in at the end of last year indicate that it is common to still link to Discogs as an EL, because it gives more exhaustive track, release history, and personnel listings than Wikipedia generally should.--3family6 (Talk to me | See what I have done) 14:14, 15 January 2025 (UTC)
- Thank you for the clarification! In that case, I've got no objection to continuing to recommend it. ModernDayTrilobite (talk • contribs) 14:37, 15 January 2025 (UTC)
- There were several discussions about Discogs and an RfC here. As a user of {{Discogs master}}, I agree with what other editors said there. We can't mention every version of an album in an article, so an external link to Discogs is invaluable IMO. AstonishingTunesAdmirer 連絡 22:34, 13 January 2025 (UTC)
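(For illustration, a minimal sketch of such an external link, assuming the template's first unnamed parameter takes the numeric Discogs master ID; the ID and title here are placeholders, not a real entry:
{{Discogs master|12345|Example Album}}
Placed under "External links", this renders as a link to the album's master release page on Discogs, which collects all of its versions.)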
- We badly need this to become part of the MOS. As it stands, some editors have rejected the guidelines as they're just guidelines, not policies, which defeats the object of having them in the first place. Popcornfud (talk) 16:59, 13 January 2025 (UTC)
- I mean, they are guidelines, but deviation per WP:IAR should be for a good reason, not just because someone feels like it.--3family6 (Talk to me | See what I have done) 18:14, 13 January 2025 (UTC)
- I am very much in favor of this becoming an official MOS guideline per User:Popcornfud above. Very useful as a template for album articles. JeffSpaceman (talk) 21:03, 13 January 2025 (UTC)
- I recently wrote my first album article and this essay was crucial during the process, to the extent that me seeing this post is like someone saying "I thought you were already an admin" in RFA; I figured this was already a guideline. I would support it becoming one. DrOrinScrivello (talk) 02:00, 14 January 2025 (UTC)
- I have always wondered why all this time these pointers were categorized as an essay. It's about time we formalize them; as said earlier, there are some outdated things that need to be discussed (like in WP:PERSONNEL which advises not to use stores for credits, even though in the streaming era we have more and more albums/EPs that never get physical releases). Also, song articles should also have their own guidelines, IMV. Elias 🦗🐜 [Chat, they chattin', they chat] 12:19, 14 January 2025 (UTC)
- I'd be in favor of discussing turning the outline at the main page for WP:WikiProject Songs into a guideline.--3family6 (Talk to me | See what I have done) 12:53, 14 January 2025 (UTC)
- I get the sense it'd have to be a separate section from this one, given the inherent complexity of album articles as opposed to that of songs. Elias 🦗🐜 [Chat, they chattin', they chat] 14:56, 14 January 2025 (UTC)
- Yes, I think it should be a separate, parallel guideline.--3family6 (Talk to me | See what I have done) 16:53, 14 January 2025 (UTC)
- I think it needs work--I recall that a former longtime album editor, Richard3120 (not pinging them, as I think they are on another break to deal with personal matters), floated a rewrite a couple of years ago. Just briefly: genres are a perennial problem, editors love unsourced exact release dates and chronology built on OR (many discography pages are sourced only to random Billboard, AllMusic, and Discogs links, rather than sources that provide a comprehensive discography), and, like others, I think all the permutations of reissue and special-edition track listings have gotten out of control, as have the long lists of non-notable personnel credits (eight second engineers, 30 backing vocalists, etc.). Also agree that the track listing template issue needs consensus; if three are acceptable, then three are acceptable--again, why change it to accommodate the names of six non-notable songwriters? There's still a divide on the issue of commercial links in the body of the article--I have yet to see a compelling reason for their inclusion (WP is, uh, not for sale, remember?), when a better source can always be found (and editors have noted, not that I've made a study of it, that iTunes often uses incorrect release dates for older albums). But I also acknowledge that since this "floated" rewrite never happened, the community at large may be satisfied with the guidelines. Caro7200 (talk) 13:45, 14 January 2025 (UTC)
- Regarding the personnel and reissue/special edition track listing, I don't know if I can dig up the discussions, but there seems to be a consensus against being exhaustive and instead to put an external link to Discogs. I fail to see how linking to Billboard or AllMusic links for a release date on discographies is OR, unless you're talking about in the lead. At least in the case of Billboard, that's an established RS (AllMusic isn't the most accurate with dates).-- 3family6 (Talk to me | See what I have done) 13:53, 14 January 2025 (UTC)
- I meant that editors often use discography pages to justify chronology, even though Billboard citations are simply supporting chart positions, Discogs only states that an album exists, and AllMusic entries most often do not give a sequential number in their reviews, etc. There is often not a source (or sources) that states that the discography is complete, categorized properly, and in order. Caro7200 (talk) 14:05, 14 January 2025 (UTC)
- Ah, okay, I understand now.--3family6 (Talk to me | See what I have done) 16:54, 14 January 2025 (UTC)
Myself, I've noticed that some of the sourcing recommendations are contrary to WP:RS guidance (more strict, actually!) or otherwise outside consensus. For instance, MOS:ALBUMS currently says not to use vendors for track list or personnel credits, linking to WP:AFFILIATE in WP:RS, but AFFILIATE actually says that such use is acceptable, just not preferred. Likewise, MOS:ALBUMS says not to use scans of liner notes, which is (1) absurd, and (2) not the actual consensus; in the discussions I've had, the consensus is that actual scans are fine (which makes sense, as a scan is a digital archived copy of the source).--3family6 (Talk to me | See what I have done) 14:05, 14 January 2025 (UTC)
- The tendency to be overreliant on liner notes is also a detriment. I've encountered some liner notes on physical releases that have missing credits (e.g. only the producers are credited and not the writers), or there are outright no notes at all. Tangentially, some physical releases of albums like Still Over It and Pink Friday 2 actually direct consumers to official websites to see the credits, which has the added problem of link rot (the credits website for Still Over It no longer works and is a permanent dead link). Elias 🦗🐜 [Chat, they chattin', they chat] 15:04, 14 January 2025 (UTC)
- That turns editors to using stores like Spotify or Apple Music as the next-best choice, but a new problem arises -- the credits for a specific song can vary depending on the site you use. One important thing we should likely discuss is what sources should take priority wrt credits. For an example of what I mean, take "No Love". Go to Spotify to check its credits and you'd find the name Sean Garrett -- head to Apple Music, however, and that name is missing. I assume these digital credits have a chance to deviate from the albums' physical liner notes as well, if there is one available. Elias 🦗🐜 [Chat, they chattin', they chat] 15:11, 14 January 2025 (UTC)
- Moreover, the credits in stores are not necessarily correct either. An example I encountered was on Tidal, an amazing service and the only place where I could find detailed credits for one album (not even the liner notes had them, since back then artists tried to avoid sample clearance). However, as I was double-checking everything, one song made no sense: in its writing credits I found "Curtis Jackson", with a link to 50 Cent's artist page. It seemed extremely unlikely that they would collaborate, nor was any of his work sampled here. Well, it turns out this song sampled a song written by Charles Jackson of The Independents. AstonishingTunesAdmirer 連絡 16:39, 14 January 2025 (UTC)
- PSA and AstonishingTunesAdmirer, I agree that it's difficult. I usually use both the physical liner notes and online streaming and retail sources to check for completeness and errors. I've also had the experience of Tidal being a great resource, and, luckily, so far I've yet to encounter an error. Perhaps advice for how to check multiple primary sources here for errors should be added to the proposed guideline.--3family6 (Talk to me | See what I have done) 17:00, 14 January 2025 (UTC)
- At this point, I am convinced as well that finding the right sources for credits should be on a case-by-case basis, with the right amount of discretion from the editor. While I was creating List of songs recorded by SZA, which included several SoundCloud songs where it was extremely hard to find songwriting credits, I found the Songview database useful for filling those missing gaps. More or less the credits there align with what's on the liner notes/digital credits. However, four issues, most of which you can see by looking at the list I started: 1) they don't necessarily align with physical liner notes either, 2) sometimes names are written differently depending on the entry, 3) there are entries where a writer (or co-writer) is unknown, and 4) some of the entries here were never officially released and confirmed as outtakes/leaks (why is "BET Awards 19 Nomination Special" here, whatever that means?). Elias 🦗🐜 [Chat, they chattin', they chat] 22:59, 14 January 2025 (UTC)
- Yeah, I've found it particularly tricky when working on technical personnel (production, engineering, mixing, etc.) and songwriting credits for individuals. I usually use the liner notes (if there are any), check AllMusic and Bandcamp, and also check Tidal if necessary. But I'll also look at Spotify, too. I know they're user-generated, so I don't cite them, but I usually look at Discogs and Genius to get an idea if I'm missing something. Thank you for pointing me to Songview, that will probably also be really helpful. 3family6 (Talk to me | See what I have done) 12:50, 15 January 2025 (UTC)
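(Where liner notes themselves do need an inline citation, the template named in the guideline text is {{Cite AV media}}; a sketch with placeholder values, assuming the standard CS1 parameter names:
{{Cite AV media |others=Example Artist |title=Example Album |type=Media notes |publisher=Example Records |year=2024}}
The |type=Media notes parameter is what marks the citation as liner notes rather than the recording itself.)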
- (@3family6, please see WP:PROPOSAL for advice on advertising discussions about promoting pages to a guideline. No, you don't have to start over. But maybe add an RFC tag or otherwise make sure that it is very widely publicized.) WhatamIdoing (talk) 23:37, 14 January 2025 (UTC)
- Thank you. I'll notify the Manual of Style people. I did already post a notice at WP:ALBUMS. I'll inform other relevant WikiProjects as well.--3family6 (Talk to me | See what I have done) 12:46, 15 January 2025 (UTC)
Before posting the RfC as suggested by WhatamIdoing, I'm proposing the following changes to the text of MOS:ALBUM as discussed above:
- Eliminate What other works of art is this producer known for? Keep the list of other works short, as the producer will likely have their own article with a more complete list. from the "Recording, production" sub-section.
- Rework the text of the "Style and form" for tracklistings to:
- The track listing should be under a primary heading named "Track listing".
- A track listing should generally be formatted with the {{Track listing}} template (a minimal example follows this list). Note, however, that the track listing template forces a numbering system, so tracks originally listed as "A", "B", etc., or with other or no designations, will not appear as such when using the template. Additionally, in the case of multi-disc/multi-sided releases, a new template may be used for each individual disc or side, if applicable.
- Alternate forms, such as a table or a numbered list, are acceptable but usually not preferred. If a table is used, it should be formatted using class="wikitable", with column headings "No.", "Title" and "Length" for the track number, the track title and the track length, respectively (see Help:Table). In special cases, such as Guitar Songs, a numbered list may be the most appropriate format.
- Move Critical reception overviews like AcclaimedMusic (using {{Acclaimed Music}}), AnyDecentMusic?, or Metacritic may be appropriate as well. from "External links" to "Album ratings templates" of "Critical reception", right before the sentence about using {{Metacritic album prose}}.
- Re-write this text from "Sourcing" under "Track listing" from However, if there is disagreement, there are other viable sources. Only provide a source for a track listing if there are exceptional circumstances, such as a dispute about the writers of a certain track. Per WP:AFFILIATE, avoid commercial sources such as online stores and streaming platforms. In the rare instances where outside citations are required, explanatory text is useful to help other editors know why the album's liner notes are insufficient. to Per WP:AFFILIATE, commercial sources such as online stores and streaming platforms are acceptable to cite for track list information, but secondary coverage in independent reliable sources is preferred if available. Similarly, in the "Personnel" section, re-write Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. In some cases, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. If you need to cite these, use {{Cite AV media}} for the liner notes and do not use third party sources such as stores (per WP:AFFILIATE) or scans uploaded to image hosting sites or Discogs.com (per WP:RS). to Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. If you need to cite the liner notes, use {{Cite AV media}}. Scans of the physical media that have been uploaded in digital form to repositories or sites such as Discogs are acceptable for verification, but cite the physical notes themselves, not the user-generated transcriptions. Frequently, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. Per WP:AFFILIATE, inline citations to e-commerce or streaming platforms to verify personnel credits are allowed. However, reliable secondary sources are preferred, if available.
- Additional guidance has been suggested for researching and verifying personnel and songwriting credits. I suggest adding It is recommended to utilize a combination of the physical liner notes (if they exist) with e-commerce sites such as Apple Music and Amazon, streaming platforms such as Spotify and Tidal, and databases such as AllMusic credits listings and Songview. Finding the correct credits requires careful, case-by-case consideration and editor discretion. If you would like assistance, you can reach out to the albums or discographies WikiProjects. The best section for this is probably in "Personnel", in the paragraph discussing that liner notes can be inaccurate.
- The excessive listing of personnel has been mentioned. I suggest adding the following to the paragraph in the "Personnel" section beginning with "The credits to an album can be extensive or sparse.": If the listing of personnel is extensive, avoid excessive, exhaustive lists, in the spirit of WP:INDISCRIMINATE. In such cases, provide an external link to Discogs and list only the major personnel.
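To illustrate proposed item 2, a minimal sketch of the {{Track listing}} format (titles, writers, and lengths are placeholders, and the writing_credits switch is assumed to enable the writers column):
{{Track listing
| writing_credits = yes
| title1 = Example Song
| writer1 = Jane Doe
| length1 = 3:45
| title2 = Another Song
| writer2 = Jane Doe, John Smith
| length2 = 4:02
| total_length = 7:47
}}
The template generates the numbering and column layout itself, which is part of why it has become the de facto standard over hand-built numbered lists.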
If you have any additional suggestions, or suggestions regarding the wording of any of the above (I personally think that item four needs to be tightened up or expressed better), please give them. I'm pinging the editors who raised issues with the essay as currently written, or were involved in discussing those issues, for their input regarding the above proposed changes. ModernDayTrilobite, PSA, Sergecross73, AstonishingTunesAdmirer, Caro7200, what do you think? Also, I realize that I never pinged Fezmar9, the author of the essay, for their thoughts on upgrading this essay to a guideline.--3family6 (Talk to me | See what I have done) 17:21, 15 January 2025 (UTC)
- The proposed edits all look good to me. I agree there's probably some room for improvement in the phrasing of #4, but in my opinion it's still clear enough as to be workable, and I haven't managed to strike upon any other phrasings I liked better for expressing its idea. If nobody else has suggestions, I'd be content to move forward with the language as currently proposed. ModernDayTrilobite (talk • contribs) 17:37, 15 January 2025 (UTC)
- It might be better to have this discussion on its talk page. That's where we usually talk about changes to a page. WhatamIdoing (talk) 17:38, 15 January 2025 (UTC)
- WhatamIdoing - just the proposed changes, or the entire discussion about elevating this essay to a guideline?--3family6 (Talk to me | See what I have done) 18:21, 15 January 2025 (UTC)
- It would be normal to have both discussions (separately) on that talk page. WhatamIdoing (talk) 18:53, 15 January 2025 (UTC)
- Okay, thank you. I started the proposal to upgrade the essay here, as it would be far more noticed by the community, but I'm happy for everything to get moved there.-- 3family6 (Talk to me | See what I have done) 19:00, 15 January 2025 (UTC)
- These changes look good to me. Although, since we got rid of Acclaimed Music in the articles, we should probably remove it here too. AstonishingTunesAdmirer 連絡 19:36, 15 January 2025 (UTC)
- Sure thing.--3family6 (Talk to me | See what I have done) 20:56, 15 January 2025 (UTC)
Reverting all edits
Hello everyone. I have an idea for the Wikipedia coders. Would it be possible for you to design an option that, with the click of a button, automatically reverts all edits of a disruptive user? This idea came to my mind because some people create disposable accounts to cause disruption with all their edits... In such cases, administrators and reverting users spend a lot of time and energy undoing all the vandalism. If there were a tool that could revert all the edits of a disruptive user with one click, it would be very helpful. If you think regular users might misuse this option, you could limit it to Wikipedia administrators only so they can quickly and easily undo the disruption. Hulu2024 (talk) 17:31, 13 January 2025 (UTC)
- Hi @Hulu2024, there's a script that does that: User:Writ Keeper/Scripts/massRollback. Also, editors who use Twinkle can single-click revert all consecutive edits of an editor. Schazjmd (talk) 17:44, 13 January 2025 (UTC)
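(For anyone curious, user scripts like that one are typically installed by adding a single line to your Special:MyPage/common.js; a sketch, so check the script's own documentation page for the exact, current instructions:
importScript('User:Writ Keeper/Scripts/massRollback.js');
As noted below, the script only functions for accounts that hold the rollback permission.)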
- Is this tool active in all the different languages of Wikipedia? I couldn't perform such an action with the tool you mentioned. Hulu2024 (talk) 17:51, 13 January 2025 (UTC)
- That script requires the Wikipedia:Rollback permission, which is available only for admins and other trusted users. Admins and other users with the tool have gotten in trouble for using it inappropriately. I never use it myself, as I find the rollback in Twinkle quite sufficient for my needs. Donald Albury 17:54, 13 January 2025 (UTC)
- (ec) I don't know about other languages. If you check the page I linked, you'll see that the script requires rollback rights. Schazjmd (talk) 17:55, 13 January 2025 (UTC)
- @Schazjmd Sorry. Can your option reverse all edits of a user across different pages with the click of a button? I think you mean that massRollback can reverse all edits on one particular wiki page, not all edits of a disruptive user across multiple pages. Or am I wrong? Hulu2024 (talk) 04:23, 14 January 2025 (UTC)
- If you want this for the Persian Wikipedia, you should probably talk to Ladsgroup. WhatamIdoing (talk) 23:41, 14 January 2025 (UTC)
- @WhatamIdoing Thank you. Hulu2024 (talk) 07:11, 15 January 2025 (UTC)
Problem with the Translate page
Hello everyone. I don't know who is in charge of coding the Translate page on Wikipedia, but I wanted to send my message to the Wikipedia coders: in the Wikipedia translation system, the information boxes for individual persons (i.e., the personal biography box; see Template:Infobox person) are not automatically translated, and it is time-consuming for Wikipedia users to manually translate and change the links one by one from English to another language. Please, could the coders come up with a solution for translating the infobox templates? Thank you. Hulu2024 (talk) 17:32, 13 January 2025 (UTC)
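(To illustrate the problem: the English {{Infobox person}} is filled in with English parameter names, e.g. a minimal sketch with placeholder values:
{{Infobox person
| name = Example Person
| birth_date = 1 January 1900
| occupation = Physicist
}}
Another language's equivalent infobox generally has a different template name and its own parameter names, which is presumably why these boxes do not carry over automatically in translation and must currently be redone by hand.)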
- Hi Hulu2024, this also applies to the section above. If your proposal only applies to the English Wikipedia then it is probably best to post it at WP:VPT in the first instance. If it is only about the Persian Wikipedia then you may wish to try there. If it is more general then you could try Meta:, or, for more formal proposals, phabricator. Phil Bridger (talk) 18:51, 13 January 2025 (UTC)
- @Phil Bridger Thank you. Hulu2024 (talk) 19:21, 13 January 2025 (UTC)
A discrimination policy
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- i quit. this will go nowhere. im extremely embarrassed and feel horrible. i dont think ill try again
ANI cases:
I would like to start this proposal by saying that this concept was proposed in 2009 and failed, for obvious reasons. But in this year, 2025, we need it, as this has happened a bunch. It's already covered under personal attacks, but I, and a couple of other Wikipedians, feel that it should be codified, as there is precedent for blocking users who discriminate. Here's a list of the things I want to include in this policy. Edit: This policy is intended to target blatant and admitted instances of discrimination. If the intent behind an action is ambiguous, users should continue to assume good faith until the intent is clear.
Just as being a member of a group does not subject one to special requirements to edit, it also does not endow any special privileges.[a] One is not absolved of discrimination against a group just because one claims to be a member of that group.
What counts as discrimination
- Race
- Disability-will define this further
- Disease
- Gender – different from sex; neurological[1][2]
- Sex – different from gender; biological[3]
- Sexuality
- Religion
- Hobbies (e.g. furry, the most often harassed hobby)
- Relationship status
- Marital status
- (I don't know how to word this, but) lack of parental presence
- Political position (will be a hot topic)
- Any form of discrimination I missed would also be included
Disability, in my view, is an umbrella term: there are mental and physical disabilities.
Examples of mental disabilities would be:
- schizophrenia
- autism
- ADHD
- PTSD
- mood disorders (depression, borderline personality disorder)
- dyslexia (or any learning disability)
Examples of physical disabilities:
- Paralysis
- Pretty much any physical injury
- I'm aware that this never really happens, but it's good to go over.
A user may not claim without evidence that another user is affected by, or is a member of, any of the above.
A user may not claim that users with these disabilities/beliefs/races/genders shouldn’t edit Wikipedia.
A user may not imply that another user is beneath them based on who that person is.
Calling people "woke" simply because they are queer is discrimination.
Also I would like to propose a condition.
Overreaction to what you think is discrimination (e.g. accidental misgendering or wrong pronouns) where the user apologizes for it is not grounds for an entry at ANI.
This should be used as a guideline.
Discrimination is defined as acts, practices, or policies that wrongfully impose a relative disadvantage or deprivation on persons based on their membership in a salient social group. This is a comparative definition. An individual need not be actually harmed in order to be discriminated against. He or she just needs to be treated worse than others for some arbitrary reason. If someone decides to donate to help orphan children, but decides to donate less, say, to children of a particular race out of a racist attitude, he or she will be acting in a discriminatory way even if he or she actually benefits the people discriminated against by donating some money to them.
- This largely seems like behavior that already is sanctionable per WP:NPA and WP:UCOC (and the adoption of the latter drew complaints at the time that it in itself was already unnecessarily redundant with existing civility policy on en.wiki). What shortcomings do you see with those existing bodies of policy in force? signed, Rosguill talk 16:45, 16 January 2025 (UTC)
- The fact that punishments should be a little more severe for users who go after a whole group of editors. As it's not just an NPA violation, it's an attack on a group •Cyberwolf•talk? 16:57, 16 January 2025 (UTC)
- NPA violations are already routinely met with blocks and sitebans, often on sight without prior warning for the level of disparagement you're describing. Do you have any recent examples on hand of cases where the community's response was insufficiently severe? signed, Rosguill talk 17:07, 16 January 2025 (UTC)
- I'll grab some. My issue is that admins can unblock without community input; it should be that after an admin block, they have to appeal to the community •Cyberwolf•talk? 17:10, 16 January 2025 (UTC)
- Noting that I've now taken the time to read through the three cases listed at the top--two of them ended in NOTHERE blocks pretty quickly--I could see someone taking issue with the community's handling of RowanElder and Jwa05002, although it does seem that the discussion ultimately resulted in an indef block for one and an apparently sincere apology from the other. signed, Rosguill talk 17:13, 16 January 2025 (UTC)
- I think the real problem is that in order to block for any reason you have to take them to a place where random editors discuss whether they are a "net positive" or "net negative" to the wiki, which in principle would be a fair way to decide, but in reality is like the work of opening an RFC just in order to get someone to stop saying random racist stuff, and it's not worth it. Besides, remember the RSP discussion where the Daily Mail couldn't be agreed to be declared unreliable on transgender topics because "being 'gender critical' is a valid opinion" according to about half the people there? I've seen comments that were blatant bigoted insults beneath a thin veneer, that people did not take to ANI because it's just not worth the huge amount of effort. There really needs to be an easy way for administrators to warn (on first violation) and then block people who harass people in discriminatory ways without a huge and exhausting-for-the-complainer "discussion" about it -- and a very clear policy that says discrimination is not OK and is always "net negative" for the encyclopedia would reduce the complexity of that discussion, and I think is an important statement to make.
- By allowing exhaustive debate over whether thinly-veiled homophobic insults towards gay people warrant banning, Wikipedia is deliberately choosing not to take a stance on the topic. A stance needs to be taken, and it needs to be clear enough to allow rapid and decisive action that makes people actually afraid to discriminate against other editors, because they know that it isn't tolerated, rather than being reasonably confident their targets won't undergo another exhausting ANI discussion. Mrfoogles (talk) 17:04, 16 January 2025 (UTC)
- Said better than I could. I agree wholeheartedly; it happens way too much •Cyberwolf•talk? 17:18, 16 January 2025 (UTC)
- I agree that a blind eye shouldn't be turned against discrimination against groups of Wikipedia editors in general, but I don't see why we need a list that doesn't include social class but includes hobbies. The determining factor for deciding whether something is discrimination should be how much choice the individual has in the matter, which seems, in practice, to be the way WP:NPA is used. Phil Bridger (talk) 17:02, 16 January 2025 (UTC)
- I agree hobbies doesn't need to be included. Haven't seen a lot of discrimination based on social class? I think this needs to be taken to the Idea Lab. Mrfoogles (talk) 17:06, 16 January 2025 (UTC)
- Sorry, this was just me spitballing. I personally have been harassed over my hobbies •Cyberwolf•talk? 17:07, 16 January 2025 (UTC)
- @cyberwolf Strong support in general (see above) but I strongly suggest you take this to the idea lab, because it's not written as a clear and exact proposal and it would probably benefit a lot from being developed into an RFC before taking it here. In the current format it probably can't pass because it doesn't make specific changes to policy. Mrfoogles (talk) 17:08, 16 January 2025 (UTC)
- Yeah, sorry, I'm new to this. I was told to come here to get the ball rolling •Cyberwolf•talk? 17:11, 16 January 2025 (UTC)
- Wait...does this mean I won't be able to discriminate against people whose hobby is editing Wikipedia? Where's the fun in that? Anonymous 17:09, 16 January 2025 (UTC)
- I guess not :3 •Cyberwolf•talk? 17:13, 16 January 2025 (UTC)
- In general, I fail to see the problem this is solving. The UCoC and other policies/guidelines/essays (such as WP:NPA, WP:FOC, and others) already prohibit discriminatory behavior. And normal conduct processes already have the ability to lay down the strictest punishment theoretically possible - an indefinite ban - for anyone who engages in such behavior.
- I do not like the idea of what amounts to bureaucracy for bureaucracy’s sake. That is the best way I can put it. At worst, this is virtue signaling - it’s waving a flag saying “hey, public and editors, Wikipedia cares about discrimination so much we made a specific policy about it” - without even saying the next part “but our existing policies already get people who discriminate against other editors banned, so this was not necessary and a waste of time”. I’ll happily admit I’m proven wrong if someone can show evidence of a case where actual discrimination was not acted upon because people were “concerned” it wasn’t violating one of those other policies. -bɜ:ʳkənhɪmez | me | talk to me! 20:56, 16 January 2025 (UTC)
- To clarify, all the comments about "why is this included" or "why is this not included" are part of the reason I'm against a specific policy like this. Any disruption can be handled by normal processes, and a specific policy will lead to wikilawyering over what is or is not discrimination. There is no need to try to define/specifically treat discrimination when all discriminatory behaviors are adequately covered by other policies already. -bɜ:ʳkənhɪmez | me | talk to me! 22:27, 16 January 2025 (UTC)
- We should be relating to other editors in a kind way. But this proposal appears to make the editing environment more hostile, with more blocking on the opinion of one person. We do discriminate against those that use Wikipedia for wrong purposes, such as vandalism or advertising. Pushing a particular point of view is more of a grey area. The proposal by cyberwolf is partly a point of view that many others would disagree with. So we should concentrate policies on how a user relates to other editors, rather than their motivations or opinions. Graeme Bartlett (talk) 20:50, 16 January 2025 (UTC)
- I think this is valuable by setting a redline for a certain sort of personal attack and saying, "this is a line nobody is permitted to cross while participating in this project." Simonm223 (talk) 20:57, 16 January 2025 (UTC)
- It is not possible for the content of a discussion to be "discriminatory". Discrimination is action, not speech. This proposal looks like an attempt to limit discourse to a certain point of view. That's not a good idea. --Trovatore (talk) 21:13, 16 January 2025 (UTC)
- Discrimination can very much be speech. Akechi The Agent Of Chaos (talk) 00:36, 17 January 2025 (UTC)
- Nope. --Trovatore (talk) 00:44, 17 January 2025 (UTC)
- Cambridge says that discrimination is: "treating a person or particular group of people differently, especially in a worse way from the way in which you treat other people, because of their race, gender, sexuality, etc".
- So yes, that includes speech because you can treat people differently in speech. Speech is an act. TarnishedPathtalk 01:04, 17 January 2025 (UTC)
- OK, look, I'll concede part of the point here. Yes, if I'm a dick to (name of group) but not to (name of other group), I suppose that is discrimination, but I don't think a discrimination policy is a particularly useful tool for this, because what I should do is not be a dick to anybody.
- What I'm concerned about is that the policy would be used to assert that certain content is discriminatory. Say someone says, here's a reliable source that says biological sex is real and has important social consequences, and someone else says, you can't bring that up, it's discriminatory. Well, no, that's a category error. That sort of thing can't be discriminatory. --Trovatore (talk) 01:29, 17 January 2025 (UTC)
- just drop it •Cyberwolf•talk? 01:23, 17 January 2025 (UTC)
- I would remove anything to do with political position. Those on the far-right should be discriminated against. TarnishedPathtalk 21:45, 16 January 2025 (UTC)
- The examples you use show that we've been dealing effectively without this additional set of guidelines; it would be more convincing that something was needed if you had examples where the lack of this policy caused bad outcomes. And I can see it being used as a hammer; while we're probably picturing "as a White man, I'm sure that I understand chemistry better than any of you lesser types" as what we're going after, I can see some folks trying to wield it against "as a Comanche raised on the Comanche nation, I think I have some insights on the Comanche language that others here are overlooking." As such, I'm cautious. -- Nat Gertler (talk) 21:49, 16 January 2025 (UTC)
- Comment. I am sorry that caste discrimination is being ignored here. Xxanthippe (talk) 21:54, 16 January 2025 (UTC).
- Not needed. Everything the proposal is talking about would constitute disruptive behavior, and we can block or ban someone for being disruptive already. No need to break disruption down into its component parts, and write rules for each. Blueboar (talk) 22:07, 16 January 2025 (UTC)
References
- ^ Professor Dave Explains (2022-06-06). Let’s All Get Past This Confusion About Trans People. Retrieved 2025-01-15 – via YouTube.
- ^ Altinay, Murat; Anand, Amit (2020-08-01). "Neuroimaging gender dysphoria: a novel psychobiological model". Brain Imaging and Behavior. 14 (4): 1281–1297. doi:10.1007/s11682-019-00121-8. ISSN 1931-7565.
- ^ Professor Dave Explains (2022-06-06). Let’s All Get Past This Confusion About Trans People. Retrieved 2025-01-15 – via YouTube.
Repeated false retirement
There is a user (who shall remain unnamed) who has "retired" twice and had the template removed from their page by other users because they were clearly still editing. They are now on their third "retirement", yet they last edited a few days ago. I don't see any policy formally prohibiting such behavior, but it seems extremely unhelpful for obvious reasons. Anonymous 17:13, 16 January 2025 (UTC)
- Unless the material is harmful to Wikipedia or other users, users have considerable leeway in what they may post on their user page. Personally, I always take "retirement" notices with a grain of salt. If a user wants to claim they are retired even though they are still actively editing, I don't see the harm to anything but their credibility. If I want to know if an editor is currently active, I look at their contributions, not at notices on their user or talk page. Donald Albury 22:07, 16 January 2025 (UTC)
- I can't imagine that this calls for a policy. You're allowed to be annoyed if you want. No one can take that away from you. But I'm missing an explanation of why the rest of us should care. --Trovatore (talk) 22:13, 16 January 2025 (UTC)
- This seems a little prickly, my friend. Clearly, the other two users who removed older retirement notices cared. At the end of the day, it's definitely not the most major thing, but it is helpful to have a reliable and simple indication as to whether or not a user can be expected to respond to any kind of communication or feedback. I'm not going to die on this hill. Cheers. Anonymous 22:41, 16 January 2025 (UTC)
- A "retirement notice" from a Wikipedia editor is approximately as credible as a "retirement notice" from a famous rock and roll band. Ignore it. Cullen328 (talk) 03:01, 20 January 2025 (UTC)
- FWIW, those two other editors were in the wrong to edit another person's user page for this kind of thing. And the retired banner does indicate: don't expect a quick response, even if I made an edit a few days or even minutes ago, as I may not be around much. Valereee (talk) 12:28, 20 January 2025 (UTC)
- There's a lot of active editors on the project, with retirement templates on their user pages. GoodDay (talk) 03:11, 20 January 2025 (UTC)
- I think it's kind of rude to edit someone else's user page unless there is an extreme reason, like reversing vandalism or something. On Wikipedia:User pages I don't see anything about retirement templates, but I do see it say "In general, one should avoid substantially editing another's user and user talk pages, except when it is likely edits are expected and/or will be helpful. If unsure, ask." If someone wants to identify as retired but sometimes drop by and edit, that doesn't seem to hurt anything. GeogSage (⚔Chat?⚔) 03:56, 20 January 2025 (UTC)
- Wikipedia is WP:NOTCOMPULSORY, so even a "non-retired" editor might never edit again. And if someone is "retired" but still constructively edits, just consider that a bonus. What's more problematic is a petulant editor who "retires", but returns and edits disruptively; in such case, it's their disruptive behavior that would be the issue, not a trivial retirement notice. —Bagumba (talk) 07:42, 20 January 2025 (UTC)
- As far as Wikipedia is concerned it's just another userbox you can put on your userpage. We only remove userboxes and userspace material if they're claiming to have a right that they don't (ie. a user with an Administrator toolbox who isn't an admin). Retirement is not an official term defined in policy anywhere, and being retired confers no special status. Pinguinn 🐧 11:13, 20 January 2025 (UTC)
- If you see a retirement template that seems to be false you could post a message on the user talk page to ask if they are really retired. I suppose it could be just a tiny bit disruptive if we cannot believe such templates, but nowhere near enough to warrant sanctions or a change in policy. Phil Bridger (talk) 13:39, 20 January 2025 (UTC)
What is the purpose of banning?
In thinking about a recent banned user's request to be unblocked, I've been reading WP:Blocking policy and WP:Banning policy trying to better understand the differences. In particular, I'm trying to better understand what criteria should be applied when deciding whether to end a sanction.
One thing that struck me is that for blocks, we explicitly say "Blocks are used to prevent damage or disruption to Wikipedia, not to punish users". The implication is that a user should be unblocked if we're convinced they no longer present a threat of damage or disruption. No such statement exists for bans, which implies that bans may be a form of punishment. If that's the case, then the criteria should not just be "we think they'll behave themselves now", but "we think they've endured sufficiently onerous punishment to atone for their misbehavior", which is a fundamentally different thing.
I'm curious how other people feel about this. RoySmith (talk) 16:15, 20 January 2025 (UTC)
- My understanding (feel free to correct me if I am wrong) is that blocks are made by individual admins, and may be lifted by an admin (noting that CU blocks should only be lifted after clearance by a CU), while bans are imposed by ARBCOM or the community and require ARBCOM or community discussion to lift. Whether block or ban, a restriction on editing should only be imposed when it is the opinion of the admin, or ARBCOM, or the community, that such restriction is necessary to protect the encyclopedia from further harm or disruption. I think bans carry the implication that there is less chance that the banned editor will be able to successfully return to editing than is the case for blocked editors, but that is not a punishment, it is a determination of what is needed to protect WP in the future. Donald Albury 16:44, 20 January 2025 (UTC)
- Good question. I'm interested in what those who engage in ban evasion think about current policies: people who have created multiple accounts, been processed at SPI multiple times, and made substantial numbers of edits, the majority of which are usually preserved by the community in practice for complicated reasons (a form of reward, in my view; the community sends ban-evading actors very mixed messages). What's their perspective on blocks and bans and how to reduce evasion? It is not easy to get this kind of information, unfortunately, as people who evade bans and blocks are not very chatty, it seems. But I have a little bit of data from one source for interest, Irtapil. Here are a couple of views from the other side.
- On socking - "automatic second chance after first offense with a 2 week ban / block, needs to be easier than making a third one so people don't get stuck in the loop"
- On encouraging better conduct - "they need to gently restrict people, not shun and obliterate"
- No comment on the merits of these views, or whether punishment is what is actually happening, or is required, or effective, but it seems clear that it is likely to be perceived as punishment and counterproductive (perhaps unsurprisingly) by some affected parties. Sean.hoyland (talk) 17:31, 20 January 2025 (UTC)
- Blocks are a sanction authorized by the community to be placed by administrators on their own initiative, for specific violations as described by a policy, guideline, or arbitration remedy (in which case the community authorization is via the delegated authority to the arbitration committee). Blocks can also be placed to enforce an editing restriction. A ban is an editing restriction. As described on the banning policy page, it is a
formal prohibition from editing some or all pages on the English Wikipedia, or a formal prohibition from making certain types of edits on Wikipedia pages. Bans can be imposed for a specified or an indefinite duration.
Aside from cases where the community has delegated authority to admins to enact bans on their own initiative, either through community authorization of discretionary sanctions, or arbitration committee designated contentious topics, editing restrictions are authorized through community discussion. They cover cases where there isn't a single specific violation for which blocking is authorized by guidance/arbitration remedy, and so a pattern of behaviour and the specific circumstances of the situation have to be discussed and a community consensus established. - Historically, removing blocks and bans require a consensus from the authorizing party that removing it will be beneficial to the project. Generally, the community doesn't like to impose editing restrictions when there is promise for improved behaviour, so they're enacted for more severe cases of poor behaviour. Thus it's not unusual that the community is somewhat skeptical about lifting recently enacted restrictions (where "recent" can vary based on the degree of poor behaviour and the views of each community member). Personally I don't think this means an atonement period should be mandated. isaacl (talk) 18:33, 20 January 2025 (UTC)
- I think that a block is a preventive measure, whereas a ban is where the community's reached a consensus to uninvite a particular person from the site. Wikipedia is the site that anyone can edit, except for a few people we've decided we can't or won't work with. A ban is imposed by a sysop on behalf of the community whereas a block is imposed on their own authority.—S Marshall T/C 19:39, 20 January 2025 (UTC)
- A ban does not always stop you from editing Wikipedia. It may prohibit you from editing in a certain topic area (BLP for example or policies) but you can still edit other areas. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 00:24, 23 January 2025 (UTC)
- Seems to be addressed in WP:BMB, which explains that the criteria are not dependent upon an editor merely behaving with what appears to be "good or good-faith edits". A ban is based on a persistent or long-term pattern of editing behavior that demonstrates a significant risk of "disruption, issues, or harm" to the area from which they are banned, despite any number of positive contributions said editor has made or is willing to make moving forward. As such, it naturally requires a higher degree of review (i.e. a form of community consensus) to be imposed or removed (though many simply expire upon a pre-determined expiration date without review). While some may interpret bans as a form of punishment, they are still a preventative measure at their core. At least that's my understanding. --GoneIn60 (talk) 12:59, 21 January 2025 (UTC)
Contacting/discussing organizations that fund Wikipedia editing
I have seen it asserted that contacting another editor's employer is always harassment and therefore grounds for an indefinite block without warning. I absolutely get why we take it seriously and 99% of the time this norm makes sense. (I'm using the term "norm" because I haven't seen it explicitly written in policy.)
In some cases there is a conflict between this norm and the ways in which we handle disruptive editing that is funded by organizations. There are many types of organizations that fund disruptive editing - paid editing consultants, corporations promoting themselves, and state propaganda departments, to name a few. Sometimes the disruption is borderline or unintentional. There have been, for instance, WMF-affiliated outreach projects that resulted in copyright violations or other crap being added to articles.
We regularly talk on-wiki and off-wiki about organizations that fund Wikipedia editing. Sometimes there is consensus that the organization should either stop funding Wikipedia editing or should significantly change the way they're going about it. Sometimes the WMF legal team sends cease-and-desist letters.
Now here's the rub: Some of these organizations employ Wikipedia editors. If a view is expressed that the organizations should stop the disruptive editing, it is foreseeable that an editor will lose a source of income. Is it harassment for an editor to say "Organization X should stop/modify what it's doing to Wikipedia?" at AN/I? Of course not. Is it harassment for an editor to express the same view in a social media post? I doubt we would see it that way unless it names a specific editor.
Yet we've got this norm that we absolutely must not contact any organization that pays a Wikipedia editor, because this is a violation of the harassment policy. Where this leads is a bizarre situation in which we are allowed to discuss our beef with a particular organization on AN/I but nobody is allowed to email the organization even to say, "Hey, we're having a public discussion about you."
I propose that if an organization is reasonably suspected to be funding Wikipedia editing, contacting the organization should not in and of itself be considered harassment. I ask that in this discussion, we not refer to real cases of alleged harassment, both to avoid bias-inducing emotional baggage and to prevent distress to those involved. Clayoquot (talk | contribs) 03:29, 22 January 2025 (UTC)
- If it's needful to contact an organisation about one of their employees' edits, Trust and Safety should do that. Not volunteers.—S Marshall T/C 09:21, 22 January 2025 (UTC)
- Let's say Acme Corporation has been spamming Wikipedia. If you post on Twitter "Acme has been spamming Wikipedia" is that harassment? How about if you write "@Acme has been spamming Wikipedia?" Should only Trust and Safety be allowed to add the @ sign? Clayoquot (talk | contribs) 15:43, 22 January 2025 (UTC)
- What you post on Twitter isn't something Wikipedia can control. But contacting another editor's employer about that editor's edits has a dark history on Wikipedia.—S Marshall T/C 15:49, 22 January 2025 (UTC)
- The history is dark indeed. What I'm pointing out is that writing "@Acme has been spamming Wikipedia" on Twitter is contacting another editor's employer. Should you be indef blocked without warning for doing that? Clayoquot (talk | contribs) 15:56, 22 January 2025 (UTC)
- You want an "in principle" discussion without talking about specific cases, so the only way I can answer that is to say: Not always, but depending on the surrounding circumstances, possibly.—S Marshall T/C 16:11, 22 January 2025 (UTC)
- I agree. You said it better than I did. Clayoquot (talk | contribs) 18:56, 22 January 2025 (UTC)
- You want an "in principle" discussion without talking about specific cases, so the only way I can answer that is to say: Not always, but depending on the surrounding circumstances, possibly.—S Marshall T/C 16:11, 22 January 2025 (UTC)
- The history is dark indeed. What I'm pointing out is that writing "@Acme has been spamming Wikipedia" on Twitter is contacting another editor's employer. Should you be indef blocked without warning for doing that? Clayoquot (talk | contribs) 15:56, 22 January 2025 (UTC)
- What you post on Twitter isn't something Wikipedia can control. But contacting another editor's employer about that editor's edits has a dark history on Wikipedia.—S Marshall T/C 15:49, 22 January 2025 (UTC)
- Let's say Acme Corporation has been spamming Wikipedia. If you post on Twitter "Acme has been spamming Wikipedia" is that harassment? How about if you write "@Acme has been spamming Wikipedia?" Should only Trust and Safety be allowed to add the @ sign? Clayoquot (talk | contribs) 15:43, 22 January 2025 (UTC)
Another issue is that doing that can sometimes place another link or two in a wp:outing chain, and IMO avoiding that is of immense importance. The way that you posed the question, with the very high bar of "always", is probably not the most useful for the discussion. Also, a case like this almost always involves a concern about a particular editor or centers around edits made by a particular editor, which I think is a non-typical omission from your hypothetical example. Sincerely, North8000 (talk) 19:41, 22 January 2025 (UTC)
- I'm not sure what you mean by placing a link in an outing chain. Can you explain this further? I used the very high bar of "always" because I have seen admins refer to it as an "always" or a "bright line" and this shuts down the conversation. Changing the norm from "is always harassment" to "is usually harassment" is exactly what I'm trying to do.
- Organizations that fund disruptive editing often hire just one person to do it but I've also seen plenty of initiatives that involve money being distributed widely, sometimes in the form of giving perks to volunteers. If the organization is represented by only one editor then there is obviously a stronger argument that contacting the organization constitutes harassment. Clayoquot (talk | contribs) 06:44, 23 January 2025 (UTC)
General reliability discussions have failed at reducing discussion, have become locus of conflict with external parties, and should be curtailed
The original WP:DAILYMAIL discussion, which set off these general reliability discussions in 2017, was supposed to reduce discussion about it, something which it obviously failed to do since we have had more than 20 different discussions about its reliability since then. Generally speaking, a review of WP:RSNP does not support the idea that general reliability discussions have reduced discussion about the reliability of sources either. Instead, we see that we have repeated discussions about the reliability of sources, even where their reliability was never seriously questioned. We have had a grand total of 22 separate discussions about the reliability of the BBC, for example, 10 of which have been held since 2018. We have repeated discussions about sources that are cited in relatively few articles (e.g., Jacobin).
Moreover, these discussions spark unnecessary conflict with parties off-wiki that harms the reputation of the project. Most recently we have had an unnecessary conflict with the Anti-Defamation League sparked by a general reliability discussion about them, but the original Daily Mail discussion did this also. In neither case was usage of the source generally a problem on Wikipedia in any way that has been lessened by their deprecation - they were neither widely used, nor permitted to be used in a way that was problematic under existing policy on using reliable sources.
There is also some evidence, particularly from WP:PIA5, that some editors have sought to "claim scalps" by getting sources they are opposed to on ideological grounds 'banned' from Wikipedia. Comments in such discussions are often heavily influenced by people's impression of the bias of the source.
I think at the very least we need a WP:BEFORE-like requirement for these discussions, where the editors bringing the discussion have to show that the source is one whose reliability has serious consequences for content on Wikipedia, and that they have tried to resolve the matter in other ways. The recent discussion about Jacobin, triggered simply by a comment by a Jacobin writer on Reddit, would be an example of a discussion that would be stopped by such a requirement. FOARP (talk) 15:54, 22 January 2025 (UTC)
- The purpose of this proposal is to reduce discussion of sources. I feel that evaluating the reliability of sources is the single most important thing that we as a community can do, and I don't want to reduce the amount of discussion about sources. So I would object to this.—S Marshall T/C 16:36, 22 January 2025 (UTC)
- I don't think it's meant to reduce discussion but instead to start more discussions at a more appropriate level than at VPP or RSP. Starting the discussion at the VPP/RSP level means you are trying to get all editors involved, which for most cases isn't really appropriate (e.g. one editor has a beef about a source and brings it to wide discussion before getting other input first). FOARP is right that when these discussions are first opened at VPP or RSP without prior attempts to resolve elsewhere, it is a wear on the process. — Masem (t) 16:55, 22 January 2025 (UTC)
- Oh, well that makes more sense. We could expand WP:RFCBEFORE to cover WP:RSP?—S Marshall T/C 17:06, 22 January 2025 (UTC)
- Basically this. I favour something for RSP along the lines of WP:BEFORE/WP:RFCBEFORE, an WP:RSPBEFORE if you will. FOARP (talk) 21:50, 22 January 2025 (UTC)
- Yeah I would support anything to reduce the constant attempts to kill sources at RSN. It has become one of the busiest pages on all of Wikipedia, maybe even surpassing ANI. -- GreenC 19:36, 22 January 2025 (UTC)
- Oddly enough, I am wondering why this discussion is here, and not at Wikipedia talk:Reliable sources/Noticeboard, as it now seems to be a process discussion (more BEFORE) for RSN? Alanscottwalker (talk) 22:41, 22 January 2025 (UTC)
- Some confusion about pages here, with some mentions of RSP actually referring to RSN. RSN is a type of "before" for RSP, and RSP is intended as a summary of repeated RSN discussions. One purpose of RSP is to put a lid on discussion of sources that have appeared at RSN too many times. This isn't always successful, but I don't see a proposal here to alleviate that. Few discussions are started at RSP; they are started at RSN and may or may not result in a listing or a change at RSP. Also, many of the sources listed at RSP got there due to a formal RfC at RSN, so they were already subject to RFCBEFORE (not always obeyed). I'm wondering how many listings at RSN are created due to an unresolved discussion on an article talk page—I predict it is quite a lot. Zerotalk 04:40, 23 January 2025 (UTC)
- “Not always obeyed” is putting it mildly. FOARP (talk) 06:47, 23 January 2025 (UTC)
Primary sources vs Secondary sources
The discussion above has spiralled out of control, and needs clarification. The discussion revolves around how to count episodes for TV series when a traditionally shorter episode (e.g., 30 minutes) is broadcast as a longer special (e.g., 60 minutes). The main point of contention is whether such episodes should count as one episode (since they aired as a single entity) or two episodes (reflecting production codes and industry norms).
The simple question is: when primary sources and secondary sources conflict, which do we use on Wikipedia?
- The contentious article behind this discussion is at List of Good Luck Charlie episodes, in which Deadline, TVLine and The Futon Critic all state that the series has 100 episodes; this article from TFC, which is a direct copy of the press release from Disney Channel, also states that the series has "100 half-hour episodes".
- The article has 97 episodes listed; the discrepancy comes from three particular episodes that are each an hour long (in a traditionally half-hour slot). These episodes receive two production codes, indicating two episodes, but each aired as one singular, continuous release. An editor argues that the definition of an episode means that these count as a single episode, and stands by these episodes being the important primary sources.
- The discussion above discusses what an episode is. Should these be considered one episode (per the primary source of the episode), or two episodes (per the secondary sources provided)? This is where the primary conflict is.
- Multiple editors have stated that the secondary sources refer to the production of the episodes, despite the secondary sources not using this word in any format, and that the primary sources therefore override the "incorrect" information of the secondary sources. Some editors have argued that there are 97 episodes, because that's what's listed in the article.
- WP:CALC has been cited;
Routine calculations do not count as original research, provided there is consensus among editors that the results of the calculations are correct, and a meaningful reflection of the sources
. An editor argues that there is not the required consensus. WP:VPT was also cited.
Another example was provided at Abbott Elementary season 3#ep36.
- The same editor arguing for the importance of the primary source stated that he would have listed this as one episode, despite a reliable source[4] stating that there are 14 episodes in the season.
- WP:PSTS has been quoted multiple times:
Wikipedia articles usually rely on material from reliable secondary sources. Articles may make an analytic, evaluative, interpretive, or synthetic claim only if it has been published by a reliable secondary source.
While a primary source is generally the best source for its own contents, even over a summary of the primary source elsewhere, do not put undue weight on its contents.
Do not analyze, evaluate, interpret, or synthesize material found in a primary source yourself; instead, refer to reliable secondary sources that do so.
- Other quotes from the editors arguing for the importance of primary over secondary includes:
When a secondary source conflicts with a primary source we have an issue to be explained but when the primary source is something like the episodes themselves and what is in them and there is a conflict, we should go with the primary source.
We shouldn't be doing "is considered to be"s, we should be documenting what actually happened as shown by sources, the primary authoritative sources overriding conflicting secondary sources.
Yep, secondary sources are not perfect and when they conflict with authoritative primary sources such as released films and TV episodes we should go with what is in that primary source.
Having summarized this discussion, the question remains: when primary sources and secondary sources conflict, which do we use on Wikipedia?
- Primary, as the episodes are authoritative for factual information, such as runtime and presentation?
- Or secondary, which guide Wikipedia's content over primary interpretations?
-- Alex_21 TALK 22:22, 23 January 2025 (UTC)
Technical
What happened to Geohack?
Today, upon clicking the {{coords}} template (example), I got a 404. Maybe this is a temporary problem, but given the use of the coords feature it's fairly impactful. JayCubby 16:04, 17 January 2025 (UTC)
- It's down, and it isn't maintained by volunteers that are active on-wiki. The last RFC to move away from it didn't pass (c.f. Wikipedia:Village_pump_(proposals)/Archive_202#h-RfC:_Updating_Template:Coord_to_use_Kartographer-20230510062200 and Template_talk:Coord/Archive_14#Switching_to_Kartographer ). — xaosflux Talk 18:13, 17 January 2025 (UTC)
- One of the maintainers, Magnus Manske, is still active on wikidatawiki, I've pinged them to this report there. — xaosflux Talk 18:19, 17 January 2025 (UTC)
- Click the globe icon instead of the coordinates for a map in Kartographer for now. — xaosflux Talk 18:15, 17 January 2025 (UTC)
- Now working as intended. --Redrose64 🌹 (talk) 17:15, 18 January 2025 (UTC)
- I'm intermittently getting unreachable errors. Not 100% sure it's resolved. JayCubby 03:01, 22 January 2025 (UTC)
Heading in history view
The following edits [5] and [6] show a different heading (corresponding to the section being edited) in the edit summary than edits [7] (which was made using the convenient discussions tool) and [8] (which was made using the reply tool). When navigating from the history view, clicking on the heading in the edit summary for the first two edits results in a popup saying This topic could not be found. It might have been deleted, moved or renamed.
I made my edit using the default wikitext editor. Does anyone know why it would produce an incorrect heading in the edit summary? isaacl (talk) 19:34, 17 January 2025 (UTC)
- @Isaacl there are problems in jumping to the correct section when the section heading contains links, either [[ ]] or {{ }}. Nthep (talk) 19:39, 17 January 2025 (UTC)
- Sure; just wondering why the behaviour is inconsistent with the reply tool and the default wikitext editor (I would have thought the same code would be used to generate the heading for both use cases, but I guess not). isaacl (talk) 19:48, 17 January 2025 (UTC)
- @Isaacl: Your post was confusing because your third link was the same as the second and you didn't clarify what was supposed to be different. The wikitext of the actual heading says Dark mode and {{tl|Yes}}, which renders as "Dark mode and {{Yes}}" without tl being displayed. Your second link [9] uses the wikitext with tl in the edit summary and fails to link to the section. Your third link should have been [10], where the edit summary uses the rendering without tl and links correctly to the section #Dark mode and {{Yes}}. Different discussion features apparently use different ways to generate the automatic section edit summary and one of them works better in this case. phab:T69068 from 2014 is about the issue. Wikipedia:Manual of Style#Section headings (which doesn't apply to project space) says "For technical reasons, section headings should: ... Not contain template transclusions." PrimeHunter (talk) 20:46, 17 January 2025 (UTC)
- My apologies for the copy and paste mistake for the links. Yes, obviously the edit summaries and underlying link text are being generated in different ways. I was wondering if it is a visual editor vs default wikitext editor difference, or something else? And if it was fixed for visual editor, was there an issue in following the same approach for the wikitext editor (maybe the fix was just partial, or not sufficiently resilient?). But I'm not asking for anyone to do any deep research on it. If someone knows off the top of their head, it would be nice to know. Thanks for the Phabricator link; it helped provide some context. (I know about the style recommendation for section headings; thanks for the reference.) isaacl (talk) 23:31, 17 January 2025 (UTC)
- Basically, this happens because the wikitext editor generates the edit summary directly from the wikitext of the heading, while the visual editor generates it from the parsed HTML of the page. The HTML contains the id attribute needed to make the correct link, but generating the correct link from the wikitext would require parsing it to HTML first, and most of the tools don't bother to do that.
- The same applies to other wikitext-based editing tools and other HTML-based editing tools. There are more tasks in Phabricator about this; T234982 is a good summary and has even more links.
- The only wikitext-based tool I know that does this better is DiscussionTools's new topic tool's wikitext mode, where we solved it as a side-effect of T338390 – we needed to parse the HTML for some other reasons, and once that was implemented, adding a bit of code to read the id attribute out of it was easy. In principle the same approach could be used in other editors, but it is tricky to get the data from point A to point B, especially without affecting performance, and no one has put in the effort to do it yet. Matma Rex talk 23:23, 18 January 2025 (UTC)
- Thanks for the explanation! isaacl (talk) 23:33, 18 January 2025 (UTC)
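For anyone wanting to experiment with the approach described above, here is a rough, hypothetical sketch (the real DiscussionTools code is PHP/JavaScript, not this): the action=parse API exposes each heading's rendered anchor, which is exactly the value a correct edit-summary section link needs.

```python
import requests

# Illustrative only: fetch the parsed section list for a page and map each
# rendered heading ("line") to the HTML anchor ("anchor") used in links.
API = "https://en.wikipedia.org/w/api.php"

def section_anchors(page):
    r = requests.get(API, params={
        "action": "parse",
        "page": page,
        "prop": "sections",
        "format": "json",
        "formatversion": 2,
    })
    r.raise_for_status()
    # A tool that has this data can build a section link that works even
    # when the heading contains templates or other wiki markup.
    return {s["line"]: s["anchor"] for s in r.json()["parse"]["sections"]}
```

This costs an extra parse request per edit, which is the "tricky to do without affecting performance" part mentioned above.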
Why is this image acting so odd?
File:Dr. Seuss WikiWorld has removed fishbowl.png, floated in this section, doesn't give a thumbnail. In this older version of "cartoon" it turned into a page-wide hyperlink to the image page. When I click on the 197 × 240 pixels link, I see "Unauthorized This server could not verify that you are authorized to access the document you requested." What's going on there? Rjjiii (talk) 16:54, 18 January 2025 (UTC)
- This looks like a recurrence of phab:T383023. --Redrose64 🌹 (talk) 17:16, 18 January 2025 (UTC)
- Thanks for explaining and for reporting the bug, Rjjiii (talk) 03:31, 19 January 2025 (UTC)
Page mover SVG broken?
Is it just me or is File:Wikipedia page mover.svg somewhat broken? I'm getting "Sorry, the file cannot be displayed There seems to be a technical issue. You can retry if it persists. Error: could not load image from https://upload.wikimedia.org/wikipedia/commons/thumb/4/4b/Wikipedia_page_mover.svg/1024px-Wikipedia_page_mover.svg.png" when clicking the image on Wikipedia:Page mover. I have tried on Firefox, Chrome, Edge, iOS Safari with or without safemode, all yield the same results. However, clicking the original file doesn't generate the same error. — Paper9oll (🔔 • 📝) 13:36, 19 January 2025 (UTC)
- See #Why is this image acting so odd? above. – SD0001 (talk) 15:27, 19 January 2025 (UTC)
Gadget proposal
We currently have a gadget that makes disambiguation links orange, which makes correcting said links much easier. Would it be feasible to create something similar for redlinks to articles that have previously been deleted? For instance, let's say I'm writing an article on an academic named Joe Bloggs, who published a significant work cowritten by Joe Public. I believe Joe Public is notable, but he does not currently have a Wikipedia article, so I create a redlink. However, I failed to check the page's deletion log (!!), which shows that an article on Joe Public did once exist, but it was deleted after its subject was found to lack sufficient independent coverage. Now imagine if I had a gadget that made that redlink purple (or pink, or maroon, or black; I'm not picky), so I would know at a glance to not bother to create a link for a person who has already been determined to not meet notability criteria. It would also make it easier to spot and correct such links while looking through other articles. Much like with the existing gadget I mentioned, this is, of course, still a process that can be done manually, but a gadget would make it much more efficient. Anonymous 19:22, 18 January 2025 (UTC)
- @An anonymous username, not my real name: MediaWiki adds the class mw-disambig to links to disambiguation pages like St. Mary's Church. This means the gadget only has to say links with that class should be orange. The entire code of the gadget is one line in MediaWiki:Gadget-DisambiguationLinks.css and it's client-side with no impact on the servers. MediaWiki does not add a class to red links with a deletion log like Corruption in Wales. A gadget would have to make an API call to the servers for each red link on a page to check for deletion logs. I don't think that's worth the server load even if somebody would make the non-trivial code. PrimeHunter (talk) 20:33, 18 January 2025 (UTC)
- That's fair. Thank you for taking the time to explain. Anonymous 20:57, 18 January 2025 (UTC)
- The script, as noted, only has to hit the servers for redlinks. More broadly, don't worry about performance; that's the server admins' job. If it became a problem they would alert us. There is also a lot of caching in user agents as well as the WMF servers, and this is only doing reads, so it hits the caches. --Slowking Man (talk) 01:34, 19 January 2025 (UTC)
- You probably want to raise this as a feature request for User:Anomie/linkclassifier instead. – SD0001 (talk) 10:39, 19 January 2025 (UTC)
- I don't know that I'd implement such a request. Most of what linkclassifier does is based on categories (a little is based on page props). To do this, it'd have to query the logs for each page, which is a whole different thing. Anomie⚔ 15:04, 19 January 2025 (UTC)
- Just curious, what gadget makes dab links orange? I have the one that makes redirects green, dab page links have a yellow background, etc... - The Bushranger One ping only 23:52, 19 January 2025 (UTC)
- @The Bushranger: It's "Display links to disambiguation pages in orange" at Special:Preferences#mw-prefsection-gadgets. The feature you describe is not a gadget but a user script you load in User:The Bushranger/monobook.js. PrimeHunter (talk) 00:06, 20 January 2025 (UTC)
- Just because a page has previously been deleted, you can't assume that the article you were going to create would fail our notability criteria. Notability is far from the only deletion criterion, and especially if you are creating articles on people, you can't always assume that the person you were going to write about is the same person as the adolescent pro skateboarder whose article was deleted fifteen years ago. They may just have the same name. That said, some sort of colour coding or pop-up that alerted you to there being a previous article of that name and the reason and recency of deletion might be helpful. New page patrol has a recently deleted colour which usually indicates that someone is repeatedly trying to create a particular article. ϢereSpielChequers 06:59, 20 January 2025 (UTC)
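To make the trade-off discussed above concrete, here is a hedged sketch of the lookup such a gadget would need to run for every red link on a page. The function is invented for illustration, but action=query with list=logevents, letype=delete and letitle are the real MediaWiki API parameters.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def was_deleted(title):
    """Illustrative sketch: True if a deletion log entry exists for this title."""
    r = requests.get(API, params={
        "action": "query",
        "list": "logevents",
        "letype": "delete",
        "letitle": title,
        "lelimit": 1,
        "format": "json",
        "formatversion": 2,
    })
    r.raise_for_status()
    return bool(r.json()["query"]["logevents"])

# One HTTP round-trip per red link is the server-load concern raised above;
# contrast with the disambiguation gadget, which is a single client-side CSS rule.
```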
Do certain configuration templates need to be in the first n bytes of a page?
I have a vague recollection that certain templates need to be in the first n bytes of a page. I'm thinking of templates like these:
- {{CS1 config}}
- {{use dmy dates}}
- {{Use British English}}
- {{User:MiszaBot/config}}
- {{Italic title}}
- {{DISPLAYTITLE}}
I can't find anything about this in searches of documentation here or at mediawikiwiki: It looks to me like Module:Citation/CS1/Configuration searches the entire page contents. Do other bots or scripts care? Daask (talk) 19:23, 18 January 2025 (UTC)
- Moving configs that CS1 relies on somewhere else is likely to cause a non-zero increase in the Lua execution time associated with a page. (These metadata are incidentally good candidates to move to something like mediawikiwiki:MCR since they definitely don't need to participate in transclusion and are otherwise pretty simple settings.)
- Title templates are there because they modify the title though you could theoretically move them.
- Archiving template is there because it would otherwise get lost by archiving of threads + addition of new threads. Izno (talk) 20:31, 18 January 2025 (UTC)
- AFAIK the only one that is position-critical is {{User:MiszaBot/config}}, which must be before the first section heading (of any level), i.e. in the lead section. This is to guard against it being accidentally moved to an archive, which might happen if it were placed inside a section (or subsection) which became archived. It's possible that {{CS1 config}} might need to be before the first WP:CS1/WP:CS2 template, but not if the relevant JavaScript function(s) has been written carefully. The others are definitely position-independent, but do have conventional positions, summarised at WP:LEADORDER. --Redrose64 🌹 (talk) 22:09, 18 January 2025 (UTC)
- Module:Citation/CS1/Configuration reads article wikitext looking for {{CS1 config}}, {{use dmy dates}}, and {{use mdy dates}} (and any of their redirects). Of course, the earlier these appear in the wikitext, the less work the module needs to do. But if none of them appear in the wikitext, the module must still scan all of the wikitext to be sure that none of them exist, so placement really doesn't matter. Scanning for the {{use xxx dates}} templates could be made faster by eliminating some of the several redirects, but that suggestion has already been dismissed (permalink).
- —Trappist the monk (talk) 22:39, 18 January 2025 (UTC)
Mouse-over popups and redirects
I've enabled the gadget that pops up a micro-summary of an article whenever I mouse over a link to it. Unfortunately, it's not working properly with redirects. For example, if I visit Serial comma#Mainly British style guides opposing typical use, I'm given the following text: I dedicate this book to my parents, Martin Amis, and JK Rowling. If I mouse over the first link, I get a picture of Amis and this text:
Martin Amis ⋅ actions ⋅ popups
108.1kB, 369 wikiLinks, 3 images, 61 categories, 2 weeks 2 days old, Q310176
Sir Martin Louis Amis (25 August 1949 – 19 May 2023) was an English novelist, essayist, memoirist, screenwriter and critic. He is best known for his novels Money (1984) and London Fields (1989). He received the James Tait Black Memorial Prize for his memoir Experience and was twice listed for the Booker Prize (shortlisted in 1991 for Time's Arrow and longlisted in 2003 for Yellow Dog).
However, if I mouse over the second link, I get this text:
JK Rowling ⋅ actions ⋅ popups
Redirects to
J. K. Rowling ⋅ actions
Is there a way to change this, so that the popup shows the target of the redirect (as if the link went to the target), rather than the redirect itself? I can't imagine a reason why we should care whether it's an article or a redirect. The documentation suggests that identifying pages as redirects helps people fix them, but You probably don't want to "fix" such links every time you come across them, and WP:NOTBROKEN actively prohibits changing those redirects without some alternate reason, e.g. it's fine to replace "JK Rowling" with "J. K. Rowling" if we want the full stops and space to appear in the article, but not good to edit the article just to change [[JK Rowling]] to [[J. K. Rowling|JK Rowling]]. If there are any legitimate uses for distinguishing redirects from articles with this tool, that's different, but as far as I can see, it merely gets in the way of using this tool. Nyttend (talk) 22:12, 19 January 2025 (UTC)
- @Nyttend: The first time I hover over a redirect like JK Rowling after loading or reloading a page, I see text from the target below the text you quoted. If I come back to hover over the same link, I only see what you quoted. PrimeHunter (talk) 23:58, 19 January 2025 (UTC)
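For what it's worth, the underlying data is easy to get in a single request: the query API's redirects flag resolves a redirect server-side and reports the mapping, so a popup-style tool could show the target's summary directly. A hedged sketch (this is not the gadget's actual code; prop=extracts assumes the TextExtracts extension, which English Wikipedia has deployed):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

r = requests.get(API, params={
    "action": "query",
    "titles": "JK Rowling",
    "redirects": 1,            # follow the redirect server-side
    "prop": "extracts",
    "exintro": 1,
    "explaintext": 1,
    "format": "json",
    "formatversion": 2,
})
r.raise_for_status()
data = r.json()["query"]
print(data.get("redirects"))             # [{'from': 'JK Rowling', 'to': 'J. K. Rowling'}]
print(data["pages"][0]["extract"][:80])  # intro text of the target article
```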
Loading WP:Huggle
Hi, Good day. I am having trouble loading Huggle as no list of articles/edits is shown on it. Below are the system logs.
Mon Jan 20 13:09:53 2025 Failure of feed provider XMLRCS on enwiki, trying to find some alternative provider
Mon Jan 20 13:09:53 2025 ERROR: XmlRcs failed: redis is empty for 10 seconds
Kindly advise me on what I can do or point me to the right editor/talk page for help. (I didn't go to the Huggle talk page for this issue, as the talk page is not very active and, at times, no one replies to messages.) Thank you. Cassiopeia talk 02:24, 20 January 2025 (UTC)
- @Cassiopeia: You can temporarily change the feed provider. Just open the System menu, click on Change Provider, and set it to Wiki. – DreamRimmer (talk) 09:57, 20 January 2025 (UTC)
- DreamRimmer Thank you so much. It worked! Be safe and best. Cassiopeia talk 10:07, 20 January 2025 (UTC)
Issue - Loading WP:Huggle
Hi, Good day. I am having trouble loading Huggle as no list of articles/edits is shown. Below are the system logs.
Mon Jan 20 13:09:53 2025 Failure of feed provider XMLRCS on enwiki, trying to find some alternative provider
Mon Jan 20 13:09:53 2025 ERROR: XmlRcs failed: redis is empty for 10 seconds
Kindly advise me on what I can do or point me to the right editor/talk page for help. (I didn't go to the Huggle talk page for this issue, as the talk page is not very active and, at times, no one replies to messages.) Thank you. Cassiopeia talk 02:17, 20 January 2025 (UTC)
- Have you tried setting it to another provider like IRC or Wiki? Frost 02:31, 20 January 2025 (UTC)
- Frost Thank you for your reply. No, I have never had this issue before; this is the first time after using Huggle for many years. How do I set IRC or Wiki as the provider? (Note: I am not technical.) Thank you. Cassiopeia talk 02:41, 20 January 2025 (UTC)
- From the toolbar at the top, click System > Change provider. Frost 02:49, 20 January 2025 (UTC)
- Frost I changed to Wiki, and it worked! Thank you very much for helping me. Thank you! Be safe and best. Cassiopeia talk 03:01, 20 January 2025 (UTC)
- @Cassiopeia: This is not a matter for WT:VPT; and as you also created a near-identical thread here at WP:VPT, I have combined the two. --Redrose64 🌹 (talk) 18:32, 20 January 2025 (UTC)
Request for file name change
On January 17 I uploaded the image LMC SMC Bab al Mandab.png, which shows the present (2025) position of the Large and Small Magellanic Clouds over the southern horizon. I have now created an accompanying image, LMC SMC Bab al Mandab_900.png, which shows the same thing as seen in the year 900. If possible, please rename the first image LMC SMC Bab al Mandab_2025.png. If this is not the proper page for such a request, please advise. AstroOgier (talk) 09:14, 20 January 2025 (UTC)
- @AstroOgier: You uploaded this file to Wikimedia Commons so you will need to request a rename there. You can read Commons:File renaming for guidance on how to rename a file. To rename this file, you can simply add the {{Rename|File:LMC SMC Bab al Mandab_2025.png|1|reason=your reason here}} template to the file description on the file page. Please don't forget to add your reason in the reason parameter. – DreamRimmer (talk) 09:45, 20 January 2025 (UTC)
- Thanks a lot for the quick and helpful advice! AstroOgier (talk) 10:11, 20 January 2025 (UTC)
- The file has been renamed by Ziv on Wikimedia Commons. Regards, Aafi (talk) 10:35, 20 January 2025 (UTC)
WP:WikiProject C/C++ table.
When the links in the table showing the stubs, A class, B class, etc. are clicked on, it just goes to a blank-ish page. I suspect that it has something to do with the slash in the name, but I hope someone knows a solution. APenguinThatIsSilly("talk") 18:50, 20 January 2025 (UTC)
- I'm not sure anything can be done on our end. Wikipedia talk:Version 1.0 Editorial Team/Index might be worth a shot. — Qwerfjkltalk 19:05, 20 January 2025 (UTC)
Tech News: 2025-04
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Updates for editors
- Administrators can mass-delete multiple pages created by a user or IP address using Extension:Nuke. It previously only allowed deletion of pages created in the last 30 days. It can now delete pages from the last 90 days, provided it is targeting a specific user or IP address. [11]
- On wikis that use the Patrolled edits feature, when the rollback feature is used to revert an unpatrolled page revision, that revision will now be marked as "manually patrolled" instead of "autopatrolled", which is more accurate. Some editors that use filters on Recent Changes may need to update their filter settings. [12]
- View all 31 community-submitted tasks that were resolved last week. For example, the Visual Editor's "Insert link" feature did not always suggest existing pages properly when an editor started typing, which has now been fixed.
Updates for technical contributors
- The Structured Discussion extension (also known as Flow) is being progressively removed from the wikis. This extension is unmaintained and causes issues. It will be replaced by DiscussionTools, which is used on any regular talk page. The last group of wikis (Catalan Wikiquote, Wikimedia Finland, Goan Konkani Wikipedia, Kabyle Wikipedia, Portuguese Wikibooks, Wikimedia Sweden) will soon be contacted. If you have questions about this process, please ping Trizek (WMF) at your wiki. [13]
- The latest quarterly Technical Community Newsletter is now available. This edition includes: updates about services from the Data Platform Engineering teams, information about Codex from the Design System team, and more.
Tech news prepared by Tech News writers and posted by bot • Contribute • Translate • Get help • Give feedback • Subscribe or unsubscribe.
MediaWiki message delivery 01:34, 21 January 2025 (UTC)
Data not shown in the infobox
In Ardatov, Nizhny Novgorod Oblast, the infobox markup for some reason doesn't contain File:Герб Ардотова гфг.png, which is nevertheless shown in read mode. I thought it comes from Wikidata, but the entry there says the "end time" for that coat of arms is 1925 and that this one has been used since 2012, yet the infobox displays the outdated coat of arms. What's going on? Brandmeistertalk 09:37, 21 January 2025 (UTC)
- I fixed it by deprecating the old file, not sure if that is the normal way to fix this, but it works. Nobody (talk) 12:23, 21 January 2025 (UTC)
Contributions by CIDR range plus date range
I'm tracking an LTA account who frequently IP-hops within the same session, e.g. they might switch IP 6 or 7 times within 30 minutes. However, they appear to be limited to certain A or B classes, which in theory makes tracking possible. But in practice anything bigger than a C is hard. For example, class C Special:Contributions/5.90.7.* is doable but class B Special:Contributions/5.90.* is not, and certainly not class A 5.* .. (I have "JavaScript-enhanced contributions lookup 0.2" enabled; your results may look different from mine.)
Question: is there a tool to filter Class A or Class B based on time frame eg. show all edits within this Class A between 10:40 and 12:40 on Jan 20 on Enwiki. -- GreenC 15:03, 21 January 2025 (UTC)
- I've long thought that the CIDR gadget is pretty much deprecated since the functionality was built in to the contributions page (there are probably still a couple of niche uses, but not many). The contributions page allows you to filter by range and date... For this /16 range the link looks like [14] (there are no contributions on the 20th and it won't filter by exact time). Won't that suffice? -- zzuuzz (talk) 15:14, 21 January 2025 (UTC)
- Excellent, thanks! Now wondering why API:Usercontribs is not working: uciprange or ucuserprefix return valid JSON but empty. -- GreenC 16:42, 21 January 2025 (UTC)
- I haven't checked the API doc but it's probably a "direction" issue. This link is the same as yours except that it reverses the two dates. Johnuniq (talk) 22:20, 21 January 2025 (UTC)
- Thanks John. Start is end. End is start. The docs mention this but somewhat confusingly. The default is |ucdir=older, which requires ucstart to be higher than ucend. The original will work with |ucdir=newer enabled: [15] .. probably |ucdir=newer should be the default because counting backwards is.. backwards. -- GreenC 01:22, 22 January 2025 (UTC)
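A sketch of the call with the direction pitfall made explicit. The parameter names are the real list=usercontribs ones; the CIDR range and timestamps are illustrative placeholders, not from an actual case.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

resp = requests.get(API, params={
    "action": "query",
    "list": "usercontribs",
    "uciprange": "5.90.0.0/16",         # example /16 ("class B" style) range
    "ucstart": "2025-01-20T10:40:00Z",  # earlier timestamp first...
    "ucend": "2025-01-20T12:40:00Z",
    "ucdir": "newer",                   # ...because we read oldest-to-newest
    "uclimit": "max",
    "format": "json",
    "formatversion": 2,
})
resp.raise_for_status()
for edit in resp.json()["query"]["usercontribs"]:
    print(edit["timestamp"], edit["title"])
# With the default ucdir=older the same window must be given in reverse:
# ucstart is then the LATER of the two timestamps.
```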
Some table classes yet to be adapted for Dark Mode
An example can be found at Javanese script, where each cell features a white background with invisible transliteration:
ha ꦲ | na ꦤ | ca ꦕ | ra ꦫ | ka ꦏ | a ꦄ | ā ꦄ | i ꦆ | ī ꦇ | u ꦈ | ū ꦈꦴ
ᬳ | ᬦ | ᬘ | ᬭ | ᬓ | ᬅ | ᬆ | ᬇ | ᬈ | ᬉ | ᬊ
Does this imply that all classes labeled letters-* haven't been updated for Dark Mode yet? Additionally, I can't find where to modify the CSS code. Thank you for your attention. Σ>―(〃°ω°〃)♡→天邪弱(と話したい) 09:25, 22 January 2025 (UTC)
- Would be better to get rid of these colors altogether per MOS:COLOR. Gonnym (talk) 09:30, 22 January 2025 (UTC)
- I see the transliterations (e.g. ha and na) above the first row of characters in both light mode and dark mode. – Jonesey95 (talk) 15:25, 22 January 2025 (UTC)
- Here's how I see it: Σ>―(〃°ω°〃)♡→天邪弱(と話したい) 22:14, 22 January 2025 (UTC)
- Strange. I suggest trying a different browser, and trying dark mode while logged out. – Jonesey95 (talk) 00:01, 23 January 2025 (UTC)
- Here's how I see it: Σ>―(〃°ω°〃)♡→天邪弱(と話したい) 22:14, 22 January 2025 (UTC)
- I see the transliterations (e.g. ha and na) above the first row of characters in both light mode and dark mode. – Jonesey95 (talk) 15:25, 22 January 2025 (UTC)
Any insight into a new accessibility issue affecting screen readers with Vector 2022 and Chrome?
See Wikipedia talk:WikiProject Accessibility § Search Field More Difficult to Activate with a Screen-reader in Chrome. Any replies should probably go there. Graham87 (talk) 14:36, 22 January 2025 (UTC)
Getting List of All Class B Articles
Hi,
I would like to get a list of URLs to all class B articles. I know programming, but from what I have figured out so far, it seems quite tedious to go to Category:B-Class_articles, process all subcategories recursively, and change targets from talk pages to the actual articles. Is there any easier way to do it?
Thanks a lot
Yours Dirk Hünniger (talk) 15:34, 22 January 2025 (UTC)
- Query the database. This is most easily done with m:Research:Quarry if you don't already have a toolforge account, or asking at WP:Request a query if you don't speak SQL. (But don't bother with the latter; it'd probably be me that ends up answering you there anyway, and it's not worth moving this unless it gets long.) Do you really mean to get a list of pages in the Category:B-Class articles tree? There's about 85 categories named "B-Class ..." that aren't in it, and conversely some that are in it but likely don't categorize B-class articles, such as Category:Anime and manga articles with incomplete B-Class checklists. See quarry:query/90084 for a full list of each. —Cryptic 16:27, 22 January 2025 (UTC)
- Hi,
- thanks for your response. I think I have a Toolforge account, so I will try to query the database with Quarry. The reason why I want to work with such a list is my mediawiki2latex program. I want to run it on all class B articles to test whether a PDF is created in every case and fix the cases where it does not happen.
- Yours Dirk Hünniger (talk) 16:51, 22 January 2025 (UTC)
- This should be feasible in WP:PETSCAN from Category:B-Class articles. WhatamIdoing (talk) 20:24, 22 January 2025 (UTC)
- Hi,
- when I click "Launch PetScan", I get
- "Error
- This web service cannot be reached. Please contact a maintainer of this project"
- so it seems to be broken Dirk Hünniger (talk) 09:37, 23 January 2025 (UTC)
- Hi User:Cryptic,
- thanks a lot for your query 90084 example. I am not really used to SQL, but this was a nice opportunity for me to practice it. I came up with a modified version of your query that seems to do what I need: quarry:query/90125. I exported it to CSV and built a set of the lines, which resulted in 151299 elements, which is the right order of magnitude.
- For my purpose it is good enough; I just need a set of Wikipedia articles with not-too-short content that I can use as test data for my mediawiki2latex program.
- Thanks a lot for your help. Dirk Hünniger (talk) 13:35, 23 January 2025 (UTC)
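For comparison with the Quarry route used above, here is a hedged sketch of the recursive API walk the original post calls tedious. It is illustrative only, not a recommendation over the SQL query, and Cryptic's caveat applies: the category tree contains categories that don't actually hold B-Class articles.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def members(cat, cmtype):
    """Yield members of one category, following API continuation."""
    params = {"action": "query", "list": "categorymembers", "cmtitle": cat,
              "cmtype": cmtype, "cmlimit": "max",
              "format": "json", "formatversion": 2}
    while True:
        data = requests.get(API, params=params).json()
        yield from data["query"]["categorymembers"]
        if "continue" not in data:
            return
        params.update(data["continue"])

def b_class_article_urls(cat="Category:B-Class articles", seen=None):
    seen = set() if seen is None else seen
    if cat in seen:                      # guard against category cycles
        return
    seen.add(cat)
    for page in members(cat, "page"):    # the ratings sit on talk pages
        title = page["title"]
        if title.startswith("Talk:"):
            yield ("https://en.wikipedia.org/wiki/"
                   + title.removeprefix("Talk:").replace(" ", "_"))
    for sub in members(cat, "subcat"):
        yield from b_class_article_urls(sub["title"], seen)
```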
Retrieving multiple property values in one call of Module:wd
I am trying to retrieve multiple property values from Wikidata (using Module:wd) in one call, but it ignores the other properties I give, so I only ever get one property value. I must not be specifying things in the correct order, but none of the module's examples help me. For example, given a mountain name, I want to retrieve the elevation, prominence, mountain range, coordinates and the first-ascent significant event. I can get all the values if I code one call per property, but how do I code it so I can get all the properties in one call? So given this:
P2044 = elevation P2660 = prominence P4552 = mountain range P625 = coordinates P793 = significant event; Q1194369 = first ascent; P585 = point in time
how do I get all the property values in one call?
{{#invoke:wd|property|P2044|P2660|P4552|P625|property|qualifier|P793|Q1194369|P585|page=Mount Robson}}
RedWolf (talk) 19:10, 22 January 2025 (UTC)
- I am fairly certain this cannot be done in that module. Izno (talk) 21:32, 22 January 2025 (UTC)
- The documentation for the "property" command says "Returns the requested property – or list of properties". Yet, I see no example or syntax of how to specify this list of properties. RedWolf (talk) 22:35, 22 January 2025 (UTC)
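Since the module apparently can't return several properties in one invocation, one workaround outside wikitext is to fetch the item's claims once and pick out the properties client-side. A hedged sketch using the real wbgetentities API, resolving the item through its enwiki sitelink; the property IDs are the ones listed in the question above.

```python
import requests

API = "https://www.wikidata.org/w/api.php"
WANTED = ["P2044", "P2660", "P4552", "P625", "P793"]

r = requests.get(API, params={
    "action": "wbgetentities",
    "sites": "enwiki",
    "titles": "Mount Robson",   # resolve the Wikidata item via its sitelink
    "props": "claims",
    "format": "json",
})
r.raise_for_status()
entity = next(iter(r.json()["entities"].values()))
for pid in WANTED:
    for claim in entity.get("claims", {}).get(pid, []):
        # raw datavalue; qualifiers such as P585 hang off claim["qualifiers"]
        print(pid, claim["mainsnak"].get("datavalue", {}).get("value"))
```

Within a wiki page itself, though, separate {{#invoke:wd|property|...}} calls per property appear to be the supported pattern.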
Incomprehensible error message
Hi, near the bottom of Anglo-German Fellowship is the giant red error message "Lua error in Module:Navbox at line 604: attempt to concatenate field 'argHash' (a nil value)." Does this mean anything to anybody? And, more importantly, can anyone make it go away? Thank you, DuncanHill (talk) 19:40, 22 January 2025 (UTC)
- "Lua error in Module:Navbox at line 535: attempt to get length of local 'arg' (a number value). Lua error in Module:Navbox at line 535: attempt to get length of local 'arg' (a number value). Lua error in Module:Navbox at line 535: attempt to get length of local 'arg' (a number value)." Is it WP:THURSDAY? Hawkeye7 (discuss) 20:08, 22 January 2025 (UTC)
- Whatever has gone wrong is not just related to that - see Soko 522 for example - and it looks like it's broken every navbox at the bottom of articles. It may be related to Template_talk:Navbox#generates_errors_from_Module:Military_navigation.
- I've reverted a recent change to Module:Navbox that caused the problem. —Bkell (talk) 20:14, 22 January 2025 (UTC)
Wayback Machine not archiving new links?
Obviously this is beyond the scope of Wikipedia, but it seems to be impossible at the moment to save snapshots of new links. It simply produces an error message such as "Fail with status: 498" or "We're sorry — something's gone wrong. Our team has been notified." This is a nuisance, as archiving links via the Wayback Machine is important. Are other people having this problem? ♦IanMacM♦ (talk to me) 20:03, 22 January 2025 (UTC)
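For troubleshooting from outside the browser, the separate public availability endpoint can at least confirm whether a snapshot already exists; saving new pages goes through the Save Page Now service that is producing the errors described above. A small sketch against the availability API (the URL checked is just an example):

```python
import requests

def latest_snapshot(url):
    """Return the closest archived snapshot URL, or None if none exists."""
    r = requests.get("https://archive.org/wayback/available",
                     params={"url": url})
    r.raise_for_status()
    snap = r.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

print(latest_snapshot("https://example.com/"))  # example URL, not a real case
```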
Parent categories
An editor has requested a change to the way we display categories in the Category: namespace. The existing system, which looks approximately like this:
does not seem intuitive. @PrimeHunter figured out how to change the existing category footer to something that makes the meaning more obvious:
and to have this only appear in the Category: namespace (i.e., will not change/screw up any articles).
Could we please get this change implemented here? It would only require copying the contents of testwiki:MediaWiki:Pagecategories to MediaWiki:Pagecategories.
WhatamIdoing (talk) 20:18, 22 January 2025 (UTC)
- This sort of sounds like it would be an overall general improvement - that is not something special for only the English Wikipedia, and for only users with their interface language in en. If so, this should be requested upstream. — xaosflux Talk 01:56, 23 January 2025 (UTC)
- I think it'd be better to do this locally, where it's been requested. If it seems to be a net improvement, we could always suggest it for widespread use (which would require re-translation of the string for all 300+ languages – not something that can happen quickly). WhatamIdoing (talk) 03:44, 23 January 2025 (UTC)
Nonsense redlink
Twice in the past three days, AnomieBOT has created the entirely unpopulated maintenance category Category:Articles lacking reliable references from 2025-01-19 from January 2025, which has in turn generated a nonsense redlink for Category:Monthly clean-up category (Articles lacking reliable references from 2025-01-19) counter — but since YYYY-MM-DD is not part of our naming format for either "Articles lacking reliable references" or "monthly clean-up category" maintenance categories, neither of these are categories that should ever exist at those names at all. But when I deleted the referencing category as both nonsense and empty earlier today in order to blow up the monthly clean-up redlink, the bot came along and recreated it again a few hours later even though it's still both nonsense and empty.
Could somebody look into this and figure out how to make it stop? I haven't deleted the category again this time, though I have wrapped the template in {{suppress categories}} since the redlinked parent still needed to go away regardless. Thanks. Bearcat (talk) 01:29, 23 January 2025 (UTC)
- The somewhere to ask about this begins here: User talk:AnomieBOT. — xaosflux Talk 01:53, 23 January 2025 (UTC)
- AnomieBOT created that because Category:Articles lacking reliable references from 2025-01-19 existed. Garbage in, garbage out. * Pppery * it has begun... 02:43, 23 January 2025 (UTC)
- More specifically, because Category:Articles lacking reliable references from 2025-01-19 existed and was in Category:Wikipedia maintenance categories sorted by month. As the latter says,
A bot, currently AnomieBOT and formerly Cerabot~enwiki, will monitor the categories in this category and create the necessary monthly subcategories.
Anomie⚔ 12:54, 23 January 2025 (UTC)
- @Bearcat and Xaosflux: It's not the fault of AnomieBOT. The problem stems from these two edits by En rouge (talk · contribs), who added more than fifty instances of {{Irrelevant citation}}, each of which used |date=2025-01-19 and not |date=January 2025 as advised by the template doc. They also manually created Category:Articles lacking reliable references from 2025-01-19, which has since been deleted. --Redrose64 🌹 (talk) 10:08, 23 January 2025 (UTC)
- Some input validation could help there. — xaosflux Talk 10:22, 23 January 2025 (UTC)
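A minimal sketch of the kind of input validation suggested above: accept only "Month YYYY" values in |date= and flag anything else. Purely illustrative; any real check would live in the template or a Lua module, not in Python.

```python
import re

MONTHS = ("January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December")
VALID = re.compile(r"^(%s) \d{4}$" % "|".join(MONTHS))

def valid_cleanup_date(value):
    """True only for dates like 'January 2025'."""
    return bool(VALID.match(value))

assert valid_cleanup_date("January 2025")
assert not valid_cleanup_date("2025-01-19")  # the malformed value at issue
```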
asterisks
- List item
- Second level list item
- Third level list item if there's a line separating this from the previous list item
^^ Why is this the default behavior for multiple asterisks? Does anybody ever actually want every asterisk to render as an additional dot? Why isn't *** by default just a twice indented bulletpoint, even if not immediately preceded by a once indented bulletpoint? This silliness is why many people wind up starting lines with e.g. :*:::: or :::::*, which if I recall correctly isn't ideal for accessibility. At minimum, given the ubiquity of unordered lists in wiki discussion pages, shouldn't indents be the default behavior (with perhaps some template for anyone with a weird multi-bullet use case)? — Rhododendrites talk \\ 03:46, 23 January 2025 (UTC)
- Because any ‘lists’ with newlines between them are not actually one united list. See MOS:LISTBREAK. Sadly MediaWiki does not make this obvious enough, if anything, but both are bad for accessibility. Editors should be advised not to insert blank lines between list items, or at least to insert them as blank lines with indentation (*** <blank>), as that would generate a hidden empty list item that would unite the markup. stjn 04:59, 23 January 2025 (UTC)
PetScan down or broken
Hi. For PetScan, when launched, it shows Error: This web service cannot be reached. Please contact a maintainer of this project. I needed to run it an hour ago and it still fails. Will check again later today. Regards, JoeNMLC (talk) 14:32, 23 January 2025 (UTC)
- It's been down for a couple of days. It doesn't appear to have any current documentation as to active maintainers. — xaosflux Talk 15:28, 23 January 2025 (UTC)
- It only accepts bug reports on github, where there is bug 187 open now. Github isn't really good for operational bugs, just software ones... This is another project with a lack of active onwiki volunteers unfortunately. — xaosflux Talk 15:31, 23 January 2025 (UTC)
- Only maintainer linked is User:Magnus Manske, who is still active on wikidata. Similar to the geohack outage (Wikipedia:Village_pump_(technical)#What_happened_to_Geohack?) above -- needs an operator to possibly work with the cloud team. — xaosflux Talk 15:35, 23 January 2025 (UTC)
- @Xaosflux - Thank you for digging into this issue. PetScan really is a great tool for category filtering of articles. I regularly use it to find Unreferenced + Orphan article combinations. For now, "Plan B" is to search through just the old Unref. articles. Yes, it would be great to find an expert to: 1. identify what is broken; 2. fix it. There are some bots that occasionally fail and need to be restarted. Cheers, JoeNMLC (talk) 17:41, 23 January 2025 (UTC)
- LDAP shows that Magnus Manske is the only maintainer. Since this is a very important tool, I think the Wikimedia Cloud team can help restart the web service if Magnus is not available. – DreamRimmer (talk) 18:04, 23 January 2025 (UTC)
- @Xaosflux - Thank you for digging into this issue. PetScan really is a great tool for category filtering of articles. I regularly use to find Unreferenced + Orphan articles combination. For now, "Plan B" is to search thru just the old Unref. articles. Yes, it would be great to find an expert to: 1. Identify what is broken; 2. Fix it. There are some Bots that occasionally fail off & need to be restarted. Cheers, JoeNMLC (talk) 17:41, 23 January 2025 (UTC)
- Only maintainer linked is User:Magnus Manske, who is still active on wikidata. Similar to the geohack outage (Wikipedia:Village_pump_(technical)#What_happened_to_Geohack?) above -- needs an operator to possibly work with the cloud team. — xaosflux Talk 15:35, 23 January 2025 (UTC)
- It only accepts bug reports on github, where there is bug 187 open now. Github isn't really good for operational bugs, just software ones... This is another project with a lack of active onwiki volunteers unfortunately. — xaosflux Talk 15:31, 23 January 2025 (UTC)
- Why does Wikipedia have no tools of its own like PetScan? Eurohunter (talk) 21:13, 23 January 2025 (UTC)
Proposals
Transclusion of peer reviews to article talk pages
Hello,
First time posting here.
I would like to propose that peer reviews be automatically transcluded to talk pages in the same way as GAN reviews. This would make them more visible to more editors and better preserve their contents in the article/talk history. They often take a considerable amount of time and effort to complete, and the little note near the top of the talk page is very easy to overlook.
This also might (but only might!) raise awareness of the project and lead to more editors making use of this volunteer resource.
I posted this suggestion on the project talk page yesterday, but I have since realized it has less than 30 followers and gets an average of 0 views per day.
Thanks for your consideration, Patrick (talk) 23:07, 2 January 2025 (UTC)
- I don't see any downsides here. voorts (talk/contributions) 01:55, 4 January 2025 (UTC)
- Support; I agree with Voorts. Noting for transparency that I was neutrally notified of this discussion by Patrick Welsh. —TechnoSquirrel69 (sigh) 21:04, 6 January 2025 (UTC)
- This is a great idea, it's weird that it isn't done already. Toadspike [Talk] 21:13, 6 January 2025 (UTC)
- So far this proposal has only support, both here and at the Peer review talk. Absent objections, is there a place we can request assistance with implementation? I have no idea how to do this. Thanks! --Patrick (talk) 17:23, 13 January 2025 (UTC)
- It might be useful to have a bot transclude the reviews automatically like ChristieBot does for GAN reviews. AnomieBOT already does some maintenance tasks for PR so, Anomie, would this task be a doable addition to its responsibilities? Apart from that, I don't think any other changes need to be made except to selectively hide or display elements on the review pages with <noinclude>...</noinclude> or <includeonly>...</includeonly> tags. —TechnoSquirrel69 (sigh) 17:28, 13 January 2025 (UTC)
- Since ChristieBot already does the exact same thing for GAN reviews, it might be easier for Mike Christie to do the same for peer reviews than for me to write AnomieBOT code to do the same thing. If he doesn't want to, then I'll take a look. Anomie⚔ 22:41, 13 January 2025 (UTC)
- I don't have any objection in principle, but I don't think it's anything I could get to soon -- I think it would be months at least. I have a list of things I'd like to do with ChristieBot that I'm already not getting to. Mike Christie (talk - contribs - library) 22:54, 13 January 2025 (UTC)
- I took a look and posted some questions at Wikipedia talk:Peer review. Anomie⚔ 16:14, 18 January 2025 (UTC)
- Support, I've submitted a couple of articles for peer review, and when I first did it I wondered why it wasn't done on a sub-page of the article's talk page the same way GAN is. TarnishedPathtalk 02:24, 16 January 2025 (UTC)
- Support -- seems like a good idea to me. Talk pages are for showing how people have discussed the article, including peer review. Mrfoogles (talk) 20:51, 23 January 2025 (UTC)
Support. This would be very, very helpful for drafts, so discussions can be held on the talk pages to explain a problem with a draft in more detail, rather than only showing the generic reason boxes. Hinothi1 (talk) 12:56, 18 January 2025 (UTC)
Good Article visibility
I think it would be a good idea to workshop a better way to show off our Good, A-class and Featured articles (or even B-class too), especially in the mobile version, where there is nothing. At present, GA icons appear in the desktop version, but that is it. I think we could and should be doing more. Wikipedia is an expansive project where page quality varies considerably, but most casual readers who do not venture onto talk pages will likely not even be aware of the granular class-based grading system. The only visible and meaningful distinction for many readers, especially mobile users, will be between those articles with maintenance and cleanup tags and those without. So we prominently and visibly flag our worst content, but do little to distinguish between our best content and more middling content. This seems like a missed opportunity, and poor publicity for the project. Many readers come to the project and can go away with bad impressions about Wikipedia if they encounter bad or biased content, or if they read something bad about the project, but we are doing less than we could to flag the good. If a reader visits nine C-class articles and one Good Article, they may simply go away without even noticing the better content, and conclude that Wikipedia is low quality and rudimentary. By better highlighting our articles that have reached a certain standard, we would actually better raise awareness about A) the work that still needs to be done, and B) the end results of a collaborative editing process. It could even potentially encourage readers who become aware of this distinction to become editors themselves and work on pages that do not carry this distinction when they see them. In this age of AI-augmented misinformation and short attention spans, better flagging our best content could yield benefits, with little downside. It could also reinject life and vitality into the Good Article process by giving the status more tangible front-end visibility and impact, rather than largely back-end functionality. Maybe this has been suggested before. Maybe I'm barking up the wrong tree. But thoughts? Iskandar323 (talk) 15:09, 11 January 2025 (UTC)
- With the big caveat that I'm very new to the GA system in general and also do not know how much technical labor this would require, this seems like a straightforwardly helpful suggestion. The green + sign on mobile (and/or some additional element) would be a genuinely positive addition to the experience for users - I think a textual element might be better so the average reader understands what the + sign means, but as it stands you're absolutely right, quality is basically impossible to ascertain on mobile for non-experts, even for articles with GA status that would have a status icon on desktop. 19h00s (talk) 16:43, 11 January 2025 (UTC)
- While GA articles have been approved by at least one reviewer, there is no system of quality control for B-class articles, and no system to prevent an editor from rating an article they favor as B-class in order to promote or advertise it. A-class articles are rare, as Military History is the only project I know of that uses that rating. Donald Albury 17:16, 11 January 2025 (UTC)
- I totally agree we should be doing more. There are userscripts that change links to different colours based on quality (the one I have set up shows gold links as featured, green as GA, etc.).
- If you aren't logged in and on mobile, you'd have no idea an article has had a review. Lee Vilenski (talk • contribs) 20:15, 11 January 2025 (UTC)
- A discussion was held on this about two years ago and there was consensus to do something. See Wikipedia talk:Good Article proposal drive 2023#Proposal 21: Make GA status more prominent in mainspace and Wikipedia:Good Article proposal drive 2023/Feedback#Proposal 21: Make GA status more prominent in mainspace. Thebiguglyalien (talk) 04:20, 12 January 2025 (UTC)
- @Thebiguglyalien: Is that feedback discussion alive, dead, or just lingering in half-life? It's not obviously archived, but has the whole page been mothballed? So basically, there's community consensus to do something, but the implementation is now the sticking point. Iskandar323 (talk) 04:57, 12 January 2025 (UTC)
- Basically, most of the progress made is listed on that feedback page and the project has moved on from it. There were a few options, like the visibility one, where it was agreed upon and then didn't really go anywhere. So there are some ideas there, but we'd basically need to start fresh in terms of implementation. Thebiguglyalien (talk) 05:16, 12 January 2025 (UTC)
- You're barking up exactly the right tree, Iskandar323. Regarding showing the icons on mobile, that's a technical issue, which is tracked at phab:T75299. I highlighted it to MMiller (WMF) when I last saw him at WCNA, but there's ultimately only so much we can push it. Regarding desktop, we also know the solution there: Move the GA/FA topicons directly next to the article name, as was proposed in 2021. The barrier there is more achieving consensus — my reading of that discussion is that, while it came close, the determining factor of why it didn't ultimately pass is that some portion of editors believed (wrongly, in my view) that most readers notice/know what the GA/FA symbols mean. The best counterargument to that would be some basic user research, and while ideally that would come from the WMF, anyone could try it themselves by showing a bunch of non-Wikipedian friends GAs/FAs and asking if they notice the symbols and know what they mean. Once we have that, the next step would be running another RfC that'd hopefully have a better chance of passing. Sdkb talk 06:50, 12 January 2025 (UTC)
- It's great that I've got the right tree, since I think that's a village pump first for me. It seems that the proposer of that original 2021 discussion already did some basic research. Intuitively, it also seems just obvious that an icon tucked away in the corner, often alongside the padlocks indicating permission restrictions, is not a high-visibility location. Another good piece of final feedback in the GA project discussion mentioned earlier up this thread by TBUA is that the tooltip could also be improved to say something more substantial and explanatory than simply "this is a good article". On the subject of the mobile version and the level of priority we should be assigning to it, we already know that per WP:MOBILE, 65% of users access the platform via mobile, which, assuming a roughly even spread of editors and non-editors, implies that 2/3 of contemporary casual visitors to the site likely have no idea about the page rating system. Iskandar323 (talk) 07:31, 12 January 2025 (UTC)
my reading of that discussion is that, while it came close, the determining factor of why it didn't ultimately pass is that some portion of editors believed (wrongly, in my view) that most readers notice/know what the GA/FA symbols mean
This is not my reading of the discussion. To me it looks as though a major concern among opposers is that making GA/FA status more prominent for readers is likely to mislead them, either by making them think that GAs/FAs are uniformly high-quality even for those which were assessed many years ago when our standards were lower and have neither been maintained nor reassessed, or by making them more doubtful about the quality of articles which have never gone through the GA/FA process but are nonetheless high quality. By my count at least ten of the 15 oppose !voters cite this reason either explicitly or "per X" where X is someone else who had made this point. Caeciliusinhorto (talk) 16:18, 12 January 2025 (UTC)
- I've also encountered a fair few instances of older, lower-standard GA articles. But I also think greater visibility (effectively also transparency) could also benefit in that area as well. If GA status is more prominent, it provides greater cause to review and reassess older GAs for possible quality issues. Also, most of the worst GAs I have seen have come from around 2007, so it seems like one sensible solution would be for GA status to come with a sunset clause whereby a GA review is automatically required after a decade. Maybe I'm getting a little sidetracked there, but this sort of concern is also exactly what I mean by greater visibility potentially reinjecting life and vitality into the process. Iskandar323 (talk) 17:15, 12 January 2025 (UTC)
- I think you're right about that being the most major source of opposition, but "most major" is different from "determining" — I don't think those !voters will be open to persuasion unless the quality of GAs/FAs improves (which, to be fair, it definitely has somewhat since 2021). But the "they already know" !voters might be more persuadable swing !voters, and it would have passed with their support. Sdkb talk 19:02, 12 January 2025 (UTC)
- @Sdkb: So, is there any way to poke the mobile issue a little harder with a stick? And do you think it is worth re-running the 2021 proposal or a version of it? What format should such a discussion take? Is there a formal template for making a proposal more RFC-like? Iskandar323 (talk) 12:59, 20 January 2025 (UTC)
- I think that's a fair reading of the discussion. But I suppose the best way to be more transparent is to tell a user that the article has been rated GA after a peer review, while noting that that doesn't mean the article is perfect... which is what GAs (and FAs) also say. Lee Vilenski (talk • contribs) 19:54, 12 January 2025 (UTC)
- My radical proposal would be to get rid of the whole WP:GA system (which always came across to me as a watered-down version of WP:FA). Some1 (talk) 16:31, 12 January 2025 (UTC)
- Why? TompaDompa (talk) 16:38, 12 January 2025 (UTC)
- It is a watered-down process from an FA, but it is also the first rung on the ladder for some form of peer-review and a basic indicator of quality. Not every subject has the quality sources, let alone a volunteer dedicated enough, to take it straight from B-class to Featured Article. Iskandar323 (talk) 17:17, 12 January 2025 (UTC)
- That's literally the point of it. Lee Vilenski (talk • contribs) 19:52, 12 January 2025 (UTC)
Replace abbreviated forms of Template:Use mdy dates with full name
I propose that most[a] transclusions of redirects to {{Use mdy dates}} and {{Use dmy dates}} be replaced by bots with the full template name.
Part of the purpose of {{Use mdy dates}} is to indicate to editors what they should do. Thus, readability is important. I propose all of these redirects be replaced with their target, which is:
- More easily understood even the first time you see it.
- Standardized, and thus easier to quickly scan and read.
The specific existing redirects that I suggest replacing are:
- {{Mdy}} → {{Use mdy dates}}
- {{MDY}} → {{Use mdy dates}}
- {{Usemdy}} → {{Use mdy dates}}
- {{Usemdydates}} → {{Use mdy dates}}
- {{Use MDY}} → {{Use mdy dates}}
- {{Use mdy}} → {{Use mdy dates}}
- {{Dmy}} → {{Use dmy dates}}
- {{DMY}} → {{Use dmy dates}}
- {{Usedmy}} → {{Use dmy dates}}
- {{Use dmy}} → {{Use dmy dates}}
- {{Use DMY}} → {{Use dmy dates}}
- {{Usedmydates}} → {{Use dmy dates}}
- ^ I would probably leave alone the redirects that differ only in case, namely {{Use MDY dates}} and {{Use DMY dates}}, which are sufficiently readable for my concerns.
Daask (talk) 20:30, 18 January 2025 (UTC)
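For a sense of the mechanics, the replacement itself is a routine bot or AWB task. A rough Pywikibot sketch (the page title is hypothetical; a production run would follow the AWB template-redirects rules, and the dmy redirects would be handled the same way):

import re
import pywikibot

# Rough sketch only: bypass the listed mdy redirects, preserving any parameters.
REDIRECTS = ["Mdy", "MDY", "Usemdy", "Usemdydates", "Use MDY", "Use mdy"]
pattern = re.compile(r"\{\{\s*(?:%s)\s*([|}])" % "|".join(map(re.escape, REDIRECTS)))

site = pywikibot.Site("en", "wikipedia")
page = pywikibot.Page(site, "Example article")  # hypothetical page
page.text = pattern.sub(r"{{Use mdy dates\1", page.text)
page.save(summary="Replacing redirect with [[Template:Use mdy dates]]", minor=True)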
- In principle I like this idea (noting my suggestion to bring it here). My only concern would be about watchlist spam, given that, while this may not technically be a cosmetic edit, it's only a hair above one. But there's only a few thousand transclusions of these redirects, so if the bot goes at a rate of, say, one per minute, it'd be done in a few days. -- Tamzin[cetacean needed] (they|xe|🤷) 21:09, 18 January 2025 (UTC)
- It looks like most or all of these are already listed at Wikipedia:AutoWikiBrowser/Template redirects, so whenever anyone edits an article with AWB, they'll already be replaced. No strong view about doing so preemptively.
- However, if our goal is to ensure that these templates are actually meaningfully used, then we have some bigger fish to fry. First of all, even the written-out form isn't sufficiently readable/noticeable — many newcomers may not know what it means, and many experienced editors may miss it if they don't happen to look at the top of the article. Ideally, we would either offer to correct the date format if anyone enters the incorrect one via mw:Edit check (task) or we'd include it in an editnotice of some sort.
- Second of all, roughly 2/3 of all articles still don't have a date tag, so we need to figure out better strategies for tagging en masse. There are surely some definable groups of articles that are best suited to a particular format (e.g. all U.S. municipality articles I'd think would want to use MDY) that we could agree on and then bulk tag. Sdkb talk 21:50, 18 January 2025 (UTC)
Ideally, we would either offer to correct the date format if anyone enters the incorrect one via mw:Edit check (task) or we'd include it in an editnotice of some sort.
This could also feasibly be done with a regex edit filter, which is better than Edit check in that specific case as the latter doesn't work with the source editor as far as I know. Chaotic Enby (talk · contribs) 07:01, 20 January 2025 (UTC)
- However it's done technically, it will need human supervision as some instances shouldn't be changed, e.g. in quotes and the titles of sources. Thryduulf (talk) 07:08, 20 January 2025 (UTC)
- A filter could only flag an issue, not fix it. And any time a user gets a warning screen when they click "publish", there is a significant chance they will abandon their edit out of confusion or frustration, so we should not be doing that for a relatively minor issue like date format. -- Tamzin[cetacean needed] (they|xe|🤷) 07:11, 20 January 2025 (UTC)
- I do believe that just flagging it would be better than giving an explicit warning (that might scare the user) or automatically fixing it (which, like Thryduulf mentioned, might not be optimal for direct quotes and the likes). Chaotic Enby (talk · contribs) 07:17, 20 January 2025 (UTC)
- Concur with Tamzin — the main point of Edit Check is to introduce an option to alert an editor of something without requiring a post-edit warning screen, which is all edit filters can do. The ideal form would be a combo of a flag and an automatic fix — for instance, dates not detected to be within quotes would be highlighted, clicking on it would say "this article uses the MDY date format; would you like to switch to that? learn more convert". Sdkb talk 16:38, 20 January 2025 (UTC)
- That could be great indeed! Chaotic Enby (talk · contribs) 22:14, 20 January 2025 (UTC)
- Courtesy pinging @PPelberg (WMF) of the Edit Check team, btw, just in case you have anything to add. Sdkb talk 05:11, 21 January 2025 (UTC)
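To make the flagging idea concrete, here is a rough Python sketch of the kind of patterns involved (illustrative assumptions only; an actual edit filter uses its own regex dialect rather than Python):

import re

# Illustrative only: detect the two common date formats in submitted text.
MONTHS = ("January|February|March|April|May|June|July|August|"
          "September|October|November|December")
DMY = re.compile(r"\b\d{1,2} (?:%s) \d{4}\b" % MONTHS)   # e.g. "20 January 2025"
MDY = re.compile(r"\b(?:%s) \d{1,2}, \d{4}\b" % MONTHS)  # e.g. "January 20, 2025"

def has_wrong_format(text, preferred):
    """Return True if text contains a date in the non-preferred format."""
    wrong = DMY if preferred == "mdy" else MDY
    return bool(wrong.search(text))

Any real implementation would also need to skip dates inside direct quotes and source titles, per the caveats above.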
- It's definitely a cosmetic edit, in that it only changes the wikitext without changing anything readers see. But consensus can decide that any particular cosmetic edit should be done by bots. As proposed, there are currently 2089 transclusions of these redirects, 1983 in mainspace. Anomie⚔ 14:21, 19 January 2025 (UTC)
- Agree with this. Also regarding
many newcomers may not know what it means
(in reference to the full template names): as a reminder, we do have to opt in to display maintenance categories, many of which are far less scrutable to the uninitiated. Categories can be clicked on for explanation. As to the proposal itself, I don't really see the value in bypassing a bunch of redirects. Redirects exist to be used, and there's nothing wrong with using them. Blowing up people's watchlists for this type of change seems inconsiderate. Articles without a prescribed date format are a non-issue. There's no need to implement any standard format at every article, and I augur that an attempt to do so would create far more problems than it would solve. Folly Mox (talk) 16:15, 21 January 2025 (UTC)
- It is a problem (albeit a small one) if an article has some dates MDY and others DMY or YMD, per MOS:DATERET, since it introduces inconsistency. Tagging the article with its preferred format helps retain it, so it's something we should ultimately strive for (particularly at GAs/FAs, but also in applicable categories as I suggested above). Sdkb talk 17:14, 21 January 2025 (UTC)
- Knowing how much each is transcluded, relative to the most-used cousins, would be a valuable point to include in this discussion.
- The more valuable change of sorts with respect to these templates is that they're clearly metadata. It would be great if we could move them over to mediawikiwiki:MCR, though IDK how much effort it would take to get that done. (And perhaps along with the settings for citations and English variety.) Izno (talk) 22:32, 23 January 2025 (UTC)
Forbid Moving an Article During AFD
There is currently a contentious Deletion Review, at Wikipedia:Deletion_review/Log/2025_January_19#Raegan Revord, about an article about a child actress, Raegan Revord. Some editors think that she is not biographically notable, and some editors think that she is biographically notable. There is nothing unusual about such a disagreement; that is why we have AFD to resolve the issue. What happened is that there was a draft version of her biography and a mainspace version of her biography, and the two were swapped while the AFD was in progress. Then User:Liz reviewed the AFD to attempt to close it, and concluded that it could not be closed properly, because the statements were about two different versions of the article. So Liz performed a procedural close, and said that another editor could initiate a new AFD, so that everyone could be reviewing the same article.
This post is not about that particular controversy, but about a simple change that could have avoided the controversy. The instructions on the banner template for MFD are more complete than those on the banner template for AFD. The AFD template says:
Feel free to improve the article, but do not remove this notice before the discussion is closed.
The MFD template says:
You are welcome to edit this page, but please do not blank, merge, or move it, or remove this notice, while the discussion is in progress.
Why don't we change the banner template on an article that has been nominated for deletion to say not to blank, merge, or move it until the discussion is closed? If the article should be blanked, redirected, merged, or moved, those are valid closes that should be discussed and resolved by the closer. As we have seen, even a move made in good faith, as this one clearly was, can confuse the closer, and it did here. I have also seen articles that were nominated for deletion moved in bad faith to interfere with the deletion discussion.
I made the suggestion maybe two or three years ago to add these instructions to the AFD banner, and was advised that it wasn't necessary. I didn't understand the reason then, but accepted that I was in the minority at the time. I think that this incident illustrates how this simple change would prevent such situations. Robert McClenon (talk) 06:06, 20 January 2025 (UTC)
- Seems like a reasonable proposal. Something similar occurred at Wikipedia:Articles for deletion/2025 TikTok refugee crisis. AfD was initiated, then the article was renamed, an admin had to move it back, and now it has been renamed again while the AfD is still ongoing. Some1 (talk) 06:32, 20 January 2025 (UTC)
- Thank you for the information, User:Some1. Both my example and yours are good-faith, but taking unilateral bold action while a community process is running confuses the community. I have also, more than once, seen bad-faith moves of articles during AFD. An editor who is probably a COI editor creates an article that is poorly sourced or promotional. A reviewer draftifies it. The originator moves it back to mainspace. Another reviewer nominates it for deletion, which is the proper next step after contested draftification. The originator then moves it back to draft space so that the AFD will be stopped. Sometimes an admin reverses the move, but sometimes this stops the discussion and leaves the page in draft space. I think that any renaming should be considered within the AFD. Robert McClenon (talk) 06:52, 20 January 2025 (UTC)
- "Renaming" and "draftifying" may be technically the same operation, but they are quite different things. I don't mind outlawing draftify during AFD, as it pre-empts the outcome, but fixing a nontrivial typo or removing a BLP-noncompliant nickname from a page title should be done immediately by anyone who notices the problem, independent of whether the page is at AFD or not. —Kusma (talk) 09:15, 20 January 2025 (UTC)
- Oppose. Improving an article during AfD is encouraged and we must resist anything that would make it harder. Following the proposal would have meant a cut-and-paste move/merge would have had to happen in order to use the existing draft, making the situation more difficult to understand than a clear page swap. —Kusma (talk) 06:49, 20 January 2025 (UTC)
- Support, the AfD deals with notability, and moving can impact the scope and thus the notability. In that specific case, during the AfD, sources from both could've been considered, as AfD is about the sources that exist rather than the current content of the article. Not sure how a merge would've made it
more difficult to understand
than what actually happened. Chaotic Enby (talk · contribs) 06:55, 20 January 2025 (UTC)
- It would have hidden the actual revision history for no benefit whatsoever. —Kusma (talk) 07:25, 20 January 2025 (UTC)
- When merging, the other article's history should be linked in the edit summary for attribution anyway. The benefit of avoiding the massive confusion for the closer (and the later deletion review) far outweighs the need for a few more clicks to find the history. Chaotic Enby (talk · contribs) 07:41, 20 January 2025 (UTC)
- If people are discussing version A before 13 January and version B after 13 January, this may result in confusion for the closer. But the confusion arises from people discussing two different versions of the article. I am all for clearly stating in the AFD when anything like moving or merging has happened, but outlawing moves is not solving the unsolvable problem that articles can change during an AFD. —Kusma (talk) 09:11, 20 January 2025 (UTC)
- Inclined to support as a draft swap seems rare, and seems somewhat at odds with the stated principle that AfD is about notability, which would not differ between a mainspace article and a draft article. In situations when there is a draft, the AfD could come to consensus to use the draft, or to keep on the topic and the draft can be moved in post-AfD. That said, regarding blanking, I have seen articles at least partially blanked due to BLP or copyright concerns. Those seem correct actions to take even during an AfD, and I suspect other instances of blanking are rare enough, and likely to be reverted if disruptive. CMD (talk) 09:31, 20 January 2025 (UTC)
- Weak oppose forbidding the kind of move made here. We encourage improving an article during the AFD, and separately it is often said during AFDs that an article should be TNT'ed and started over. Replacing the article with a new version, whether through moving a draft or simply rewriting it in place, is a valid (if hamhanded) attempt to do both of those things to save an article from deletion. Support forbidding moving the article to a new title with no content changes, as that could be disruptive (you'd have to move the AFD for one, and what if it gets reverted?). Pinguinn 🐧 10:57, 20 January 2025 (UTC)
- You do not have to move the AFD (and you should not; it is unnecessary and causes extra work). All you need to do is note on the AFD what the new page title is. Of course you should almost never suppress the redirect while moving a page that is at AFD. —Kusma (talk) 14:06, 20 January 2025 (UTC)
- @Robert McClenon Look at the timeline again: in the Revord case it did not happen while the AFD was in progress. The swapping happened while the AfD was closed as keep. The AfD was then reopened. Gråbergs Gråa Sång (talk) 10:58, 20 January 2025 (UTC)
- I can see the benefit of forbidding moving between namespaces, but this proposal would also catch simple renames. I've seen plenty of deletion discussions for articles with simple typos or spacing errors in their titles, where the nominating user has not corrected things before nominating. We should not forbid moving them to the correct title. Phil Bridger (talk) 13:49, 20 January 2025 (UTC)
- Simple renames (to fix typos, etc.) should be okay, but moving an article, for example, from Biden crisis (AfD on July 19) to Withdrawal of Joe Biden from the 2024 United States presidential election (moved July 21) (which also changed the scope of the article) while the AfD is still in progress should not IMO. Some1 (talk) 14:58, 20 January 2025 (UTC)
- I agree, which is why this should be left to human judgement and consensus rather than forbidding things. Phil Bridger (talk) 18:33, 20 January 2025 (UTC)
- I don't see the benefit of retaining poorly worded article titles for seven days or more. I'd support a bar on moving between namespaces during an AfD, but not on all renaming.
- This could actually cause an issue if someone were to move an article to a title that another article needs to be moved to (in case of an obvious PRIMARY TOPIC/Dab change). Lee Vilenski (talk • contribs) 14:57, 20 January 2025 (UTC)
- Oppose There are some rare cases where this is a problem, but in many or most cases I have seen it is helpful. In the given example, let's say the move was disallowed and the article was deleted. Now wait a few weeks and make the article again with the new content. People will complain no matter what. You've got to be reasonable. If there was a major effort to redo the article it should be discussed during the AfD. -- GreenC 18:27, 20 January 2025 (UTC)
- Based on the comments above I think the best we can get will be a policy that requires any change of title be clearly and explicitly noted in an AfD, supplemented by a guideline that discourages controversial and potentially controversial changes in title while discussion is ongoing. Any change that would alter the scope of the article or which has been rejected by discussion participants (or rejected previously) is potentially controversial. On the other hand, a suggested change that has significant support and no significant objection among discussion participants is usually going to be uncontroversial. Thryduulf (talk) 19:02, 20 January 2025 (UTC)
- I think I agree. That seems to reflect current practice. Phil Bridger (talk) 19:54, 20 January 2025 (UTC)
- How about we limit such moves to admins? If there is an overriding good reason to move a page as part of editing and improvement of the encyclopedia, it should be movable. BD2412 T 22:20, 20 January 2025 (UTC)
- Not sure that restricting editorial/content choices to the discretion of admins is a good thing. While it will definitely help in case of overriding good reason, it also means an individual admin can enforce a potentially controversial choice of page title for their own reasons, and can't be reverted by another editor. And, of course, there's the wheel-warring aspect to that. An alternative could be to limit such moves to closing the discussion with a consensus to move – that way, we still limit spurious moves even more, but the editorial choices are still made by the community. Chaotic Enby (talk · contribs) 22:29, 20 January 2025 (UTC)
- Would the described swap be possible without special tools? I know that the title of this thread is "move", but that was more (and much harder or impossible for a regular editor to undo) than a move. North8000 (talk) 22:34, 20 January 2025 (UTC)
- A page mover can do this kind of swap too, but editors without either permission cannot. Chaotic Enby (talk · contribs) 22:38, 20 January 2025 (UTC)
- Comment. I would be chary of preventing this completely. There are quite a few cases where it rapidly emerges that the article is clearly at the wrong title (e.g. a transliteration error or a woman who exclusively publishes under another form of her name) so that the results of searches for sources are completely different between the two titles; moving the article even mid-AfD might be a good response in such cases. Espresso Addict (talk) 05:33, 21 January 2025 (UTC)
- I note that the text of the AfD notice used to read "Feel free to improve the article, but this notice must not be removed until the discussion is closed, and the article must not be blanked. For more information, particularly on merging or moving the article during the discussion, read the guide to deletion." until it was shortened in March 2021 by Kusma and then further shortened by Joe Roe in October 2023. Espresso Addict (talk) 05:47, 21 January 2025 (UTC)
- If you can find a concise replacement for the text that actually gives pertinent information, please do edit the notice. —Kusma (talk) 08:31, 21 January 2025 (UTC)
- I think sometimes clarity is more important than concision. Espresso Addict (talk) 09:44, 21 January 2025 (UTC)
- If the text is restored, the guide to deletion should feature the promised information more prominently. —Kusma (talk) 10:02, 21 January 2025 (UTC)
- Given that the current basis for the recommendation against moving is the relatively weak wording in WP:AFDEQ (
While there is no prohibition against moving an article while an AfD or deletion review discussion is in progress, editors considering doing so should realize such a move can confuse the discussion greatly
), highlighting this specifically in the template seems out of proportion. Perhaps we could revisit that if the consensus here is to strengthen the guidance, which would also allow us to be more concise (i.e. "do not move this page"). – Joe (talk) 18:37, 21 January 2025 (UTC)
- It might be beneficial to tighten up that wording; something like
An article should not generally be moved while an AfD or deletion review discussion is in progress, as it can confuse the discussion greatly. However, articles may exceptionally be moved if a clear consensus emerges during the discussion to change the title.
Espresso Addict (talk) 00:09, 22 January 2025 (UTC)
- Oppose. Moving an article to a new title can be confusing during an AfD, but otherwise good edits are good edits. In particular, rewrites or replacements by drafts to address concerns raised in the discussion shouldn't wait, because they can make clear that a reasonable article can be (because it has been) created. Eluchil404 (talk) 06:09, 21 January 2025 (UTC)
- Weak support I think this should be formally discouraged, but I don't think we should ban it entirely. Certainly some moves during an AfD may be tendentious. SportingFlyer T·C 06:11, 21 January 2025 (UTC)
- Strong support This has been a problem for years. The solution is simple: there is no requirement to make such moves while an AfD is in progress, and there is no downside to this proposal. Andy Dingley (talk) 19:30, 21 January 2025 (UTC)
- Oppose as a blanket rule, and strongly oppose this wording. Even if it is not intended as a blanket rule, and even if there are "obvious exceptions" as detailed above, wording like this will cause people to interpret it as one even when those "obvious exceptions" apply. "Well damn looks like the New York Times just reported that the shooting of Dudey McDuderson was a hoax, but sorry, we can't fix the title, template says so." (Example chosen since it's a plausible WP:NOTNEWS AfD.) Gnomingstuff (talk) 19:46, 21 January 2025 (UTC)
- If it's that clear and obvious that something needs to be fixed, then obtain consensus for it at the AfD (and if you can't, then it's not "clear and obvious"), speedy resolve it (close and re-open as needed, or even some sort of partial consensus for one aspect) and then do it. But we still can't do renames when we don't yet have agreement as to need and new target. Andy Dingley (talk) 20:12, 21 January 2025 (UTC)
- What I am saying is that wording like "please do not blank, merge, or move it, or remove this notice, while the discussion is in progress" will result in people arguing "the template says don't move it so don't move it, no exceptions allowed." Gnomingstuff (talk) 00:08, 22 January 2025 (UTC)
- The problem is less moving things during an AfD than moving them unilaterally, without consensus. We can surely demonstrate that during an AfD, or quickly, in order to resolve and close it, if it's that clear. Andy Dingley (talk) 12:03, 22 January 2025 (UTC)
- Oppose (except as to unilateral draftification). Renaming should be left to editors' judgment. This includes their judgment of whether the new name is likely to be controversial, or whether any past or present discussion is actually related to the new name and shows opposition to it. In other words, ordinary principles of WP:BOLDMOVE apply. There should not be a general prohibition or consensus-in-advance requirement, nor should editors revert moves solely "procedurally" because of AFD. (Editors can of course revert if they disagree on the merits of the name.) Reader-facing improvement efforts should not be held back by an overriding concern for internal administrators' confusion. That's getting priorities backward. Adumbrativus (talk) 01:21, 22 January 2025 (UTC)
- Hard cases make bad law. I don't know if that's always true, actually, but this discussion does strike me as an overreaction to an extremely unusual set of facts. --Trovatore (talk) 04:44, 22 January 2025 (UTC)
Proposal to prohibit the creation of new "T:" pseudo-namespace redirects without prior consensus
Around this time last year in 2024, the phabricator ticket T363757 created a brand new alias for the template namespace. From that point on, it has been possible to reach any template by prefixing a search with the letters "TM:". If I wanted to reach the centralized discussion template, I could always type TM:CENT and it works like a charm, for all templates on the site. Back in the day though, typing eight characters to reach a page became somewhat exhausting, especially for titles that might need to be navigated to frequently. As a helpful tool, a pseudo-namespace called "T:" was deployed to quickly let people reach pages in the template namespace. (Never mind the fact that "T" apparently ALSO stands for the talk namespace (T:MP) and the template talk namespace (T:DYKT).) Regardless, in practice, pseudo-namespaces are great tools for navigation, but they have a flaw: the software does not really support them. All pseudo-namespace redirects occupy mainspace, which means that any PNRs which exist should be maintained with care and diligence, to avoid interfering with regular readers searching for articles.
Anyway, among the four PNRs currently in use today, "T:" has been, by and large, the most controversial. While CAT:, P:, and H: all have some usage in different circumstances, according to WP:Shortcut#Pseudo-namespaces, "T:" titles are for "limited and specific uses only". Generally speaking, the only reason to justify the creation of a T: title is for a template that sees regular use and maintenance by members of the project. If it's not a template one would need to return to on a regular basis, there's no need to occupy mainspace with a "T:" title, further adding to the obfuscation of other genuine articles that also start with "T:", such as T:kort, T: The New York Times Style Magazine, and many others according to Special:PrefixIndex/T:.
In regards to controversy, T: titles have been the subject of persistent RfDs since 2009, with variable results. Several RfCs have been held relating to pseudo-namespace redirects, including one from 2014 that suggests that "new T: titles should be generally discouraged", in Wikipedia:Village pump (policy)/Archive 112#RFC: On the controversy of the pseudo-namespace shortcuts. Yet, despite the multiple RfCs and RfDs, new "T:" titles continue to crop up regardless. Whether they come from people who misinterpret or misunderstand pseudo-namespaces, or from anyone who might not've noticed WP:Shortcut saying "T:" titles are for "limited uses only", these are frequently monitored and the number always grows.
In any case, with the advent of the [[TM:]] alias, there is little to no need for new "T:" titles. It is not important enough to shrink a two-letter prefix into a one-letter one, so there's really no reason to have NEW titles that start with "T:". In 2022, the "WikiProject:" pseudo-namespace was added to the disallow-list for new article titles. I don't think that "T:" as a starter should be added to such a list, but I don't think there should be any new ones of this type now that [[TM:]] is a safer alternative that works for 100% of all templates, and doesn't affect mainspace searches.
I propose that on WP:Shortcut, "T:" is moved to a new classification indicating that new titles should not be created without prior consensus, and/or that "new titles do not enjoy broad community support", i.e. the category that the WikiProject prefix is listed at currently. (For that matter, I think that the WikiProject prefix should be removed from Shortcuts because no pages contain that prefix anywhere on Wikipedia; at least not any from the last 3 years). I also propose that "T:" be removed from the shortlist on WP:PNR, because I feel that contributes to the creation of new T: titles, and we should not encourage the creation of T: titles when TM: now exists. Utopes (talk / cont) 22:17, 20 January 2025 (UTC)
- Question: Is Special:PrefixIndex/T: all there is? I support at least a moratorium (consensus needed) on creating new T: titles, and also re-evaluating existing T: titles in light of the new TM: alias. -- GreenC 14:45, 21 January 2025 (UTC)
- Yes, that's all there is. —Cryptic 23:22, 22 January 2025 (UTC)
- I would also support a moratorium outside of the DYK space. I note other main page uses are currently up for discussion at Wikipedia:Redirects for discussion/Log/2025 January 16#T:Pic of the day and etc., which would leave just DYK. Ideally if T: is deprecated, the DYK instructions would shift to TM: as well. I'll create a note at WT:DYK pointing to this proposal. CMD (talk) 15:57, 21 January 2025 (UTC)
- Support I've always found "T:" titles confusing. In particular, I never understood why sometimes it worked (i.e. T:DYK) and sometimes it didn't (T:Cite journal). At some point I gave up trying to figure it out and just resigned myself to typing out "template" all the time (and occasionally typing "templare" by accident). I wasn't even aware that TM: existed. It's absurd that there should be namespaces, aliases, pseudo-namespaces, all of which have slightly different behaviors (not to mention Help:Transwiki). You should be able to understand what something is by looking at it, i.e. if it has a ":" after it, it's a namespace. So yeah, I wholeheartedly support getting rid of T. Getting rid of the existing T links may be painful, but it's pain we will endure once and be done with. That's better than continuing to have something that's inconsistent and confusing forever.
- I ran into this recently when writing some code that handles matching template names. It turns out that if I give you a link foo:bar, you can't know if the "foo" part is case sensitive or not if you don't know what namespaces are configured on the particular wiki it came from. That's just stupid. RoySmith (talk) 16:25, 21 January 2025 (UTC)
- PS, as a follow-up to
You should be able to understand what something is by looking at it
, I suggest people watch Richard Feynman's comments on this subject. When I'm seeking wonder and amazement at discovering a deeper understanding of the world around me, I can turn to quantum mechanics. I'd prefer wiki-syntax to be a bit less wonderous. RoySmith (talk) 16:49, 21 January 2025 (UTC)
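To make the case-sensitivity point concrete, here is a small Python sketch of prefix resolution (the alias table is a simplified assumption, not MediaWiki's actual configuration):

# Namespace names and aliases are matched case-insensitively by the software,
# but "T:" is not a namespace at all: T:DYK is an ordinary mainspace page that
# only works because someone created that redirect by hand.
ALIASES = {"template": "Template", "tm": "Template"}  # simplified, illustrative

def resolve(title):
    prefix, sep, rest = title.partition(":")
    key = prefix.strip().lower()
    if sep and key in ALIASES:
        return ALIASES[key] + ":" + rest.strip()
    return title  # left as a plain page title

print(resolve("tm:CENT"))  # -> "Template:CENT", works for every template
print(resolve("T:DYK"))    # -> "T:DYK", works only where the redirect page exists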
- Support – if we already have TM: as a perfectly functional
~~pseudonamespace~~ alias that automatically redirects to Template:, we don't need to encourage the use of T: which only works for hardcoded redirects and adds another level of confusion. After the moratorium, we can leave DYK some additional time to shift to TM: if needed. (edited 15:14, 22 January 2025 (UTC): mixed up alias and pseudonamespace again) Chaotic Enby (talk · contribs) 17:10, 21 January 2025 (UTC)
- Oppose. "TM:" is not an intuitive redirect for "template", and longstanding usage - which I use frequently - is for "T:", e.g. T:ITN, T:DYK etc. If need be, we should tell the software to use "T:" universally for templates rather than "TM:". Using it for "Talk:" doesn't really make sense either, it's very rare to need a shortcut to a talk page, whereas templates are frequent targets. We should also add "TT:" for template talk. Editors drive how we work on the project, not suits at the Wikimedia Foundation. — Amakuru (talk) 19:49, 21 January 2025 (UTC)
- Despite your claim, the decision wasn't made by
suits at the Wikimedia Foundation
, but by this very community here at VPP (link), where "TM:" was chosen over "T:". Chaotic Enby (talk · contribs) 20:15, 21 January 2025 (UTC)
- Even the code patch was written by an enwiki volunteer and the deployment was done by another volunteer developer lol. The claim of
suits at the Wikimedia Foundation
has no basis here. Literally nobody from the WMF was involved in this. Sohom (talk) 06:15, 23 January 2025 (UTC)
- What one person finds intuitive isn't always necessarily what another person finds intuitive. But the link Chaotic Enby posted above shows there's a consensus that TM: is a suitable alias, so I don't think we should reinvigorate that debate. The question here isn't whether we like TM, it's whether we should get rid of T now that we have TM. Cremastra (talk) 20:56, 21 January 2025 (UTC)
- Support. I agree we should not make new T redirects and stick with one abbreviation, TM, which behaves consistently and predictably. Adumbrativus (talk) 06:02, 22 January 2025 (UTC)
- Support. As Utopes points out, the advantage of writing "t" rather than "tm" is one character; the cons far outweigh it. Gonnym (talk) 09:22, 22 January 2025 (UTC)
- Note, listed this on TM:CENT. Utopes (talk / cont) 22:04, 23 January 2025 (UTC)
Replace links to twitter / "X"
Would it be a good idea to build a scraper and a bot that scrapes tweets and then replaces each link to a tweet with a link to a site populated with scraped tweets? That way we don't send traffic to Twitter or whatever it's called these days. Polygnotus (talk) 00:38, 22 January 2025 (UTC)
- Wouldn't scraping be a copyright violation? —Jéské Couriano v^_^v threads critiques 00:48, 22 January 2025 (UTC)
- @Jéské Couriano: I do not know (I am not a lawyer). I do know that Google cache, the Wayback Machine, and various other services would also infringe copyright, if that is copyright infringement. If the Wayback Machine can archive tweets, we could ask it to index every tweet and then remove every direct link to Twitter. Maybe meta:InternetArchiveBot can do this and we only have to supply a list of tweets and then replace the links? Polygnotus (talk) 00:52, 22 January 2025 (UTC)
- Google Cache is defunct and to avoid copyright issues the Wayback Machine removes archives on request. It also no longer works with Twitter. PARAKANYAA (talk) 22:51, 23 January 2025 (UTC)
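For what it's worth, the mechanical part of the proposal is simple; a rough sketch, assuming archived snapshots already exist and using the Wayback Machine's standard URL scheme (the page title is hypothetical):

import re
import pywikibot

# Rough sketch only: point twitter.com/x.com links at Wayback Machine copies.
# Assumes a snapshot exists; as noted below, archiving of Twitter is unreliable.
site = pywikibot.Site("en", "wikipedia")
page = pywikibot.Page(site, "Example article")  # hypothetical page

pattern = re.compile(r"(https?://(?:www\.)?(?:twitter|x)\.com/\S+)")
page.text = pattern.sub(r"https://web.archive.org/web/\1", page.text)
page.save(summary="Pointing tweet links at archived copies (sketch)")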
- No. Wikipedia is not the place to try to attempt to voice your concerns with Elon Musk. Unless or until the site becomes actually harmful itself, more than others (i.e. scraping user data or similar), then there is no need to replace those links. Nobody is advocating for replacing links to Reuters, which requires you to sign up for an account and accept email ads/etc. to read articles for free. -bɜ:ʳkənhɪmez | me | talk to me! 01:00, 22 January 2025 (UTC)
until the site becomes actually harmful itself, more than others
It is already, right? WP:RGW is about WP:OR and WP:RS, so it is unclear why you linked to it and it appears to be off-topic.
Reuters, which requires you to sign up for an account and accept email ads/etc. to read articles for free.
It does? I have never seen that (but I am using ublock and pihole and various related tools). Polygnotus (talk) 01:05, 22 January 2025 (UTC)
- Why should Wikipedia be concerned with what websites get traffic? If it's about the political views or actions of its owner or its userbase, then that's absolutely against the spirit of "righting great wrongs" in a literal sense, even if it's not what's specifically covered in WP:RGW. Thebiguglyalien (talk) 05:00, 23 January 2025 (UTC)
- ~~Agree that it's better not to send traffic to Twitter, but I don't know if Twitter is exactly getting a lot of traffic through Wikipedia, and in any case linking to the actual tweet (the actual source) is important.~~ Other users suggested archives. I oppose replacing links with links to a scraper, but I wouldn't oppose replacing links with links to the Internet Archive, for example -- something reputable. Mrfoogles (talk) 21:22, 22 January 2025 (UTC)
- The disagreement of some editors with Twitter and Elon Musk do not constitute a reason for getting rid of it.--Wehwalt (talk) 22:33, 22 January 2025 (UTC)
- Was this idea prompted by the banning of Twitter/X links by subreddits on reddit? https://www.theverge.com/2025/1/22/24349467/reddit-subreddit-x-twitter-link-bans-elon-musk-nazi-salute I'm not opposed to the idea of doing this on Wikipedia (replacing the links with an archived version of the tweets), but it does come off as somewhat like virtue signalling, considering that links to Twitter/X aren't commonly found on Wikipedia. Some1 (talk) 00:04, 23 January 2025 (UTC)
- Personally I'm not sure it's a good idea, but I don't think it's just "virtue signaling". Obviously the effect will not be enormous, but it will help slightly (all the subreddits together, even though they're small, have some effect) and it's good to have sort of statements of principle like this, in my opinion. As long as the goal is to actually not tolerate Nazism, rather than appear to not tolerate Nazism, I don't think it's virtue signaling. Mrfoogles (talk) 20:48, 23 January 2025 (UTC)
- @Polygnotus what is the specific reason you are suggesting this is something that should be implemented? I'm a terrible mind reader, and wouldn't want to make presumptions of your motives for you. TiggerJay (talk) 01:21, 23 January 2025 (UTC)
- There is clear and obvious value in ensuring all {{cite twitter}} or {{cite web}} URLs have archive URLs, what with Musk's previously shortly-held opinion about the value of externally accessible URLs. Other than that, I see little reason to "switch" things. Izno (talk) 22:23, 23 January 2025 (UTC)
- Most archiving services don’t work with Twitter anymore. Archive.org doesn’t and archive.is does it poorly. The only one that works consistently is GhostArchive which has been removed before over copyright concerns. For similar reasons, existing Twitter mirrors like Nitter are either defunct or broken. This would amount to removing all Twitter links then. PARAKANYAA (talk) 22:35, 23 January 2025 (UTC)
- This however wouldn't be terrible. Simply removing all links to Twitter would be valuable for multiple content reasons in the direction of WP:WEIGHT, WP:OR, and so on. Izno (talk) 22:38, 23 January 2025 (UTC)
- There are already tight guidelines on where and how tweets can be used in articles, and I don't think their use is any more prevalent than that of any other primary-source website. While the use of such primary sources needs to be closely monitored in any article, there are places where their inclusion is appropriate and helpful, but it certainly is on the rare side of things. I also would proffer that if the main reason to prevent having links directly to Twitter is some sort of virtue signaling, we're going to get into a world of problems, as the values and moralities of people on Wiki differ greatly. Should we then drop all links to Russian websites to support Ukraine? What about when it comes down to PIA issues or other areas of great contention? These would be murky waters best avoided altogether. TiggerJay (talk) 22:47, 23 January 2025 (UTC)
- Unless you want to remove WP:ABOUTSELF broadly I don’t see the reason to apply it to Twitter instead of every other social media website there is. PARAKANYAA (talk) 22:48, 23 January 2025 (UTC)
- This however wouldn't be terrible. Simply removing all links to Twitter would be valuable for multiple content reasons in the direction of WP:WEIGHT, WP:OR, and so on. Izno (talk) 22:38, 23 January 2025 (UTC)
Idea lab
The prominence of parent categories on category pages
The format of category pages should be adjusted so it's easier to spot the parent categories.
Concrete example:
I happened to come across the page: Category:Water technology
I can see the Subcategories. Great. I can see the Pages in the category. Great. No parent categories. That's a shame --- discovering the parent categories can be as helpful as discovering the subcategories.
Actually, the parent categories are there (well, I think they are --- I'm not sure because they're not explicitly labelled as such). But I don't notice them because they're in a smaller font in the blue box near the bottom of the page: Categories: Water | Chemical processes | Technology by type
I think the formatting (the typesetting) of the parent categories on category pages should be adjusted to give the parent categories the same prominence as the subcategories. This could be done by changing "Categories: Water | Chemical processes | Technology by type" to "Parent categories: Water | Chemical processes | Technology by type" and increasing the font size of "Parent categories", or, perhaps better, by having the parent categories typeset in exactly the same way as the subcategories. D.Wardle (talk) 22:21, 22 December 2024 (UTC)
- Parent categories are displayed on Category: pages in exactly the same way that categories are displayed in articles. WhatamIdoing (talk) 04:26, 26 December 2024 (UTC)
- The purpose of an article page is to give a clear exposition of the subject. Having a comprehensive presentation of the categories on such a page would be clutter --- a concise link to the categories is sufficient and appropriate.
- The purpose of a category page is to give a comprehensive account of the categories. A comprehensive presentation of the categories would not clutter the subject (it is the subject).
- Therefore, I do not expect the parent categories to be presented the same on article and category pages --- if they are presented the same, that only reinforces my opinion that some change is necessary. D.Wardle (talk) 20:15, 27 December 2024 (UTC)
- I think the purpose of a category page is to help you find the articles that are in that category (i.e., not to help you see the category tree itself). WhatamIdoing (talk) 21:40, 27 December 2024 (UTC)
- Is there any research on how people actually use categories? —Kusma (talk) 21:48, 27 December 2024 (UTC)
- I don't think so, though I asked a WMF staffer to pull numbers for me once, which proved that IPs (i.e., readers) used categories more than I expected. I had wondered whether they were really only of interest to editors. (I didn't get comparable numbers for the mainspace, and I don't remember what the numbers were, but my guess is that logged-in editors were disproportionately represented among the Category: page viewers – just not as overwhelmingly as I had originally expected.) WhatamIdoing (talk) 22:43, 27 December 2024 (UTC)
- I'm fine with parent categories being displayed the same way on articles and categories but I think it's a problem that parent categories aren't displayed at all in mobile on category pages, unless you are registered and have enabled "Advanced mode" in mobile settings. Mobile users without category links probably rarely find their way to a category page but if they do then they should be able to go both up and down the category tree. PrimeHunter (talk) 15:39, 28 December 2024 (UTC)
- I don't think so, though I asked a WMF staffer to pull numbers for me once, which proved that IPs (i.e., readers) used categories more than I expected. I had wondered whether they were really only of interest to editors. (I didn't get comparable numbers for the mainspace, and I don't remember what the numbers were, but my guess is that logged-in editors were disproportionately represented among the Category: page viewers – just not as overwhelmingly as I had originally expected.) WhatamIdoing (talk) 22:43, 27 December 2024 (UTC)
- Am I missing something? Is there a way of seeing the category tree (other than the category pages)?
- If I start at:
- https://en.wikipedia.org/wiki/Wikipedia:Contents#Category_system
- ... following the links soon leads to category pages (and nothing else?). D.Wardle (talk) 20:20, 28 December 2024 (UTC)
- I'd start with Special:CategoryTree (example). WhatamIdoing (talk) 20:49, 28 December 2024 (UTC)
- You can click the small triangles to see deeper subcategories without leaving the page. This also works on normal category pages like Category:People. That category also uses (via a template)
<categorytree>...</categorytree>
at Help:Category#Displaying category trees and page counts to make the "Category tree" box at top. PrimeHunter (talk) 20:59, 28 December 2024 (UTC)
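(For illustration: assuming the standard Extension:CategoryTree syntax, such a box can be produced with markup along the lines of <categorytree mode=categories depth=1>Water technology</categorytree>, which embeds an expandable tree of the named category's subcategories wherever it is placed.)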
- Now there are three words I would like to see added to every category page. As well as `parent' prefixing `categories' in the blue box (which prompted this discussion), I would also like `Category tree' somewhere on the page with a link to the relevant part of the tree (for example, on:
- https://en.wikipedia.org/wiki/Category:Water_technology
- ... `Category tree' would be a link to:
- https://en.wikipedia.org/wiki/Special:CategoryTree?target=Category%3AWater+technology&mode=categories&namespaces=
- ).
- I can only reiterate that I think I'm typical of the vast majority of Wikipedia users. My path to Wikipedia was article pages thrown up by Google searches. I read the articles and, curious to know how the subject fitted into wider human knowledge, clicked on the category links. This led to the category pages which promised so much but frustrated me because I couldn't find the parent categories and certainly had no idea there was a category tree tool. This went on for years. Had the three additional words been there, I would have automatically learned about both the parent categories and the category tree tool, greatly benefitting both my learning and improving my contributions as an occasional editor. Three extra words seems a very small price to pay for conferring such a benefit on potentially a huge fraction of users. D.Wardle (talk) 03:43, 30 December 2024 (UTC)
- I think it would be relatively easy to add a link to Special:CategoryTree to the "Tools" menu. I don't see an easy way to do the other things. WhatamIdoing (talk) 07:33, 30 December 2024 (UTC)
- It's possible to display "Parent categories" on category pages and keep "Categories" in other namespaces. The text is made with MediaWiki:Pagecategories in both cases but I have tested at testwiki:MediaWiki:Pagecategories that the message allows a namespace check. Compare for example the display on testwiki:Category:4x4 type square and testwiki:Template:4x4 type square/update. PrimeHunter (talk) 18:01, 30 December 2024 (UTC)
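(For illustration, a namespace check in that message might look something like {{#ifeq:{{NAMESPACE}}|Category|Parent categories|Categories}} - a sketch only; the exact wikitext PrimeHunter tested on testwiki may differ.)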
- How much evidence of community consensus do you need to make that change here? WhatamIdoing (talk) 19:16, 30 December 2024 (UTC)
- I've looked at what you've done (and hopefully understood). MediaWiki:Pagecategories puts some of the words in the blue box at the bottom of all category pages. But what code makes the category pages (what code calls MediaWiki:Pagecategories)? I think the changes I'm suggesting should be made to that calling code... D.Wardle (talk) 23:35, 9 January 2025 (UTC)
- Is the answer to your question "MediaWiki"?
- Every page has certain elements. You can see which ones are used on any given page with the mw:qqx trick, e.g., https://en.wikipedia.org/wiki/Category:Water_technology?uselang=qqx WhatamIdoing (talk) 01:58, 10 January 2025 (UTC)
- I looked at the MediaWiki Help and Manual. How the formatting of namespaces is controlled might be discussed somewhere, but, at the very least, it's not easy to find (I didn't find it). I've requested this be addressed (https://www.mediawiki.org/wiki/Help_talk:Formatting#The_formatting_of_namespaces) but, thus far, no one has volunteered.
- Returning to the issue here, my inference is that `normal' Wikipedia editors would not be able to implement the changes I'm suggesting (adding the word `parent' and a link to the category tree) assuming the changes were agreed upon. I therefore also conclude that the changes I'm suggesting do need to go to Village_pump_(proposals). Do you agree? D.Wardle (talk) 23:29, 17 January 2025 (UTC)
- @PrimeHunter already worked out how to do this change. Go to testwiki:Category:4x4 type square and look for the words "Parent categories:" at the bottom of the page. If that's what you want, then the technical end is already sorted. WhatamIdoing (talk) 00:12, 18 January 2025 (UTC)
- You are right that PrimeHunter's solution works but (not wishing to criticize PrimeHunter in any way --- I'm grateful for their input) I don't think it's the right way to do it. To explain: When an editor adds a section to an article, the edit box is initially blank. There is no code to specify e.g. the font, the size of the font, the colour of the font, the indentation from the margin, etc. These things must be specified somewhere but they are hidden from the editor. And that's a good feature (it enables the editor to do their work without having to wade through a whole heap of code specifying default formatting which isn't relevant to them). PrimeHunter's solution goes against that principle --- it's adding formatting code to the editor's box. You might argue that it's only a very small piece of code, but, if changes are routinely made in this way, over time the small pieces of code will accumulate and the editor's boxes will become a mess. D.Wardle (talk) 21:00, 18 January 2025 (UTC)
- Look at the page history. PrimeHunter has never edited that page. It does not add any code to the editor's box. WhatamIdoing (talk) 21:12, 18 January 2025 (UTC)
- Would a simpler cat page be easier for you to look at? Try testwiki:Category:Audio files or testwiki:Category:Command keys instead. All of the cats on that whole wiki are showing "Parent categories" at the bottom of the page. WhatamIdoing (talk) 21:18, 18 January 2025 (UTC)
- Agreed. And (I think you already understand this) that is because PrimeHunter's edit of testwiki:MediaWiki:Pagecategories affects all pages on https://test.wikipedia.org.
- Comparing:
- https://en.wikipedia.org/w/index.php?title=Category:Wikipedia&action=edit
- and:
- https://test.wikipedia.org/w/index.php?title=Category:Wikipedia&action=edit
- ...adds weight to two of my previous comments:
- The test.wikipedia page has this text:
- Categories: Root category
- ...at the bottom of the edit window (my apologies --- it's not actually in the edit window) --- this is not helpful for novice editors --- they could be confused and/or deterred by it --- it should be hidden from them.
- The en.wikipedia page has nothing analogous to the just mentioned text, suggesting that PrimeHunter's solution might not actually work in en.wikipedia.
- D.Wardle (talk) 23:59, 20 January 2025 (UTC)
If editors can't see the list of categories that the page is in, how will they add or remove the categories?
On the testwiki page, the example has only one category, so this is what you see in wikitext:
[[Category:Root category]]
The analogous text in the en.wikipedia page you link is this:
[[Category:Creative Commons-licensed websites]] [[Category:Online encyclopedias| ]] [[Category:Virtual communities]] [[Category:Wikimedia projects]] [[Category:Wikipedia categories named after encyclopedias]] [[Category:Wikipedia categories named after websites]]
I thought your concern was about what readers see. You said "But I don't notice them [i.e., the parent categories] because they're in a smaller font in the blue box near the bottom of the page: Categories: Water | Chemical processes | Technology by type".
Now you're talking about a completely different thing, which is what you see when you're trying to change those parent categories. WhatamIdoing (talk) 02:10, 21 January 2025 (UTC)
- The "pre" formatting doesn't appear to play well with
:::
formatting. WhatamIdoing (talk) 02:12, 21 January 2025 (UTC) - Sorry about that.
- To begin again, I think it would be a good idea if all category pages had:
- a heading `Parent categories' similar to `Subcategories' (the current `Categories' in the blue box is ambiguous and too inconspicuous).
- a small link near the bottom of the page, the link having text `Category tree' and target the category's entry in the category tree.
- I don't have the technical competence to make either of these changes. Also, given that they would affect every category page (which is a large part of the encyclopedia), before making the changes it would be prudent to check others agree (or, at least, that there is not strong opposition).
- So how to make progress? (It would be great if a Wikipedian more experienced than myself would pick it up and run with it.) D.Wardle (talk) 23:46, 21 January 2025 (UTC)
- We currently have something like this:
- Categories: Category name 1, Category name 2, etc.
- I think we can get this changed to:
- Parent categories: Category name 1, Category name 2, etc.
- I do not think we can realistically get this changed to:
- Parent categories
- Category name 1, Category name 2, etc.
- Do you want to have the middle option, or is the third option the only thing that will work for you? WhatamIdoing (talk) 00:06, 22 January 2025 (UTC)
- The middle option is definitely a step in the right direction so if you could implement it that would be great.
- With regard to the third option (and also the link to the category tree), maybe the desirability of these could be put forward for discussion at a meeting of senior Wikipedians (and if they are deemed desirable but difficult to implement maybe that difficulty of implementation could also be discussed --- if the MediaWiki software does not allow desirable things to be done easily, it must have scope for improvement...)
- Thank you for your assistance. D.Wardle (talk) 19:55, 22 January 2025 (UTC)
- We don't have meetings of senior Wikipedians. The meetings happen right here, and everyone is welcome to participate.
- I'll go ask the tech-savvy volunteers at Wikipedia:Village pump (technical) if one of them would make the change to the middle setting. WhatamIdoing (talk) 20:11, 22 January 2025 (UTC)
Break
- Perhaps I don't understand what PrimeHunter has done. It's hard for me to follow: If I explore the https://en.wikipedia.org domain, I find that one of PrimeHunter's references (https://en.wikipedia.org/wiki/MediaWiki:Pagecategories) has been deleted, while, if I explore the https://test.wikipedia.org domain, I find that I cannot see what's in the edit box of one of the pages (https://test.wikipedia.org/wiki/Category:4x4_type_square) because `only autoconfirmed users can edit it'.
- Given that https://en.wikipedia.org/wiki/MediaWiki:Pagecategories has been deleted, maybe PrimeHunter's solution only works in the testsite? D.Wardle (talk) 23:14, 20 January 2025 (UTC)
- PrimeHunter's solution has only been created in the testsite. Nobody has ever posted it here.
- You do not need to be autoconfirmed to see what's in the edit box. You just need to scroll down past the explanation about not being able to change what's in the edit box.
- That said, I suggest that you stop looking at the complicated page of 4x4 type square, and start looking at a very ordinary category page like testwiki:Category:Command keys, because (a) it does not have a bunch of irrelevant stuff in it and (b) anyone can edit that cat page. WhatamIdoing (talk) 23:33, 20 January 2025 (UTC)
- Maybe I'm naive, but I think it must be easy to do the two things I'm suggesting. There is a piece of code somewhere that takes the content entered by a Wikipedian using `Edit' and creates the category page. It's just a case of modifying that code to add one word (`parent') and two linked words (`Category tree'). It must be similar to changing a style file in LaTeX or a CSS file in HTML.
- Again, maybe I'm naive, but it would seem to me appropriate to move this discussion to Village pump (proposals). Any objection? D.Wardle (talk) 21:07, 4 January 2025 (UTC)
- If @PrimeHunter is willing to make the change, then there's no need to move the discussion anywhere. WhatamIdoing (talk) 23:19, 4 January 2025 (UTC)
- We should still have an RFC before changing something for everyone, so a formal proposal sounds like a good idea. Otherwise it may be reverted on the opinion of one person. Graeme Bartlett (talk) 21:41, 22 January 2025 (UTC)
- Do you personally object? Or know anyone who objects? WhatamIdoing (talk) 03:45, 23 January 2025 (UTC)
Moving categories to the top of a page
@D.Wardle I looked at your original request and it reminded me that Commons has a gadget (optional user preference) to move the categories box to the tops of all pages. That gadget is at c:MediaWiki:Gadget-CategoryAboveAll.js, and I've found it quite useful when working with files there. It's not quite what you're asking for, but it feels like it might help and be quite an easy win?
I've tested a local version of it at User:Andrew Gray/common.js - it's the last section on that page, lines 22-30, and I've set it up so that it only triggers when you're looking at a category page. If you copy that bit to your own common.js file (User:D.Wardle/common.js) then it should, touch wood, also work for you. Andrew Gray (talk) 18:31, 23 January 2025 (UTC)
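For anyone curious about the mechanics, the heart of such a script is small. A minimal sketch of the approach (my own illustration under stated assumptions, not Andrew Gray's actual code) for a common.js file:
if (mw.config.get('wgNamespaceNumber') === 14) { // 14 is the Category: namespace
    var catlinks = document.getElementById('catlinks'); // the "Categories:" box
    var content = document.getElementById('mw-content-text'); // the page body
    if (catlinks && content) {
        content.parentNode.insertBefore(catlinks, content); // reinsert the box above the body
    }
}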
- Hi Andrew, thanks very much for the info but it doesn't quite address the point I'm making: If Wikipedia is perfectly designed, complete newcomers to the site should discover all the useful features rapidly and by accident (without having to read help pages or similar). At the moment, that's true for the category pages. (A newcomer starts with an article. At the end of the article is `Categories'. Curious, they click on it and discover the category pages.) From the category pages they rapidly discover subcategories. But they are unlikely to discover parent categories (the parent categories being relegated to a small, ambiguous heading at the end of the page). And they certainly won't discover the category tree tool (it being missing altogether). So, from my perspective, it's what newcomers see that needs to be changed, not what I see. D.Wardle (talk) 21:25, 23 January 2025 (UTC)
Implemeting "ChatBot Validation" for sentences of Wikipedia
Hi, I propose to define a "Validation process" using Chatbots (e.g. ChatGPT) in this way:
- The editor or an ordinary user presses a button named "Validate this Sentence"
- A query named "Is this sentence true or not? + Sentence" is sent to ChatGPT
- If the ChatGPT answer is true, then tick that sentence as valid, otherwise declare that the sentence needs to be validated manually by humans.
I think the implementation of this process is very fast and convenient. I really think that "ChatBot validation" is a very helpful capability for users to be sure about the validity of information of articles of Wikipedia. Thanks, Hooman Mallahzadeh (talk) 10:34, 6 January 2025 (UTC)
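To make the proposed flow concrete, here is a minimal sketch of the button handler (the queryChatbot helper is hypothetical, invented purely for illustration; no particular chatbot API is implied):
async function validateSentence(sentence) {
    // queryChatbot is a hypothetical helper that sends the query
    // "Is this sentence true or not?" plus the sentence to a chatbot service.
    var verdict = await queryChatbot('Is this sentence true or not? ' + sentence);
    if (verdict === 'true') {
        return 'valid'; // tick the sentence as valid
    }
    return 'needs manual validation'; // hand off to human review
}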
- While it would certainly be convenient, it would also be horribly inaccurate. The current generation of chatbots are prone to hallucinations and cannot be relied on for such basic facts as what the current year is, let alone anything more complicated. Thryduulf (talk) 10:48, 6 January 2025 (UTC)
- @Thryduulf The question is
Is it Wikipedia that is hallucinating, or ChatGPT?
- This type of validation (validation by ChatGPT) may be inaccurate for judging the correctness of Wikipedia, but when ChatGPT declares that "Wikipedia information is wrong!", a very important process named "Validate Manually by Humans" is activated. This second validation is the main application of this idea. That is, finding possibly wrong data on Wikipedia to be investigated more accurately by humans. Hooman Mallahzadeh (talk) 11:02, 6 January 2025 (UTC)
- The issue is, ChatGPT (or any other LLM/chatbot) might hallucinate in both directions, flagging false sentences as valid and correct sentences as needing validation. I don't see how this is an improvement compared to the current process of needing verification for all sentences that don't already have a source. Chaotic Enby (talk · contribs) 11:13, 6 January 2025 (UTC)
- If there were some meaningful correlation between what ChatGPT declares true (or false) and what is actually true (or false), then this might be useful. As there is not, this would just waste editor time. Thryduulf (talk) 11:15, 6 January 2025 (UTC)
- @Chaotic Enby@Thryduulf Although ChatGPT may give wrong answers, it is very powerful. To assess its power, we need to apply this research:
- Give ChatGPT a sample containing true and false sentences, but hide true answers
- Ask ChatGPT to assess the sentences
- Compare actual and ChatGPT answers
- Count the ratio of answers that are the same.
- I really propose that if this ratio is high, then we start to implement this "chatbot validation" idea. Hooman Mallahzadeh (talk) 11:24, 6 January 2025 (UTC)
- There are many examples of people doing this research, e.g. [16] ranks ChatGPT as accurate "88.7% of the time", but (a) I have no idea how reliable that source is, and (b) it explicitly comes with multiple caveats about how that's not a very meaningful figure. Even if we assume that it is 88.7% accurate at identifying what is and isn't factual across all content on Wikipedia, that's still not really very useful. In the real world it would be less accurate than that, because those accuracy figures include very simple factual questions that it is very good at ("What is the capital of Canada?" is the example given in the source) that we don't need to use ChatGPT to verify, because it's quicker and easier for a human to verify them. For more complex things, especially information that is not commonly found in its training data (heavily biased towards information in English easily accessible on the internet), where there would be the most benefit from automatic verification, the accuracy gets worse. Thryduulf (talk) 11:38, 6 January 2025 (UTC)
- Have you read, for example, the content section of OpenAI's Terms of Use? Sean.hoyland (talk) 10:53, 6 January 2025 (UTC)
- @Sean.hoyland If OpenAI does not consent to this application, we can use other chatbots that do. Nowadays, many chatbots are free to use. Hooman Mallahzadeh (talk) 11:04, 6 January 2025 (UTC)
- I'm sure they would be thrilled with this kind of application, but the terms of use explain why it is not fit for purpose. Sean.hoyland (talk) 11:17, 6 January 2025 (UTC)
- Factual questions are where LLMs like ChatGPT are weakest. Simple maths, for example. I just asked "Is pi larger than 3.14159265?" and got the wrong answer "no" with an explanation why the answer should be "yes":
- "No, π is not larger than 3.14159265. The value of π is approximately 3.14159265358979, which is slightly larger than 3.14159265. So, 3.14159265 is a rounded approximation of π, and π itself is just a tiny bit larger."
- Any sentence "validated by ChatGPT" should be considered unverified, just like any sentence not validated by ChatGPT. —Kusma (talk) 11:28, 6 January 2025 (UTC)
- I get a perfect answer to that question (from the subscription version of ChatGPT): "Yes. The value of π to more digits is approximately 3.141592653589793… which is slightly larger than 3.14159265. The difference is on the order of a few billionths." But you are correct; these tools are not ready for serious fact checking. There is another reason this proposal is not good: ChatGPT gets a lot of its knowledge from Wikipedia, and when it isn't from Wikipedia it can be from the same dubious sources that we would like to not use. One safer use I can see is detection of ungrammatical sentences. It seems to be good at that. Zerotalk 11:42, 6 January 2025 (UTC)
- It's a good example of the challenges of accuracy. Using a different prompt "Is the statement pi > 3.14159265 true or false?", I got "The statement 𝜋 > 3.14159265 is true. The value of π is approximately 3.14159265358979, which is greater than 3.14159265." So, whatever circuit is activated by the word 'larger' is doing something less than ideal, I guess. Either way, it seems to improve with scale, grounding via RAG or some other method and chain of thought reasoning. Baby steps. Sean.hoyland (talk) 11:51, 6 January 2025 (UTC)
- I do not think we should outsource to AI our ability to check whether a sentence is true and/or whether a source verifies a claim. This would create orders of magnitude more problems than it would solve... besides, as people point out above, facts are where chatbots are weakest. They're increasingly good at imitating tone and style and meter and writing nicely, but are often garbage at telling fact from fiction. Cremastra (u — c) 02:22, 7 January 2025 (UTC)
- Writing a script that would automatically give a "validation score" to every article—average probability of True vs. False across all sentences—would be helpful. (Even if it completely sucks, we can just ignore it, so there's no harm done.) Go ahead and do it if you know how! However, WMF's ML team is already very busy, so I don't think this will get done if nobody volunteers. – Closed Limelike Curves (talk) 04:41, 11 January 2025 (UTC)
Using ChatBots for reverting new edits by new users
Even though the previous idea may have issues, I really think that one factor in reverting new edits by new users could be a chatbot's verification of the edit returning false. If the accuracy is near 88.7%, we can use that to verify new edits, possibly by new users, and find vandalism conveniently. Hooman Mallahzadeh (talk) 13:48, 6 January 2025 (UTC)
- Even if we assume the accuracy to be near 88.7%, I would not support having a chatbot review edits. Many editors do a lot of editing, and getting 1 edit out of every 10 reverted due to an error would be annoying and demotivating. The bot User:Cluebot NG already automatically reverts obvious vandalism with a 99%+ success rate. Ca talk to me! 14:11, 6 January 2025 (UTC)
- @Ca Can User:Cluebot NG check such semantically wrong sentence?
Steven Paul Jobs was an American engineer.
- Instead of an inventor, this sentence wrongly declares that he was an engineer. Can User:Cluebot NG detect this sentence automatically as a wrong sentence?
- So I propose to rewrite User:Cluebot NG in a way that it uses Chatbots, somehow, to semantically check the new edits, and tag semantically wrong edits like the above sentence to "invalid by chatbot" for other users to correct that. Hooman Mallahzadeh (talk) 14:22, 6 January 2025 (UTC)
Can Cluebot detect this sentence automatically as a wrong sentence?
No. It can't. Cluebot isn't looking through sources. It's an anti-vandalism bot. You're welcome to bring this up with those that maintain Cluebot; although I don't think it'll work out, because that's way beyond the scope of what Cluebot does. SmittenGalaxy | talk! 19:46, 6 January 2025 (UTC)
- I think you, Hooman Mallahzadeh, are too enamoured with the wilder claims of AI and chatbots, both from their supporters and the naysayers. They are simply not as good as humans at spotting vandalism yet; at least the free ones are not. Phil Bridger (talk) 20:46, 6 January 2025 (UTC)
- The number of false positives would be too high. Again, this would create more work for humans. Let's not fall to AI hype. Cremastra (u — c) 02:23, 7 January 2025 (UTC)
- Sorry, this would be a terrible idea. The false positives would just be too great; there is enough WP:BITING of new editors and we don't need LLM hallucinations causing more. -- LCU ActivelyDisinterested «@» °∆t° 16:26, 7 January 2025 (UTC)
- Dear @ActivelyDisinterested, I didn't propose to revert all edits that ChatBot detect as invalid. My proposal says that:
Use ChatBot to increase accuracy of User:Cluebot NG.
- The User:Cluebot NG does not check any semantics for sentences. These semantics can only be checked by Large Language Models like ChatGPT. Please note that any Wikipedia sentence can be "semantically wrong", just as it can be syntactically wrong.
- Because making "Large language models" for semantic checking is very time-consuming and expensive, we can use them online via service oriented techniques. Hooman Mallahzadeh (talk) 17:18, 7 January 2025 (UTC)
- But LLMs are not good at checking the accuracy of information, so Cluebot NG would not be more accurate, and in being less accurate would behave in a more BITEY manner to new editors. -- LCU ActivelyDisinterested «@» °∆t° 17:24, 7 January 2025 (UTC)
- Maybe ChatGPT should add a capability for "validation of sentences", where its output is only one word: True/False/I don't know, specifically for the purpose of validation.
- I don't know whether ChatGPT has this capability or not. But if it lacks one, it could be implemented easily. Hooman Mallahzadeh (talk) 17:33, 7 January 2025 (UTC)
- Validation is not a binary thing that an AI would be able to do. It's a lot more complicated than you make it sound (as it requires interpretation of sources - something an AI is incapable of actually doing), and may require access to things an AI would never be able to touch (such as offline sources). —Jéské Couriano v^_^v threads critiques 17:37, 7 January 2025 (UTC)
- @Hooman Mallahzadeh: I refer you to the case of Varghese v. China South Airlines, which earned the lawyers citing it a benchslap. —Jéské Couriano v^_^v threads critiques 17:30, 7 January 2025 (UTC)
- @Jéské Couriano Thanks, I will read the article. Hooman Mallahzadeh (talk) 17:34, 7 January 2025 (UTC)
- (edit conflict × 4) For Wikipedia's purposes, accuracy is determined by whether it matches what reliable sources say. For any given statement there are multiple possible states:
- Correct and supported by one or more reliable sources at the end of the statement
- Correct and supported by one or more reliable sources elsewhere on the page (e.g. the end of paragraph)
- Correct and self-supporting (e.g. book titles and authors)
- Correct but not supported by a reliable source
- Correct but supported by a questionable or unreliable source
- Correct according to some sources (cited or otherwise) but not others (cited or otherwise)
- Correct but not supported by the cited source
- Incorrect and not associated with a source
- Incorrect and contradicted by the source cited
- Incorrect but neither supported nor contradicted by the cited source
- Neither correct nor incorrect (e.g. it's a matter of opinion or unproven), all possible options for sourcing
- Previously correct, and supported by contemporary reliable sources (cited or otherwise), but now outdated (e.g. superseded records, outdated scientific theories, early reports about breaking news stories)
- Both correct and incorrect, depending on context or circumstance (with all possible citation options)
- Previously incorrect, and stated as such in contemporary sources, but now correct (e.g. 2021 sources stating Donald Trump as president of the US)
- Correct reporting of someone's incorrect statements (cited or otherwise).
- Predictions that turned out to be incorrect, reported as fact (possibly misleadingly or unclearly) at the time in contemporary reliable sources.
- And probably others I've failed to think of. LLMs simply cannot correctly determine all of these, especially as sources may be in different languages and/or not machine readable. Thryduulf (talk) 17:44, 7 January 2025 (UTC)
- But LLMs are not good at checking the accuracy of information, so Cluebot NG would not be more accurate, and in being less accurate would behave in a more BITEY manner to new editors. -- LCU ActivelyDisinterested «@» °∆t° 17:24, 7 January 2025 (UTC)
- I believe someone else had a working implementation of a script that would verify whether a reference supported a claim using LLMs - I think I saw it on one of the Village Pumps a while back. They eventually abandoned it because it wasn't reliable enough, if I remember correctly. — Qwerfjkltalk 16:46, 20 January 2025 (UTC)
- It probably struggles to understand meaning. On the other hand, I reckon you could get a working implementation to look for copyvio. CMD (talk) 18:02, 20 January 2025 (UTC)
- It could be great to have an LLM-supported system to detect potential close paraphrasing. —Kusma (talk) 18:06, 20 January 2025 (UTC)
- Even professional-grade plagiarism detectors are poor at that, generating both false positives and false negatives. That's fine in the environment where they are used with full understanding of the system's limitations and it is used only as one piece of information among multiple sources by those familiar with the topic area. Very little of that is true in the way it would be used on Wikipedia. Thryduulf (talk) 18:49, 20 January 2025 (UTC)
AfDs taking too long
I've noticed that a lot of AfDs get relisted because of minimal participation, sometimes more than once. This means that in the instance where the article does get deleted in the end, it takes too long, and in the instance where it doesn't, there's a massive AfD banner at the top for two, sometimes three or more weeks. What could be done to tackle this? How about some kind of QPQ where any editor that nominates any article for deletion is strongly encouraged to participate in an unrelated AfD discussion? -- D'n'B-📞 -- 06:59, 7 January 2025 (UTC)
- I feel WP:RUSHDELETE is appropriate here. I don't understand why the article banner is a problem? Am I missing something? Knitsey (talk) 07:41, 7 January 2025 (UTC)
- The banners signal to a reader that there's something wrong with a page - in the case of an AfD there may well not be. -- D'n'B-📞 -- 06:30, 8 January 2025 (UTC)
- There's often a concern, and all relisted nominations seem to have reason to debate that concern, whether because someone registered an objection or the article was already nominated in the past. Aaron Liu (talk) 12:25, 8 January 2025 (UTC)
- We already have WP:NOQUORUM which says that if an AfD nomination has minimal participation and meets the criteria for WP:PROD, then the closing admin should treat it like an expired PROD and do a soft deletion. I remember when this rule was first added, admins did try to respect it. I haven't been looking at AfD much lately—have we reverted back to relisting discussions? Mz7 (talk) 08:10, 7 January 2025 (UTC)
- From what I've seen when I was active there in November, ProD-like closures based on minimal participation were quite common. Aaron Liu (talk) 22:47, 7 January 2025 (UTC)
- Based on recent samples, I think somewhere over a quarter of AfD listings are relistings. (6 Jan - 37 / 144, 5 Jan - 35 / 83, 4 Jan - 36 / 111, 3 Jan - 27 / 108). -- D'n'B-📞 -- 06:43, 8 January 2025 (UTC)
- Those relisted have more than minimal participation in the soft deletion sense. Aaron Liu (talk) 12:22, 8 January 2025 (UTC)
- so more than allows for soft deletion but not enough to reach consensus then. -- D'n'B-📞 -- 02:53, 11 January 2025 (UTC)
- yes. IMO that means they have reason for discussion and debate. Aaron Liu (talk) 23:31, 11 January 2025 (UTC)
- Okay, and I'm talking about encouraging that discussion to actually happen rather than fizzle out - so we're on the same page here? -- D'n'B-📞 -- 08:58, 12 January 2025 (UTC)
- And that's why there's a banner on the article. Aaron Liu (talk) 16:35, 12 January 2025 (UTC)
- In my experience relisting often does lead to more comments on the AFD, in practice. So the system works, mostly -- as long as the nominator doesn't have to stick around for the whole time, I don't think there's a problem. And if the page is well-frequented enough for the banner to be a problem, the AFD will probably be relatively well-attended. Mrfoogles (talk) 20:40, 23 January 2025 (UTC)
Is it possible to start the process of sunsetting the "T:" pseudo-namespace?
In the sense that, with the creation of the [[TM:]] alias in early 2024 from T363757, I can't think of a single reason why a new "T:" space redirect would ever need to exist.
Back in the day, well, "T:" has always been controversial, even from 2010 and the several RfCs. There was Wikipedia:Redirects for discussion/Log/2013 November 18#T:WPTECH and multiple RfCs since regarding pseudo-namespaces. And per WP:Shortcut#Pseudo-namespaces, the "T:" space is listed as "for limited uses only", but even that was added to the info page in that location a decade or so ago.
Nevertheless, even from the 2014 RfC at Wikipedia:Village pump (policy)/Archive 112#RFC: On the controversy of the pseudo-namespace shortcuts, there was consensus that "new "T:" redirects should be strongly discouraged if not prohibited in all but exceptional cases". It's been over a decade now and we still get a potluck assortment of new T: titles every year.
The difference is though, now we have the TM: alias. Just as it makes little sense to foster a "W:" shortcut for "WP:" titles, it really does not make sense to keep "T:" around when "TM:" is just one character more. H for Help and P for Portal don't have that luxury of an alias at this time, but templates do. There's hardly anything left on Special:PrefixIndex for T: titles. And I don't think we should necessarily delete everything at once. But it might be nice to make a hard rule that we don't need any more T: titles, especially when TM: is the vastly preferable option at this time, from my POV.
I would suggest this as a proposal, but wanted to get feedback to see what else might need to happen in order to start sunsetting? Many of these have little to no links, but a lot of them do. Should these be replaced? Would it be worth the editing cost? I think the payoff is phenomenal - allowing easier navigation to actual articles that start with "T:", of which there are several. Utopes (talk / cont) 16:00, 9 January 2025 (UTC)
- I wouldn't be strongly opposed to this, but I'd suggest keeping the most-used ones, like T:CENT and T:DYK, for at least a few more years. Cremastra (u — c) 23:33, 9 January 2025 (UTC)
- For sure. As it happens, T:CENT only has 112 incoming links which almost entirely consist of archives, and it seems like there could be a bot (or a person, honestly) who could run through and fix the links to TM:CENT instead. Because this would be a sunset, I predict that really the only two functions that might actually want to hold onto these for a bit would be DYK and ITN. But even then, I don't necessarily want to delete every single T: title we have right now, but maybe slowly over time we could get to that point. In the interim, anything that T: does, TM: does better in a less harmful way, as TM: works for 100% of templates while T: works for 0%. Creating a note in WP:Shortcut#Pseudo-namespaces that "Newly created T: titles from the years 2025 and later are no longer permissible / are against consensus" could be a start. If it's indeed true that that is the case, of course, I have no idea. Hence a proposal to see where people are at re: T: titles. Utopes (talk / cont) 00:54, 10 January 2025 (UTC)
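(For illustration: assuming the TM: alias resolves to the same target, fixing such a link is a matter of changing [[T:CENT]] to [[TM:CENT]] in each archive.)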
- I would support at least preventing the creation of new ones, so that the burden doesn't keep increasing and it is made clear that TM: is the recommended one. Chaotic Enby (talk · contribs) 01:16, 10 January 2025 (UTC)
- Some might be used as type-in shortcuts (I search for CAT:CSD almost every day) but page view statistics should tell you how common that is. —Kusma (talk) 18:10, 20 January 2025 (UTC)
- Regarding DYK, it currently has a few different T: shortcuts for the preps and queues as well. A sunset might have to exclude potential fiddling in this area. CMD (talk) 19:05, 20 January 2025 (UTC)
- If we turned the pages into soft redirects, that would discourage further use. WhatamIdoing (talk) 04:37, 21 January 2025 (UTC)
- For sure. As it happens, T:CENT only has 112 incoming links which almost entirely consist of archives, and it seems like there could be a bot (or a person, honestly) who could run through and fix the links to TM:CENT instead. Because this would be a sunset, I predict that really the only two functions that might actually want to hold onto these for a bit would be DYK and ITN. But even then, I don't necessarily want to delete every single T: title we have right now, but maybe slowly over time we could get to that point. In the interim, anything that T: does, TM: does better in a less harmful way, as TM: works for 100% of templates while T: works for 0%. Creating a note in WP:Shortcut#Pseudo-namespaces that "Newly created T: titles from the years 2025 and later are no longer permissible / are against consensus" could be a start. If it's indeed true that that is the case, of course, I have no idea. Hence a proposal to see where people are at re: T: titles. Utopes (talk / cont) 00:54, 10 January 2025 (UTC)
Now at Wikipedia:Village pump (proposals)#Proposal to prohibit the creation of new "T:" pseudo-namespace redirects without prior consensus. CMD (talk) 15:50, 21 January 2025 (UTC)
Reworking WP:NBAND
Per this discussion at WP:Village pump (proposals), there was nearly unanimous consensus not to constrain WP:BAND to WP:GNG requirements, but there did seem to be a strong consensus to revisit criterion 5, and possibly some consensus to revisit criterion 6. I've got an updated draft at Wikipedia:Band notability proposal where I tried to reflect this consensus. I basically just re-worked criterion 5 a bit. It now reads: # Has released two or more albums on a major record label, or one of the more important indie labels[note 5], before 2010.
The note is: "the importance of the indie label should be demonstrable from reliable independent coverage indicating that label's importance". The exact cut-off date was debated, but it was around 2006 to 2010. I went for 2010, as that seems to be when streaming really took off. I'd like some input to see if there's any modifications or suggestions before I put this forward at Village pump (proposals). Thank you!--3family6 (Talk to me | See what I have done) 13:24, 10 January 2025 (UTC)
- Remove 5 and 6 entirely. Graywalls (talk) 02:21, 11 January 2025 (UTC)
- The problem with removing 5 entirely is that it would affect older groups that might not yet have articles. That's why the cut-off date of around 2010 was proposed in the previous discussion.--3family6 (Talk to me | See what I have done) 23:12, 12 January 2025 (UTC)
- Remove #6 entirely. Why? I Ask (talk) 03:53, 11 January 2025 (UTC)
Names of command-line tools in monospace
Websites such as the Arch Linux wiki frequently use inline <code> tags to indicate that text is either entered into or read from the command line. I did some searches of the MOS and FAQ here on Wikipedia, but I was unable to find any policy or guideline formalizing the use of monospaced fonts for command line input and output. Does anyone else actually care about this, and if so does anyone think this should be formalized? Thanks for the input, /home/gracen/ (they/them) 18:50, 10 January 2025 (UTC)
- I feel I should also mention the issue of using <code> tags for bold page names (cf. grep and fdisk). /home/gracen/ (they/them) 18:54, 10 January 2025 (UTC)
- If you WP:Boldly do something and nobody objects, that's consensus. That said, we actually do ask for such markup at MOS:CODE. Aaron Liu (talk) 19:04, 10 January 2025 (UTC)
- I'm aware of both of these, though I appreciate the consideration. I'm more asking about things that are in a gray area between "code" and "natural language" and whether this gray area should be standardized so we have more consistent style.
- I'll elaborate more if necessary once I get back to a computer; I dislike writing longer messages on mobile. /home/gracen/ (they/them) 19:10, 10 January 2025 (UTC)
- FWIW I use <kbd> in discussions when documenting my search term, e.g. "bright green" cake -wikipedia, I'm not sure what the direct relevance of that is to mainspace but is it the sort of grey area you are thinking of? Thryduulf (talk) 19:16, 10 January 2025 (UTC)
- Yup, that's pretty much what I was thinking of (also, thanks for the introduction to <kbd>; I think I prefer this for inline stuff because it doesn't have the annoying gray box)! An example that I just thought of could be error messages. For example, would an inline 404 Not Found be preferred over 404 Not Found? (Of course, you wouldn't be seeing this much in a CLI, but I feel 404's the most recognizable error message.) I feel this should be standardized. /home/gracen/ (they/them) 19:22, 10 January 2025 (UTC)
- For that one you might wanna consider using <samp> instead, since kbd is semantically "keyboard input". I don't think there are any guidelines about what you mentioned, so probably just Bold it in until someone hates it. Aaron Liu (talk) 01:35, 11 January 2025 (UTC)
- Alright, thanks! I'll revive this discussion if/when someone takes issue with this. /home/gracen/ (they/them) 15:58, 13 January 2025 (UTC)
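(For illustration, an assumed wikitext convention along these lines: typed input marked up as <kbd>grep -r "pattern" .</kbd>, program output such as an error message as <samp>404 Not Found</samp>, and <code>...</code> as the general-purpose fallback for code fragments.)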
- Something like <syntaxhighlight lang="shell" inline>ls -alF (with a closing </syntaxhighlight> tag) provides both quoting behaviour and (theoretically) syntax highlighting, so it's what I would prefer, but of course it's more typing. (For shell, there isn't much syntax highlighting that could happen anyway, and I can't seem to get any to appear.) Otherwise, <kbd>
is appropriate markup to use for text entered as input. isaacl (talk) 22:41, 10 January 2025 (UTC) <kbd>...</kbd>
or {{kbd}}? <pre>...</pre> or {{pre}}? <samp>...</samp>
or {{samp}}? -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:37, 13 January 2025 (UTC)- Does it matter? Isn't this just a WP:COSMETIC difference? /home/gracen/ (they/them) 16:49, 13 January 2025 (UTC)
- Apparently not quite, the template:kbd indicates that it applies some styling to it, namely a faint grey background [...] and slight CSS letter-spacing to suggest individually entered characters. The output of the others also differs - It seems {{pre}} really doesn't play nicely with bulleted lists, I've not looked into why. I've also not looked into why the templates apply the styling they do. Thryduulf (talk) 14:33, 14 January 2025 (UTC)
Better methods than IP blocks and rangeblocks for completely stopping rampant recurring vandals
So, I intend for this thread to be about the discussion of various theoretical methods other than IP blocks / rangeblocks that could be used to mitigate a persistent vandal highly effectively while causing little to no collateral damage.
Some background
Wikipedia was founded in 2001, a time when the great majority of residential IP addresses were static, due to the much smaller number of internet users at that time. IP blocks probably made a lot of sense then - you couldn't just reboot your modem to obtain a new IP address and keep editing, and cell phones pretty much had no usable web browsing capability at the time. Today, the only type of tool used to stop anonymous vandals and disruptors, despite dynamic IP addresses and shared IPs being very common, is still the same old IP address blocks and range blocks. While IP blocks are effective at stopping the "casual" / "one-off" type of vandals from editing again, when it comes to the more dedicated disruptors and LTAs, IP blocks simply don't seem to hinder them at all, due to the highly dynamic nature of IP addresses. Okay, but range blocks exist, right? Well, unfortunately not all IP address allotment sizes are the same, and it varies a lot from ISP to ISP - some ISPs just seem to put literally all their customers on one gigantic (i.e. /16 or bigger for IPv4, /32 or bigger for IPv6) subdivision, making it straight up impossible to put a complete stop to the LTA vandal without also stopping all those thousands and thousands of innocent other people from being able to edit.
I've always had these thoughts in my mind, about what the Wikimedia team could potentially do / implement to more accurately yet effectively put a complete halt to long-term abusers. But I felt like now's the time we really could use some better method to stop LTAs, as there are just sooooo many of them today, and soooo much admin time/effort is being spent trying to stop them only for them to come back again and again because pretty much the only way to stop them is to literally block the entire ISP from editing Wikipedia.
The first thing that might come to one's mind, and probably the most controversial method too, is disabling anonymous editing entirely and making it so only registered editors can edit English Wikipedia. Someone pointed out to me before that the Portuguese Wikipedia is a registration-only wiki. I tried it out for myself, and indeed when you click the edit button while not logged in, you are brought to an account login page. I'm guessing ENwiki will never become like this because it would eliminate a large and thriving culture of "casual" type of editors who don't want to register an account and just simply want to fix a typo, update a table's data or add a small sentence. It's probably not 100% effective either, as a registered-only wiki still wouldn't stop someone from creating a whole bunch of throwaway accounts to keep vandalising, and account creation blocks on IP addresses could still be dodged by, you know, the modem power plug dance or good ol' proxies/VPNs.
I've noticed some other language wikis like the German Wikipedia have "pending changes" type protection pretty much enabled on every single page. I imagine this isn't going to work on the English Wikipedia because of the comparatively high volume of edits from anonymous editors compared to DEwiki, as it would overload the pending changes review queue and there just will never be enough active reviewers to keep up with the volume of edits.
Now here are some of my original thoughts which I don't think I've seen anyone discuss here on Wikipedia before. The first of which, is hardware ID (HWID) bans or "device bans". The reason why popular free-to-play video games like League of Legends, Overwatch 2, Counter-Strike 2 etc aren't overrun with non-stop cheaters and abusers despite them being free-to-play is because they employ an anti-cheat and abuse system that will ban the serial numbers of the computer, rather than just simply banning the user or their IP address. Now, I have heard of HWID spoofing before, but cheating isn't rampant in these games anyway so I guess they are effective in some form. Besides replacing hardware, one could theoretically use a virtual machine to evade the HWID ban, but virtual machines don't provide the performance, graphics acceleration and special features needed to get a modern multiplayer video game to work. However though, I could see virtual machines as being a rather big weakness for Wikipedia HWID bans, as a web browser doesn't need a dedicated powerful video card and any of those special features to work; web browsers easily run in virtualised environments. But I guess not a great deal of LTAs are technologically competent enough to do that, and even if they did, spinning up a new VM is significantly slower than switching countries in a VPN.
The second, and probably the craziest one, is employing some form of mandatory personal ID system. Where, even if you're not going to sign up and only edit anonymously, you will be forced to enter a social security number or passport number or whatever ID number that is completely unique to you, to be able to edit. In South Korea, some gaming companies like Blizzard make you enter a SSN when signing up for an account, which makes it virtually impossible for a person to go to an internet cafe ("PC bang") and make a whole bunch of throwaway accounts and jump from computer to computer when an account/device becomes banned to keep on cheating (see PC bang § Industry impact). One could theoretically get the IDs of family members and friends when they become "ID banned", but after all there are only going to be so few other people's IDs they will be able to obtain, certainly nowhere near the order of magnitude of the number of available IP addresses on a large IP subnet or VPN. I'm guessing this method isn't going to be feasible for English Wikipedia either, as it completely goes against the simple, "open" and "anonymous" nature of Wikipedia, where not only can you edit anonymously without entering any personal details, but even when signing up for an account you don't even have to enter an email address, only just a password.
A third theoretical method: what if the customer ID numbers of ISPs were visible to Wikimedia, so that Wikimedia could ban that ISP customer, making them completely unable to edit Wikipedia even if they jump to a different IP address or subnet on that ISP? Or maybe how about the reverse, where the ISP themselves ban the customer from being able to access Wikipedia after enough abuse? Perhaps ISPs need to wake up and implement such a site-level blocking policy.
Here's a related "side question": how come other popular online services like Discord, Facebook, Reddit, etc aren't overly infested with people who spam, attack, or otherwise make malicious posts on the site everyday? Could Wikimedia implement whatever methods these services are using to stop potential "long-term abusers"? — AP 499D25 (talk) 13:29, 12 January 2025 (UTC)
- I just thought of yet another theoretical solution: AI has gotten good enough to be able to write stories and poems, analyse a 1000 page long book, make songs, realistic pictures, and more. Wikipedia already uses AI (albeit a rather primitive and simple one) in the famous anti-vandal bot User:ClueBot NG. What if we deploy an edit filter based on the latest and greatest AI model, to filter out edits based on past vandalism/disruption patterns? — AP 499D25 (talk) 13:37, 12 January 2025 (UTC)
- I'll preface this by saying that I have quite a few problems with this idea (although I may be biased because I'm strongly opposed to the direction that modern AI is going); but I'd like to hear why and how you think this would work in more detail. For instance, would the AI filter just block edits outright? Would they be flagged like with WP:ORES? What mechanisms would the hypothetical AI use to detect LTA? How would we reduce false positives? And so on. Thanks, /home/gracen/ (they/them) 17:24, 13 January 2025 (UTC)
- The AI idea I have in mind is a rather "mild" form of system, where it only acts on edits matching past patterns of disruption. Take, for example, MAB's posts. They are quite easily recognisable from a distance, even with the source-code obscuring that makes it impossible for traditional edit filters to detect the edits. Maybe an AI could perform OCR on that text to then filter it out?
- The AI would not filter out new types of vandalism, or disruptive edits that it isn't "familiar" with. There would be an "input text file" where admins can add examples of LTA disruption, and the AI would then watch for any edits that closely resemble those examples. It would not look for, or revert, edits that don't closely resemble those samples. That way I think false positives will be minimised a lot, and of course there shall be a system for reporting false positives, much like how there exists WP:EFFP. — AP 499D25 (talk) 22:44, 13 January 2025 (UTC)
- Ah, thanks! I'm immediately hesitant whenever I hear the word "AI" because of the actions of corporations like OpenAI, among others. However, given what you've just said, I actually think this might be an interesting idea to pursue. I'm relatively new to WP and I've never looked at WP:SPI, so I'd rather leave this to more experienced editors to discuss, but this does seem like a good and ethical application of neural networks and is within their capabilities. /home/gracen/ (they/them) 16:16, 14 January 2025 (UTC)
The second, and probably the craziest one, is employing some form of mandatory personal ID system, where even if you're not going to sign up and only edit anonymously, you would be forced to enter a social security number, passport number, or whatever ID number is completely unique to you in order to edit.
This means that editors will have to give up a large amount of privacy, and the vast majority of people casually editing Wikipedia aren't ready to give their passport number in order to do so. Plus, editors at risk might be afraid of their ID numbers ending up in the wrong hands, which is much more worrying than "just" their IP address.
Here's a related "side question": how come other popular online services like Discord, Facebook, Reddit, etc. aren't overly infested with people who spam, attack, or otherwise make malicious posts on the site every day?
They are, it's just that the issue is more visible on Wikipedia as the content is easy to find for all readers, but it doesn't mean platforms like Discord or Reddit aren't full of bad actors too. Chaotic Enby (talk · contribs) 13:38, 12 January 2025 (UTC)
- Portuguese Wikipedia is not a registration-only wiki. They require registration for the mainspace, but not for anything else. See RecentChanges there. (I don't think they have a system similar to our Wikipedia:Edit requests. Instead, you post a request at w:pt:Wikipédia:Pedidos/Páginas protegidas, which is a type of noticeboard.) I'm concerned that restricting newbies may be killing their community. See the editor trends for the German-language Wikipedia; that's not something we really want to replicate. Since editors are not immortal, every community has to get its next generation from somewhere. We are getting fewer new accounts making their first edit each year. The number of editors who make 100+ edits per year is still pretty stable (around 20K), but the number of folks who make a first edit is down by about 30% compared to a decade ago.
- WMF Legal will reject any sort of privacy invasion similar to requiring a real-world identity check for a person. A HWID ban might be legally feasible (i.e., I've never heard them say that it's already been considered and rejected). It would require amending the Privacy Policy, but that happens every now and again anyway, so that's not impossible. However, I understand that it's not very effective in practice (outside of proprietary systems, which is not what we're dealing with), and the whole project involves a significant tradeoff with privacy: Everything that's possible to track a Wikipedia vandal is something that's possible to track you for advertising purposes, or that could be subpoenaed for legal purposes. Writing a Wikipedia article (in the mainspace, to describe what it is and how it works) about that subject, or updating device fingerprint, might actually be the most useful thing you could do, if you thought that was worth pursuing. If a proposal is made along these lines, then the first thing people will do is read the Wikipedia article to find out what it says.
- I understand that when Wikipedia was in its early days, a few ISPs were willing to track down abusive customers on occasion. My impression now is that basically none of them are willing to spend any staff time/expense doing this. We can e-mail their abuse@ addresses (they should all have one), but they are unlikely to do anything. A publicly visible approach on social media might work in a few cases ("Hey, @Name-of-ISP, one of your customers keeps vandalizing #Wikipedia. See <link to WP:AIV>. Why don't you stop them?"). However, if the LTA is using a VPN or similar system, then the ISP we claim they're using might be the wrong one anyway. WhatamIdoing (talk) 03:58, 13 January 2025 (UTC)
- I don't know exactly what is meant by hardware ID (something like [17]?), but generally speaking, most things that come under that heading require you to be using a native app and not a web browser. Web Environment Integrity is a possible exception, but it was abandoned. Bawolff (talk) 00:13, 14 January 2025 (UTC)
- I was thinking that it might be something like a MAC address (for which we had MAC spoofing). WhatamIdoing (talk) 08:00, 21 January 2025 (UTC)
Page for ABBA's "I Have a Dream" links to the wrong year in the UK Charts
I don't know if this is the correct place to post this or not; I am only doing so because I am not sure how to fix it myself. EmDavis158 (talk) 02:57, 13 January 2025 (UTC)
- @EmDavis158, is this about I Have a Dream (song)? Which bit exactly in there? WhatamIdoing (talk) 04:04, 13 January 2025 (UTC)
- Yeah, the citation link for the UK Charts links to December 1969 and not 1979. EmDavis158 (talk) 05:02, 13 January 2025 (UTC)
- It looks like the citation is built into Template:Single chart, so let's get some help from people who are familiar with that template. Dxneo or Muhandes, are either of you around? I think the goal is to have this link to https://www.officialcharts.com/charts/singles-chart/19791223/7501/ WhatamIdoing (talk) 06:38, 13 January 2025 (UTC)
- I might have fixed it (diff). It seems the UK chart functionality requires YYYYMMDD date formatting. Sean.hoyland (talk) 07:58, 13 January 2025 (UTC)
- Oh, Sean beat me to it. Like they mentioned above, the problem was |date=: you cannot use "23 December 1979" for the date; next time use yyyymmdd, thank you. dxneo (talk) 08:02, 13 January 2025 (UTC)
- It's alright to find random places to help, though the usual forums for this are Wikipedia:Village pump (technical) for technical help or Wikipedia:Help desk. Aaron Liu (talk) 12:35, 14 January 2025 (UTC)
Give patrollers the suppressredirect right?
As part of New Page Patrol, a lot of articles are draftified, which is done by moving them to the Draft: or User: namespace. The problem is that without page mover rights, patrollers are forced to leave redirects behind, which are always deleted under speedy deletion criterion R2. Giving patrollers the suppressredirect right would make the process easier and reduce workload for admins. What do you think? '''[[User:CanonNi]]''' (talk • contribs) 11:02, 13 January 2025 (UTC)
- Draftifying is happening far too much. But the idea has merit, as then the last log entry will say the page was moved, rather than a redirect deleted. Graeme Bartlett (talk) 11:11, 13 January 2025 (UTC)
- Note: This has been proposed before. See Wikipedia:Village pump (proposals)/Archive 203 § Give NPR additional rights? JJPMaster (she/they) 14:55, 13 January 2025 (UTC)
- The other option would be to not have it automatically given, but to make it easy to grant to new page reviewers frequently doing draftifications, and encourage them to apply. Chaotic Enby (talk · contribs) 15:36, 13 January 2025 (UTC)
- I don't think this is a good idea. Suppressing the redirect right away (whether you're an admin or not) makes it harder for people to find the page they were editing. WhatamIdoing (talk) 18:52, 13 January 2025 (UTC)
- Opening up the page will show the log entry that the page was moved (allowing people to easily find it). Current policy does not place a time limit on when to delete pages that qualify for WP:R2 (beyond the standard wait an hour before draftifying). Once that happens, it's nominated for speedy deletion if the patroller isn't a page mover or an admin. R2s are usually dealt with immediately, so it's not like forcing people to nominate them for speedy deletion is going to accomplish much other than make their workflow slightly longer. Clovermoss🍀 (talk) 23:18, 17 January 2025 (UTC)
- This is de facto already the case. It's quite easy for an NPR to become a page mover on those grounds alone. JJPMaster (she/they) 19:16, 13 January 2025 (UTC)
- Reluctantly oppose not per WhatamIdoing but because the suppressredirect right has too much ancillary power for me to be comfortable bundling it in like this. * Pppery * it has begun... 18:59, 13 January 2025 (UTC)
- I also oppose bundling it with anything else beyond page mover, per both Pppery and WAID. I'm also minded to agree with Graeme Bartlett that draftifying is happening too often (but I realise that it's been a while since I looked at this in detail). Nobody should be granted the suppressredirect right without it being clear they understand the policy surrounding when redirects should and should not be suppressed specifically. Thryduulf (talk) 14:21, 14 January 2025 (UTC)
- I agree with JJPMaster that NPPers who qualify for the right don't have much trouble gaining it. I think each case should be examined individually because draftifying on a frequent basis isn't required to be a new page patroller. User right requests also provide a chance to double-check that such draftifications are actually being done correctly. Clovermoss🍀 (talk) 23:25, 17 January 2025 (UTC)
Using a Tabber for infoboxes with multiple subjects
There are many articles that cover closely related subjects, such as IPhone 16 Pro which covers both the Pro and Pro Max models, Nintendo Switch which covers the original, OLED, and Lite models, and Lockheed Martin F-35 Lightning II which covers the A, B, C, and I variants. Most of these articles use a single infobox to display specifications and information about all of the covered subjects, leading to clutter and lots of parentheticals.
I propose that a tabber, like Tabber Neue, be used to instead create distinct infobox tabs for each subject. This would allow many benefits, such as clearly separating different specifications, providing more room for unique photos of each subject, and reducing visual clutter. An example of good use of tabs is one of my personal favorite wikis, https://oldschool.runescape.wiki, which uses tabs effectively to organize the many variants of monsters, NPCs, and items. A great example is the entry for Guard, a very common NPC with many variants. It even uses nested tabs to show both the spawn location grouped by city, and the individual variants within each city. While this is an extreme example in terms of the raw number of subjects, it provides a good look at how similar subjects can be effectively organized using tabs. Using Wikipedia's system instead, it would be substantially more cluttered, with parentheticals such as: Examine: "He tries to keep order around here" (Edgeville 1, Edgeville 2, Falador (sword) 1...)
If you tried to save space using citations, it becomes very opaque: Examine: "He tries to keep order around here" [1][2][7][22]...
Overall I think this would make infoboxes more easily readable and engaging. It encourages "perusing" by clicking or tapping through the tabs, as opposed to trying to figure out what applies where. DeklinCaban (talk) 18:42, 16 January 2025 (UTC)
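To make the proposal concrete, here is a rough sketch of what the wikitext could look like, using the tab syntax from the TabberNeue documentation (the extension is not installed on Wikipedia, and the infobox parameters below are purely illustrative):
<tabber>
iPhone 16 Pro=
{{Infobox mobile phone
| name    = iPhone 16 Pro
| display = 6.3 in OLED
}}
|-|
iPhone 16 Pro Max=
{{Infobox mobile phone
| name    = iPhone 16 Pro Max
| display = 6.9 in OLED
}}
</tabber>
Each tab would hold its own infobox, so the Pro and Pro Max specifications never have to share one template full of parentheticals.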
- That would be an interesting idea! To go back to your iPhone 16 Pro example, a lot of information gets repeated in both tabs – maybe there could be a way to have it so that it only has to be added to the article in one place (even if shown in both tabs) to make them easier to keep in sync? Chaotic Enby (talk · contribs) 18:46, 16 January 2025 (UTC)
- If it can print and display without JS effectively. From my testing under these environments, Tabber(Neue) makes these awkward line/paragraph-breaks that don't display the header at all. $wgTabberNeueUseCodex may be promising, but at least with the examples at wmdoc:codex/latest/components/demos/tabs.html, it's even worse: the tabs don't expand for the printing view at all, and the info under the other tabs will just be inaccessible on paper. Aaron Liu (talk) 20:21, 16 January 2025 (UTC)
- A couple of points at first blush: first, having a tabbed infobox seems like a usability nightmare. Secondly, it seems to be doing an end run around the overarching problem, which is that the infobox for iPhone 16 Pro is terrible. Software and tech articles are often like this (bad), where they try to cram an entire spec sheet into the infobox, and that's a failing of the infobox and the editors maintaining it. Trying to create a technical solution rather than the obvious one (just trim the infobox down to the most important elements) seems like a waste of everyone's time. Der Wohltemperierte Fuchs talk 20:33, 16 January 2025 (UTC)
- I suspect that our users would not even realise that they could click the tabs to see other info. So it will make it harder for our readers. Alternatives are to have multiple infoboxes, but this does take up space, particularly on mobile. Another way is to use parameter indexing as in the Chembox. Parameters can have a number on the end to describe variations on related substances in the one infobox. Graeme Bartlett (talk) 20:37, 16 January 2025 (UTC)
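For readers unfamiliar with that indexing pattern, a sketch of the idea with a made-up infobox (the template and parameter names here are hypothetical; Chembox's actual indexed parameters differ):
{{Infobox console
| name     = Nintendo Switch
| display  = 6.2 in LCD
| name1    = Nintendo Switch OLED
| display1 = 7.0 in OLED
}}
The unnumbered parameters describe the base model, and each numbered suffix describes one variant, all within a single infobox.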
- Tabs are widely used even on amateur wikis like 90% of Fandom Wikia. I'm sure readers know how to use them. (In fact, the "Article/Talk" "Read/Edit/View history" thing on the top is a tab.) Aaron Liu (talk) 21:27, 16 January 2025 (UTC)
- Judging by how few readers understand we have or ever see the talk pages, I'm not sure that's exactly a good argument. Der Wohltemperierte Fuchs talk 22:10, 16 January 2025 (UTC)
- [citation needed] for that. I started out processing semi-protected edit requests and there were a ton of clueless readers' requests. Aaron Liu (talk) 00:00, 17 January 2025 (UTC)
- Readers and potential editors don't know what the protection, good article, featured article, and other icons mean. I'm just one person but I'd never heard of tabs like that until I read this. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 01:35, 17 January 2025 (UTC)
- Sorry. That should read "Some readers..." CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 01:37, 17 January 2025 (UTC)
Dissensus as an alternative to consensus
For contentious pages, from what I can tell, there is no way in Wikipedia to come to a consensus when both camps are not making a good faith effort, and maybe not even then. My proposal: an expert could start an alternative page for one that they think is flawed, and have the same protection from further editing as the original. Then there could be a competition of narratives. Iuvalclejan (talk) 19:32, 17 January 2025 (UTC)
- We call those WP:POVFORKs and we try to prevent them from happening. Simonm223 (talk) 19:42, 17 January 2025 (UTC)
- Honestly, the consensus system works especially well on contentious pages, even if the discussions can sometimes get heated. Having content forks everywhere would not really be preferable, as, not only would you not have a single place to link the reader to, but you would quickly end up with pages full of personal opinions or cherry-picking sources if each group was given its own place to write about its point of view. A competition of narratives could be interesting as a website concept, but it would be pretty far from an encyclopedia. Chaotic Enby (talk · contribs) 19:43, 17 January 2025 (UTC)
- The competition would not be the last step. Selection of alternatives could happen by votes, with some cutoffs: if a fork does not get votes above a cutoff, it is eliminated. That would prevent proliferation of narratives. Or you could have the selection criteria be differential instead of absolute: if one narrative gets 2x (for example) more votes than another, the other one is eliminated. Consensus does not work if pages become protected but the disagreement is still strong. Iuvalclejan (talk) 19:48, 17 January 2025 (UTC)
Honestly, the consensus system works especially well on contentious pages,
I'd agree, but I'd also say we don't actually use the consensus system for contentious pages in practice—the more controversial the topic, the more I notice it devolving into straight voting issue-by-issue. (Even though that's the situation where you actually need to identify a consensus that all sides can live with.) – Closed Limelike Curves (talk) 21:42, 20 January 2025 (UTC)
- Interestingly, it's been theorized ([18], pg 101) that we already have a "community of dissensus" whereby contentious and poorly-supported claims are weeded out from our articles until only that which can be verified remains. signed, Rosguill talk 19:45, 17 January 2025 (UTC)
- The problems I see are not due to poorly supported claims. They are due to biased reporting that is technically correct (e.g. "hostilities erupted", rather than side A attacked side B), or outright omissions (e.g. the leader of one group is not mentioned because of his shady associations with Nazis, whereas the leader of the other group is mentioned many times). Iuvalclejan (talk) 20:29, 17 January 2025 (UTC)
- In that case, we should stick to what sources say, rather than making multiple versions trying to please each editor. If sources mention the names of both leaders, then we should have them both in the article, rather than hiding one in a separate article. Chaotic Enby (talk · contribs) 20:36, 17 January 2025 (UTC)
- So that addresses one issue, but even there, if the page is protected, you can't "mention them both". What about the way of presenting a phenomenon that, while technically correct, is misleading by omission of important details? Iuvalclejan (talk) 20:42, 17 January 2025 (UTC)
- For both cases: page protection doesn't mean that no one can propose any changes, it just means that you have to go to the talk page and discuss them with other editors (usually, to avoid someone else coming just after you and reverting it). If you feel like the discussion isn't going anywhere, we have channels for Wikipedia:Dispute resolution. Chaotic Enby (talk · contribs) 20:49, 17 January 2025 (UTC)
- That said, there are special restrictions on articles related to Palestinian–Israeli conflicts, and you shouldn't attempt to edit them or discuss them until you have made 500+ edits elsewhere. This will give you a chance to learn our processes, jargon, and rules in a less fraught context. WhatamIdoing (talk) 08:13, 21 January 2025 (UTC)
- This might be a good idea for social media, but this is an encyclopedia. Phil Bridger (talk) 20:45, 17 January 2025 (UTC)
- Even more important, then, so as not to deceive. Iuvalclejan (talk) 20:48, 17 January 2025 (UTC)
Making categorization more understandable to the average editor
See Wikipedia talk:Categorization#When to diffuse large categories?. There is an underlying dispute that caused this, but what I'm more interested in is finding out how to make Wikipedia:Categorization more helpful to the average editor trying to learn about categorization and when to diffuse/not diffuse, because the current text isn't as clear as I think it should be. I suck at RfCs and I don't think the discussion is near the point where one should be started yet, so more input really is welcome. Clovermoss🍀 (talk) 23:03, 17 January 2025 (UTC)
- I've tried understanding Wikipedia:Categorization and it hurt my brain so I gave up, but kudos for attempting to tackle it. Schazjmd (talk) 15:48, 18 January 2025 (UTC)
- It makes my brain hurt too, but I'm hoping enough editors who find it confusing can come together and make this process less of a maze. Clovermoss🍀 (talk) 23:32, 18 January 2025 (UTC)
- One good start might be to move the section on creating categories below that of categorizing articles - there are far more article categorization changes than category creations Jo-Jo Eumerus (talk) 08:05, 19 January 2025 (UTC)
More levels of protection and user levels
I think the jump from 4 days and 10 edits to 30 days and 500 edits is far too extreme and takes a really long time, when there are many editors with just 100 or 200 edits (including me) who are not vandals, do not have strong opinions on the usual controversial topics, and just want to edit. That is why I want the possibility for more user levels to be created. For example, one at 200 edits and 15 days that can be applied whenever some vandalism happens; normally ECP would be applied in that case, but I think that is far too extreme, and a more moderate protection would be more useful. Vandals dedicated enough to make 200 edits and wait 30 days will be dedicated enough to get extended confirmed anyway. So I want to see what the community thinks of sliding in another protection level between ACP and ECP. Two levels should suffice to bridge the gap between 10 edits and 500 edits, and would allow low-edit-count editors to edit while still blocking out vandalism. This is surprisingly not a perennial proposal. SimpleSubCubicGraph (talk) 02:19, 21 January 2025 (UTC)
- It's more that editors who have 500/30 have generally been in enough situations to have built up in-depth Wikipedian knowledge. That doesn't necessarily hold true for the levels you've proposed. Time is part of the intention. Aaron Liu (talk) 02:28, 21 January 2025 (UTC)
possibility for more user levels to be created
I had thought about this before and think more levels (or at least an additional level with tweaks to the current ones) would be a good idea. Something along the lines of:
- 1. WP:SEMI - 7 days / 15 edits
- 2. WP:ECP - 30 days / 300 edits
- 3. WP:??? - 6 months / 750 edits (reserved for pages with rampant sockpuppetry problems, such as those in the WP:PIA topic area). Some1 (talk) 02:50, 21 January 2025 (UTC)
- @Aaron Liu Yes, that may be a part of the intention, but I feel like there are editors with under 500 edits who can make a good enough edit not to get it instantly reverted. Also, protection is mainly there for vandalism; if we lived in a perfect society, anyone could edit Wikipedia pages without needing accounts and making tons of edits.
- @Some1 I think 180/750 would be far too harsh; not even the most divisive topics and controversial issues get vandalized often under ECP.
- My idea generally was keeping ECP the same but inserting another type of protection level in-between for mildly controversial topics and pages that are vandalized infrequently. SimpleSubCubicGraph (talk) 03:25, 21 January 2025 (UTC)
- Can you give some specific examples of "controversial topics and pages that are vandalized infrequently"? Is there a particular article you want to edit but are unable to? Some1 (talk) 03:29, 21 January 2025 (UTC)
- SimpleSubCubicGraph, if this is regarding Skibidi Toilet (per the comments below), then under my proposed ECP level requirements (30 day/300 edits), you would be able to edit that article. Some1 (talk) 12:35, 21 January 2025 (UTC)
- There is not too much utility to creating a variety of new levels, as it generally gets clunky trying to define everything, and it makes the system less easy to grasp. What differentiates 100 edits from 200 from 300? ECP is not usually for vandalism, it is deployed for topics that receive particular levels of non-vandalistic (WP:VAND is very narrow) disruption. These are topics where experience is usually quite helpful, where editors who just want to edit are more likely to get in trouble. However, it is also a very narrow range of topics, apparently only affecting 3,067 articles at the moment, or less than 0.05% of articles. CMD (talk) 03:39, 21 January 2025 (UTC)
- Isn't EC protection just for contentious topics? I didn't think we were using it just to protect against common or garden vandalism. Espresso Addict (talk) 05:59, 21 January 2025 (UTC)
- @Espresso Addict Even though there are 3,000 articles that have ECP protection, many articles are upgraded to ECP in light of infrequent vandalism (once a day, a few times a week, etc). I know Skibidi Toilet was upgraded to ECP when the page was vandalized a few times. It was quite hilarious, but it demonstrates a wider problem with liberally putting ECP on everything that gets even remotely vandalized. SimpleSubCubicGraph (talk) 07:06, 21 January 2025 (UTC)
- Now, are there that many people who care about Skibidi Toilet? No. But ECP is also liberally applied to other wiki pages that are infrequently vandalized, and editors can be there, wanting to edit, but they have to wait until an admin removes the protection, which can vary depending on how active they are: it can be a day, a week, or up to a month if you are really unlucky and the article is not that well known/significant. That is why another type of protection could allow these editors to edit their favorite subject while still preventing vandalism. There are very few ECP users, and that is counting alternate accounts. So this change would affect a lot of how Wikipedia works. SimpleSubCubicGraph (talk) 07:09, 21 January 2025 (UTC)
- ECP is not liberally applied. Admins are usually very cautious about applying it, and if there is a particular case where you think it is no longer needed, raise it and it will very likely be looked at. CMD (talk) 08:11, 21 January 2025 (UTC)
- It wasn't "infrequent" vandalism. Just look at the page history. Though I would use PC protection instead. Aaron Liu (talk) 15:40, 21 January 2025 (UTC)
- 500 edits is also when you earn access to Wikipedia:The Wikipedia Library.
- Editors who make it to about ~300 edits without getting blocked or banned usually stick around (and usually continue not getting blocked or banned). So in that sense, we could reduce it to 300/30 without making much of a difference, or even making the timespan a bigger component (e.g., 300 edits + 90 days). But it's also true that if you just really want to get 500, then you could sit down with Special:RecentChanges and get the rest of your edits in a couple of hours. You could also sort out a couple of grammar problems. Search, e.g., on "diffuse the conflict": diffuse means to spread the conflict around; it should say defuse (remove the fuse from the explosive) instead. I cleaned up a bunch of these a while ago, but there will be more. You could do this for anything in the List of commonly misused English words (so long as you are absolutely certain that you understand how to use the misused words!). WhatamIdoing (talk) 08:36, 21 January 2025 (UTC)
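As a concrete illustration of that search (standard CirrusSearch syntax, nothing exotic): typing
insource:"diffuse the conflict"
into the search box returns every article whose wikitext contains that exact phrase, each one a candidate for the diffuse → defuse fix.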
- [to SimpleSubCubicGraph] Sorry, I must have missed the various RfCs that extended the use outside contentious topics. SimpleSubCubicGraph, if you find pages that could safely be reduced in protection level, and that don't fall within contentious topics, then you should ask the protecting admin on their talk page to reduce the level. But if you have an urge to edit Skibidi Toilet then the simplest thing to do is make small improvements to mainspace for a couple of hundred edits. If you don't have a topic you are interested in that isn't protected, just hit random article a few times or do a wikilink random walk until you find something that you can improve. Espresso Addict (talk) 08:47, 21 January 2025 (UTC)
- For anyone who wants to run up their edit count: Search for "it can be argued that", and replace them with more concise words, like "may" ("It can be argued that coffee tastes good" → "Coffee may taste good"). WhatamIdoing (talk) 00:27, 22 January 2025 (UTC)
Ways to further implement restricting non-confirmed users from crosswiki file uploading
The whole community unanimously approved restricting newest, i.e. non-(auto)confirmed, users from transferring files to Commons. How else to implement such restrictions besides an abuse filter that's already done and hiding the "Export to Wikimedia Commons" button from non-confirmed users (phab:T370598#10105456)? Someone at Meta-wiki suggested making ways to implement this, so here I am. George Ho (talk) 06:01, 21 January 2025 (UTC)
Disambiguation
I don't know if this is technically feasible or not (advice sought) but would it be possible to create a shortcut for disambiguation? Something like [[Joseph Smith (general)!]] where the bang causes it to display as Joseph Smith rather than having to write [[Joseph Smith (general)|Joseph Smith]] which can be error prone. (I am not attached to the form in the example, it is the functionality I am interested in.) Hawkeye7 (discuss) 21:33, 21 January 2025 (UTC)
- Isn't that how Wikipedia:Pipe trick works? Schazjmd (talk) 21:46, 21 January 2025 (UTC)
- Yes. Phil Bridger (talk) 21:52, 21 January 2025 (UTC)
- I did not know that! I was aware of the pipe trick suppressing the namespaces but not the disambiguation. Thanks for that! Hawkeye7 (discuss) 23:16, 21 January 2025 (UTC)
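For anyone else who hadn't seen it, the trick in both flavours (the software expands the empty pipe when the page is saved):
[[Joseph Smith (general)|]] → Joseph Smith
[[Wikipedia:Pipe trick|]] → Pipe trick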
WMF
I can’t upload Auferstanden aus Ruinen
You see, the East German anthem doesn't have an audio file, because when I tried to upload it, it didn't work. It keeps telling me it is unconstructive, but there is no other file. Same thing for the Chechen anthem, even though the file doesn't work on mobile. 197.167.245.218 (talk) 11:27, 6 December 2024 (UTC)
- Have you tried uploading it to https://commons.wikimedia.org? If that doesn't work, maybe post on their commons:Commons:Help desk. –Novem Linguae (talk) 18:46, 6 December 2024 (UTC)
Wikimedia Foundation Bulletin December Issue
Upcoming and current events and conversations
Talking: 2024 continues
- Wikimania: The call to host Wikimania 2027 and beyond is open until the end of January 27, anywhere on Earth.
Annual Goals Progress on Infrastructure
See also newsletters: Wikimedia Apps · Growth · Research · Web · Wikifunctions & Abstract Wikipedia · Tech News · Language and Internationalization · other newsletters on MediaWiki.org
- Tech News: Chart extension is now available on Commons and Testwiki; a new version of the standard wikitext editor-mode syntax highlighter will be available as a beta feature; Edit Check will be relocated to a sidebar on desktop. More updates from tech news 50, 49, and 48.
- Wikifunctions: WordGraph dataset is released, which is particularly useful for abstract descriptions for people in Wikidata. More status updates.
- Wikipedia 2024 Year in Review: Wikipedia 2024 Year in Review launched, showcasing the collective impact of Wikipedia and Wikipedia contributors in the last calendar year. The iOS App also released a personalized Year in Review to Italy and Mexico, with insights based on reading, editing, and donation history.
- Wikipedia Android App: The Android team has launched the Rabbit Holes feature in the final release of the year as part of Wiki Experiences 3.1. Currently being tested in Sub-Saharan Africa and South Asia, this feature suggests a search term and a reading list based on the user's last two visited articles. For more details or to share feedback, visit the project page.
Annual Goals Progress on Equity
See also a list of all movement events: on Meta-Wiki
- WikiCelebrate: From Challenges to Change-Making: We Wikicelebrate Chabota Isaac Kanguya, a passionate contributor from Zambia, whose journey through the Wikimedia movement embodies resilience, collaboration, and a commitment to representing underrepresented voices.
- Conference: Announcing Central Asian WikiCon 2025 which will be hosted at Diplomat International School on April 19–20, 2025, in Tashkent, Uzbekistan.
- Campaigns and topical collaboration: The Campaign Product and Programs teams published research on the needs of WikiProject and other topical collaborations.
- Wikisource: The journey so far and looking ahead with Wikisource Loves Manuscripts (WiLMa).
- CEE Meeting: Experiences and Highlights by Central Asian Community Members.
- Partnership: Wikimedia Indonesia and Google Join Forces for Wikipedia Content Enrichment in Indonesia.
- Wikimedia Research Showcase: Watch the latest showcase which discussed AI for Wikipedia.
Annual Goals Progress on Safety & Integrity
See also blogs: Global Advocacy blog · Global Advocacy Newsletter · Policy blog
- Ongoing litigation: Update on litigation in India.
Board and Board committee updates
See Wikimedia Foundation Board noticeboard · Affiliations Committee Newsletter
- Board Elections: The Board’s Executive Committee shared some thoughts on the 2024 Wikimedia Foundation Board of Trustees elections.
External media releases & coverage
- Most popular articles: Announcing English Wikipedia’s most popular articles of 2024.
- Interview: Jimmy Wales on Why Wikipedia Is Still So Good.
Other Movement curated newsletters & news
See also: Diff blog · Goings-on · Planet Wikimedia · Signpost (en) · Kurier (de) · Actualités du Wiktionnaire (fr) · Regards sur l’actualité de la Wikimedia (fr) · Wikimag (fr) · other newsletters:
- Topics: Education · GLAM · The Wikipedia Library
- Wikimedia Projects: Milestones · Wikidata
- Regions: Central and Eastern Europe
Subscribe or unsubscribe · Help translate
For information about the Bulletin and to read previous editions, see the project page on Meta-Wiki. Let askcac@wikimedia.org know if you have any feedback or suggestions for improvement!
MediaWiki message delivery 18:03, 16 December 2024 (UTC)
2024 English fundraising campaign finished yesterday, 31st of December
Dear all,
The banner campaign for non-logged in readers in Australia, Canada, Ireland, New Zealand, the UK, and the US, finished on the 31st of December.
Thank you all for your collaboration during the campaign. We will post a campaign recap to the collaboration page later in January.
Wishing you all a good start to the New Year, JBrungs (WMF) (talk) 15:06, 1 January 2025 (UTC)
Taking stock of the new Community Wishlist process
Over on the Meta talk page of the new Community Wishlist process I've done a post taking stock of the changes so far. Followers of this page may be interested in that discussion. Best, Barkeep49 (talk) 17:48, 7 January 2025 (UTC)
From The Forward. Any comment/advice from the WMF on this? Gråbergs Gråa Sång (talk) 10:52, 8 January 2025 (UTC)
- I see Wikipedia:Village_pump_(miscellaneous)#Heritage_Foundation_intending_to_"identify_and_target"_editors is ongoing. Gråbergs Gråa Sång (talk) 11:09, 8 January 2025 (UTC)
WMF annual planning: How can we help more contributors connect and collaborate?
Hi all - the Wikimedia Foundation is kicking off our annual planning work to prepare for next fiscal year (July 2025-June 2026). We've published a list of questions to help with big-picture thinking, and I thought I'd share one of them here that you all might find interesting: We want to improve the experience of collaboration on the wikis, so it’s easier for contributors to find one another and work on projects together, whether it’s through backlog drives, edit-a-thons, WikiProjects, or even two editors working together. How do you think we could help more contributors find each other, connect, and work together? KStineRowe (WMF) (talk) 20:27, 10 January 2025 (UTC)
- @KStineRowe (WMF), by providing more funding for scholarships to Wikimania and other conferences, for one thing. Sdkb talk 22:57, 10 January 2025 (UTC)
We want to buy you books
I've opened a discussion on Wikipedia talk:WikiProject Resource Exchange/Resource Request to get your input on a pilot project that would fund resource requests to support you in improving content on Wikipedia. The project is very much in its early stages, and we're looking for all of your thoughts and suggestions about what this pilot should look like. Best, RAdimer-WMF (talk) 23:36, 22 January 2025 (UTC)
Miscellaneous
Clarification on what soapboxing is or isn't
I'm coming here because ANI seems like an overreaction at this point, this isn't a content dispute that Wikipedia:Dispute resolution could easily deal with, administrative action review is pretty much only for admin actions, and it seems like everyone is talking past each other. The gist of the situation is that a new editor made this edit and was reverted here. This was then discussed at Talk:Governing Body of Jehovah's Witnesses#Source material and then also at my talk page. Three editors (including me) think that a newbie citing a reference can't possibly be soapboxing. Jeffro77 disagrees (and to their credit, has apologized for some of their behaviour). Is there any way there could maybe be more eyes on this to resolve the situation so there's not some back and forth going on at my talk page? The crux of the issue really does seem to be whether citing a source can meet the definition of soapboxing.
Courtesy pings to Jeffro77, JPxG, and Hey man im josh. Clovermoss🍀 (talk) 23:02, 14 January 2025 (UTC)
- I don't know what the editor's intentions were, but it may not have been soapboxing. It may simply have been to supply a source that they felt supports one of the preceding assertions better than the existing source did—but I agree with the sentiment that the source itself, by virtue of its title and subject matter, introduces an awfully volatile topic, without a foundation having been laid for it, into an otherwise innocuous lead, and seems out of place. Also, I agreed with reverting the addition of "all male" to the first sentence. While the council is all male, that's a characteristic of it (even if a mandatory one under the by-laws), not its identity. The second sentence is fine. Largoplazo (talk) 02:52, 15 January 2025 (UTC)
- The editor added a source that is explicitly about a controversy to ‘support’ a fact that is not directly related to the controversy. The source does not discuss the cited fact. Giving undue attention to a controversy is soapboxing—Jeffro77 Talk 03:46, 15 January 2025 (UTC)
- Did you consider finding a different source for the claim? If someone wants to specify that the council is all male (not IMO an unreasonable thing to say in an article), and they cite a news article that is primarily about a child abuse scandal, then you could replace the source with a better one. If the editor's goal was to get the scandal-oriented source in the article, then you'll find out soon enough, and can tackle it head on. If the editor just spammed in the first source that mentioned the uncontested fact that they're all men, then you will have improved the article.
- I don't think that it's worth worrying too much about sources. We need them to get the article content right, but readers don't seem to care. WP:RSBIAS (which explicitly permits citing biased sources) is one of our rules, and besides, almost nobody reads the refs. In an article with that level of traffic, we'd expect just one (1) reader per day to click on any one (1) source – and if there are a lot of sources on the page, then it's almost certainly not going to be that one. WhatamIdoing (talk) 05:00, 15 January 2025 (UTC)
- It isn't a biased source, though. I wouldn't say it's ideal for much because it's mostly interviews, but it's not like ABC News is some random blog out to call Jehovah's Witnesses a cult or something. The new editor made it clear on the talk page that they were trying to help address the primary source tag (because almost all the sources in that article are from the religious group's own publications). I don't think it's odd that a source that mentions Jehovah's Witnesses' handling of child sexual abuse would mention the Governing Body, as they create the protocols and doctrine for everything (this is somewhat explained at Jehovah's Witnesses#Organization). It's why one of the members was called to testify at the Royal Commission into Institutional Responses to Child Sexual Abuse. I think it's very harsh to say someone is soapboxing for citing a source and not doing anything to the content unless you have a very good reason. And again, that's usually covered by other policies that you can point towards without assuming bad faith, like "please cite a reliable source". Clovermoss🍀 (talk) 15:17, 15 January 2025 (UTC)
- Bias can be in the eye of the beholder, and it is not unusual for editors to complain that citing a "negative" source for routine content is inappropriate (e.g., any source that is primarily about a scandal, to support any content that isn't specifically about the scandal). It can be a form of POV pushing, but it can also be an understandable impulse to not accidentally imply anything defamatory, especially if they're editing a BLP.
- WP:BURDEN requires the source-supplying editor to provide exactly one (1) source. That's because a few editors kept reverting sources, and then demanding that you WP:Bring me a rock again. Once that first source has been added, if you dislike the source someone else added, IMO you should just replace it with a {{better source}} yourself (however you define "better"). If that means you need to spend a little while searching for a news article that mentions this group is all male but doesn't mention a scandal, then that's what you need to do. People are rarely upset when you replace their weak-but-maybe-okay-ish source with a better one (and when they are, that often reveals interesting things about their goals). WhatamIdoing (talk) 20:25, 15 January 2025 (UTC)
- I agree with you (my advice was to cite a source that covers them in more detail and another editor already has), I just don't think that saying a newbie citing a negative source is "soapboxing" in any capacity. The crux of the issue is whether that's an assumption of good faith or bad faith. Clovermoss🍀 (talk) 22:20, 15 January 2025 (UTC)
- This particular case doesn't look like soapboxing to me, but adding new text and sources to the lead can be soapboxing, especially under definition 2 (Opinion pieces). Soapboxing can be done in good faith, although perhaps raising it is not always the most effective way to carry out discussion. CMD (talk) 23:21, 15 January 2025 (UTC)
Need access to journal "Women's History Review"
I need to read an article in "Women's History Review" 21 (5): 733–752. (year 2012). Access online is via the Taylor & Francis company; cost is $65 to access the article. There used to be ways in WP to get free subscriptions to do research; or sometimes WP already had subscriptions that could be used by editors. Anyone know how I can legally access that article for purposes of WP research? Noleander (talk) 23:19, 15 January 2025 (UTC)
- You should try WP:WikiProject Resource Exchange/Resource Request for specific articles, or if you meet the requirements there's WP:The Wikipedia Library that I believe gives access to some of Taylor & Francis' publications. -- LCU ActivelyDisinterested «@» °∆t° 23:30, 15 January 2025 (UTC)
- One of the nice things about Google Scholar is that it often provides multiple sources for a single article. This is the Google Scholar cluster for that article, and there's a link to a free academia.edu copy there. It's also sometimes worth investigating whether JSTOR has a copy, as JSTOR gives people a fairly large number of free-to-view articles per month. Last but not least, article authors are often happy to email a copy of the article to someone if they ask. Looks like this has the author's current email address. FactOrOpinion (talk) 01:07, 16 January 2025 (UTC)
- @FactOrOpinion - Thanks, that is perfect. I qualify for the WP Library and was able to get access to the article I needed. Noleander (talk) 01:12, 16 January 2025 (UTC)
why does dark yellow look ugly
it only just occurred to me that dark yellow is ugly, why is that Northpark997 (talk) 18:48, 16 January 2025 (UTC)
- This question belongs at the reference desk, if anywhere, not here. Phil Bridger (talk) 19:37, 16 January 2025 (UTC)
- Whether something is ugly is a matter of personal perception. Nobody else can tell you why you find something ugly. Largoplazo (talk) 19:45, 16 January 2025 (UTC)
Need copy of magazine "American Weekly" 27 Mar 1934.
Does anyone know where I can get a copy (digital/online is okay) of the 27 Mar 1934 issue of magazine "American Weekly"? I've searched high and low on the web, and cannot find it anywhere. I did find a mention of it in Library of Congress, but that appears to be just a typed draft of an article that may or may not have made it into the magazine. Also, I found several not-reliable websites that purport to have the text of the article, but I need a trustworthy source. Noleander (talk) 02:21, 17 January 2025 (UTC)
- Did you try asking at WP:RX or looking in WP:TWL? –Novem Linguae (talk) 02:42, 17 January 2025 (UTC)
- Thanks for the suggestions, I posted an inquiry in WP:RX. Noleander (talk) 02:54, 17 January 2025 (UTC)
- Awesome. I hope it helps. Good luck in your search :) –Novem Linguae (talk) 03:02, 17 January 2025 (UTC)
- See The American Weekly. There is a citation in there to an archived copy of a 24-year-old blog website (since usurped) of someone who had a lot of issues (1918 to 1943) of the publication.[19] The email link doesn't work, but there may be enough there for you to track them down. A long shot, at best, but if all else fails ... Donald Albury 14:34, 17 January 2025 (UTC)
BAG nomination
Hi! I have nominated myself for BAG membership. Your comments would be appreciated on the nomination page. Thanks! – DreamRimmer (talk) 14:04, 18 January 2025 (UTC)
UK to require age verification for adult content
"The UK announces that, as of July, any site that allows adult content — including social media sites — will have to age/identity verify all users, or face enforcement action by the British government." - [20]
Pass the popcorn... Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:29, 19 January 2025 (UTC)
- Oh yeah, face enforcement. That's where you get Siri to check your older brother's face. And it checks he's still alive by poking his tongue out and saying spin, bro. 2A00:23C7:518:7B00:216C:A32E:70C7:3F80 (talk) 11:45, 19 January 2025 (UTC)
- Texas is trying to do this, too. https://www.texastribune.org/2025/01/15/texas-porn-site-ban-us-supreme-court/ 331dot (talk) 14:15, 19 January 2025 (UTC)
- Nothing new here. Virginia's had this for a couple of years. I'm unaware of any jurisdiction that's pursued Wikipedia over this, if it's concern over that that motivated this thread. Largoplazo (talk) 14:45, 19 January 2025 (UTC)
- Ofcom's guidance is online here. Please point out the part that exempts Wikipedia. Or Wikimedia Commons. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:21, 19 January 2025 (UTC)
- Please point out where anyone claimed that such an exemption exists. Largoplazo (talk) 15:37, 19 January 2025 (UTC)
- Florida's law applies to websites on which more than one-third of the material is "harmful to minors",[21] so WP will not be affected for now. Donald Albury 18:17, 19 January 2025 (UTC)
- (edit conflict) Texas's is at least a bit more limited. It seems the UK wants age verification for any site where a child might possibly see something "harmful to children", including any site where users can post content (even if no "harmful" content is ever posted), while Texas's law (which is already in force, BTW, but is being challenged) is only for sites where over 1/3 of the content is pornographic. Anomie⚔ 14:57, 19 January 2025 (UTC)
- I'm guessing the WMF is aware of this? 331dot (talk) 16:23, 19 January 2025 (UTC)
- Seems like a reasonable guess. You could ask them? 🤷 Anomie⚔ 21:25, 19 January 2025 (UTC)
- What's the UK's definition of "adult content"? The article makes it clear that the main concern is about kids watching pornography, and it's not clear how they're planning on implementing anything. signed, Rosguill talk 16:58, 19 January 2025 (UTC)
- @Rosguill: The guidance is online here. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:11, 19 January 2025 (UTC)
- Thanks, that page seems to be even more explicitly focused on "pornography", so this may not end up impacting us based on what I can see. signed, Rosguill talk 17:52, 19 January 2025 (UTC)
- From one of the PDFs linked there: "Pornographic content is defined in the Act as 'content of such a nature that it is reasonable to assume that it was produced solely or principally for the purpose of sexual arousal.'" By that definition, WP would not be in violation, since we specifically do not allow such images and moderate them off. — Masem (t) 17:59, 19 January 2025 (UTC)
- Wouldn't the Commons be affected? There is some pornographic content, and there are categories for it, on that site (e.g. c:Category:Erotic photography). Some1 (talk) 18:13, 19 January 2025 (UTC)
- There are certainly some instances of erotic photography that would meet an encyclopedic need, but that category appears to be used by people just dropping their personal erotic photos in there, and it probably should be dealt with. Masem (t) 18:17, 19 January 2025 (UTC)
- Keep in mind that the inclusion criterion for Commons isn't that the media meets an encyclopedic need, but an educational one. An image could be inappropriate for Wikipedia's needs but still be useful, for instance, in a class on erotic photography as part of an MFA photography program. Photos of Japan (talk) 23:28, 19 January 2025 (UTC)
- Sure, but I'd hope it would be identified that way. Masem (t) 00:31, 20 January 2025 (UTC)
- Usually people upload first and only discuss the educational merit of media if it's nominated for deletion. Out of scope explicitly excludes low-quality pornographic content, but I'm not sure how the community evaluates what constitutes that. My comment, though, was mostly concerning how it's a wiki faux pas to imply that being unsuitable for Wikipedia makes something OOS for Commons. Photos of Japan (talk) 03:46, 20 January 2025 (UTC)
- Debbie Does Dallas#Legacy. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:26, 19 January 2025 (UTC)
- We live in the real world, not some sort of libertarian utopia (or dystopia). Part of being one of the top sites on the Internet is that we have to take our responsibilities seriously within the law. The WMF has done that in India and other places (in my view sometimes in the wrong way), and will do so in the UK. Just please nobody propose [redacted per WP:BEANS]. I hope that the WMF will take legal advice, but make the final decision themselves on whether to follow it. Phil Bridger (talk) 18:49, 19 January 2025 (UTC)
How to request user talk page access revocation
What's the approved way to request the removal of a blocked user's access to their talk page when you see them using it only to rant, carrying on the behavior that got them blocked in the first place? Largoplazo (talk) 00:59, 20 January 2025 (UTC)
- WP:ANI. — xaosflux Talk 01:24, 20 January 2025 (UTC)
- Thanks. Largoplazo (talk) 01:27, 20 January 2025 (UTC)
AI-generated comments?
I'm not sure where best to ask about this, but as someone who works on film articles and participates on their talk pages, I am seeing a lot of comments that seem AI-generated, being lowercase and half-nonsensical. I detail this more here: Wikipedia talk:WikiProject Film § AI-generated comments? Any thoughts from anyone, or a recommendation of another page to post about this? Erik (talk | contrib) (ping me) 22:38, 20 January 2025 (UTC)
- Those don't seem AI-generated to me. If you see stuff like that, just revert it. If it continuously comes from one IP, then you can raise that at WP:AIV or WP:AN/I. It looks like this is all from the same IP range. CMD (talk) 02:18, 21 January 2025 (UTC)
- Agreed. AIs usually have perfect grammar. –Novem Linguae (talk) 09:47, 22 January 2025 (UTC)
- They're probably not "AI" in the LLM sense, but they do fall into a category of unconstructive drive-by talk page edits that started in 2022. Some are AI prompts, some appear to be text-to-speech or Siri/Alexa/etc. prompts, and some seem to be bot-generated (which these seem to be).
- When you see them, nuke them on sight (which the Wikipedia policy WP:NOTFORUM allows), and nuke them ASAP: once they go into the archive (which is out of people's control, since everything is bot-archived nowadays), people will yell at you for following policy. Gnomingstuff (talk) 20:34, 22 January 2025 (UTC)
This matter seems well-explained by User:Photos of Japan here (permalink), if others want to know. Erik (talk | contrib) (ping me) 17:12, 23 January 2025 (UTC)
Succession boxes
Which WikiProject deals with succession boxes? GoodDay (talk) 22:06, 21 January 2025 (UTC)
- Succession to what? A political office? A peerage? Something else? Blueboar (talk) 23:34, 21 January 2025 (UTC)
- Political offices. GoodDay (talk) 00:18, 22 January 2025 (UTC)
- There is Wikipedia:WikiProject Succession Box Standardization, though said to be semi-active. PamD 06:26, 22 January 2025 (UTC)
- Quite a few tumbleweeds in that WikiProject. A politics-based WikiProject might be best. GoodDay (talk) 06:29, 22 January 2025 (UTC)
New essay on recentism
After seeing years' worth of (what I believe to be) misuse of WP:RECENTISM as an essay, I've created an essay for responding to it, WP:CRYRECENTISM. Hopefully it speaks for itself, but my core problem is that RECENTISM is sometimes used in a way that allows people to completely dismiss all sourcing on something recent, which doesn't reflect what RECENTISM says (it doesn't even describe recentism as a bad thing!) and contradicts WP:NPOV. Obviously we have to be cautious about giving undue weight to recent events, and sometimes it's true that something recent is so undue relative to the topic as a whole that it should be excluded entirely, but these arguments ultimately have to be made using sources (or their limitations or lack thereof), not just by bludgeoning people with all-caps links to essays. It feels like WP:RECENTISM has become a go-to argument for anyone who wants anything recent excluded for any reason, which isn't really constructive: it doesn't reflect policy, provides no real room for discussion or compromise, and implicitly allows people to ignore any degree of coverage in a way that contradicts WP:NPOV's requirement to use sourcing as the basis for weight. --Aquillion (talk) 18:30, 22 January 2025 (UTC)
- Not a bad essay… but it leaves me with a question: would you say that RECENTISM could be a valid argument for temporary omission rather than exclusion? I.e., arguing that it is too soon to add some bit of material, and that we simply need to wait a bit so that we can properly determine how much (if any) weight to give it. Blueboar (talk) 19:17, 22 January 2025 (UTC)
- Sometimes? But it has to engage with the sources on some level. I've sometimes said "there's not enough sourcing yet, let's swing back later", which is certainly a fair argument. My problem with WP:RECENTISM is that it's frequently used as an argument that ignores current sourcing entirely, which I don't think is appropriate (or policy-compliant). The main point of the essay, I think, is that WP:NPOV means you have to engage with the sourcing somehow, even if it's just to say "sorry, this requires a very high threshold and these sources aren't enough"; there has to be a level and type of sourcing that would allow for immediate inclusion, or else we're deciding article content based on our guts. Arguing for temporary omission without regard for the sources would still have the same problem. --Aquillion (talk) 19:37, 22 January 2025 (UTC)
- Can you give some examples of where this has caused a problem? Phil Bridger (talk) 20:07, 22 January 2025 (UTC)
- If this is the conclusion people are drawing from WP:RECENTISM, then I'd say it's a reason to improve the recentism essay rather than to use it differently. I wrote an essay in the past that's something of a counterpoint: User:Thebiguglyalien/Avoid contemporary sources. Thebiguglyalien (talk) 22:29, 22 January 2025 (UTC)
- "Insisting that a recent event should be excluded simply for being recent, without further explanation or analysis, is not helpful to building an encyclopedia."
- The problem with this essay is that it strawmans WP:RECENTISM. Recentism addresses a real issue: certain subjects are perennially in the news, and every news spike leads to content being added to their articles until they are inundated with material that is of no lasting interest to the reader. Recentism doesn't reject content just because it is recent; it asks people to provide justification for including content beyond the fact that it was covered by a flurry of news sources. Photos of Japan (talk) 02:27, 23 January 2025 (UTC)