I think this is a misunderstanding of how most of the AI models that feed into these workflows actually work. Most of them don’t dynamically retrain live based on how users interact with them, at least not outside the context of that user’s own chat session.
Most likely, what these and other groups are doing is downloading pre-trained open-source AI models and running them locally, so they aren’t restrained by any of the commercial AIs’ limitations on what they will and won’t output to users. I highly doubt there’s enough material out there to truly train a new AI model on only explicitly racist material. This is just a bunch of assholes doing prompt engineering on open-source models running locally.
Oh, if it’s being run locally, then I’ve fundamentally misunderstood the situation. Thanks for pointing it out.