AI detectors
Do we use one?
No.
Why we do not use an AI detector
Bad Accuracy
AI detectors are notorious for vastly overstating their accuracy: vendors typically cherry-pick the sample data used to test their products, then advertise the results of those tests. It is also well established that fooling such detectors is far easier than vendors like to admit.
False Positives
AI detectors can flag human-written content as AI-generated, which has happened in quite a few documented cases. The problem this poses is that human moderators rarely challenge a detector's findings, so people get punished for things they didn't do: an artist was banned from a Reddit community on that basis, and more than one student has been falsely accused of submitting an AI-written paper after a detector gave it a '97% likely to be AI' score.
https://www.vice.com/en/article/y3p9yg/artist-banned-from-art-reddit
Lack Of Data Privacy
AI models can be used to launder data, much as money laundering obscures the origin of funds: once a model has been trained on a given image or other data, proving that the resulting model was trained on that data is practically impossible.
Many companies that deal solely in AI and offer AI detection services also offer AI generation. This creates a distinct conflict of interest: they are incentivised to use customer data for AI training (and many either state that they do, or disclaim all legal responsibility for doing so, which amounts to the same thing). Given the implausibility of ever proving that customer data was used in a model, and the incentive for companies dealing only in AI products to keep few logs, there is a lot of reward and little risk for them in using customer data in this way.
For all of the above reasons, we do not use an AI detection service.