Inserting an undetectable 1-bit watermark into a multi-megapixel image is not particularly difficult.
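To make that concrete, here's a toy spread-spectrum version of the idea: add a faint key-derived ±1 pattern across the whole image, then recover the bit by correlating against the same pattern. With a megapixel of redundancy, a per-pixel change of ±2 is both invisible and statistically unmissable. (All names and parameters here are my own invention, just a sketch; nothing to do with whatever scheme Google actually uses.)

```python
import random

def make_pattern(key: int, n: int) -> list[int]:
    """Pseudorandom +/-1 spreading pattern derived from a secret key."""
    rng = random.Random(key)
    return [rng.choice((-1, 1)) for _ in range(n)]

def embed(pixels: list[int], bit: int, key: int, strength: int = 2) -> list[int]:
    """Embed one bit into grayscale pixels (flat list, 0..255):
    add the key-derived pattern for bit=1, subtract it for bit=0."""
    sign = 1 if bit else -1
    pat = make_pattern(key, len(pixels))
    return [min(255, max(0, p + sign * strength * c)) for p, c in zip(pixels, pat)]

def detect(pixels: list[int], key: int) -> int:
    """Correlate (mean-subtracted) pixels against the key's pattern;
    the sign of the correlation recovers the embedded bit."""
    pat = make_pattern(key, len(pixels))
    mean = sum(pixels) / len(pixels)
    corr = sum((p - mean) * c for p, c in zip(pixels, pat))
    return 1 if corr > 0 else 0
```

Without the key, the perturbation is indistinguishable from sensor noise; with it, the correlation is tens of standard deviations above chance on a megapixel image.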
If you assume competence from Google, they probably have two different watermarks: a sloppy one they offer an online oracle for, and one they keep in reserve for themselves (and for law enforcement requests).
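The reason the publicly queryable one is necessarily the "sloppy" one: a detection oracle hands attackers a stopping condition, so they can perturb the image until the oracle stops firing. A toy version of that loop (the oracle and every name here are hypothetical; a real attack would minimize distortion rather than add blind noise):

```python
import random

def strip_watermark(pixels: list[int], oracle, max_rounds: int = 10_000,
                    noise: int = 2) -> list[int]:
    """Use a public detection oracle against itself: keep adding small
    random perturbations until the oracle no longer reports a watermark."""
    rng = random.Random(42)
    img = list(pixels)
    for _ in range(max_rounds):
        if not oracle(img):
            return img  # oracle is satisfied the mark is gone
        img = [min(255, max(0, p + rng.choice((-noise, noise)))) for p in img]
    return img  # gave up; caller should check oracle(img) themselves
```

A watermark you can't query can't be attacked this way, which is why you'd hold the second one back.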
Also, given that it's Google we're dealing with here, they probably save every single image generated (or at least its neural hash) and tie it to your account in their database.
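Storing a hash instead of the image works because perceptual hashes survive re-encoding and small edits. A "neural hash" would be a learned embedding, but the classical stand-in, average hash, shows the shape of the idea (this is the generic aHash technique, not anything Google-specific):

```python
def average_hash(pixels: list[int], w: int, h: int) -> int:
    """Classical 'average hash': downsample to an 8x8 grid, threshold each
    cell against the grid mean, pack into 64 bits. Small brightness or
    compression changes leave most bits intact, so near-duplicate images
    can be matched by Hamming distance."""
    bw, bh = w // 8, h // 8
    cells = []
    for by in range(8):
        for bx in range(8):
            s = 0
            for y in range(by * bh, (by + 1) * bh):
                for x in range(bx * bw, (bx + 1) * bw):
                    s += pixels[y * w + x]
            cells.append(s / (bw * bh))
    mean = sum(cells) / 64
    bits = 0
    for c in cells:
        bits = (bits << 1) | (c > mean)
    return bits
```

A 64-bit fingerprint per image is a few bytes per generation, so logging it for every user forever is trivially cheap.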
The dual-watermark theory makes a lot of sense for defensive engineering: you always assume your outer layer will be broken, so you keep a second layer that isn't publicly testable. Same as defence in depth anywhere else. I'm curious - as new models are being built constantly and they're naturally non-deterministic, do you think it's possible for end users to prove that?
> I'm curious - as new models are being built constantly and they're naturally non-deterministic, do you think it's possible for end users to prove that?
How is the model relevant? The models are proprietary, and you never see any outputs that haven't been watermarked.