The idea that AI can create content that humans are increasingly unable to distinguish from reality is no longer science fiction. As has already happened with images, long counterfeited by photo-retouching software, video and audio will be next.
Soon it will be impossible to vouch for the authenticity of what people watch and listen to on the Internet. The problem of fake news is only the beginning.
Asymmetric cryptography can address this problem: any message sent into cyberspace can be signed by its author.
With asymmetric cryptography, two communicating peers, Alice and Bob, each hold a (pub, sec) keypair. Any content encrypted with Alice's pub key can be decrypted only with Alice's sec key. Signing works the other way around: any content signed with Alice's sec key can be verified only with Alice's pub key. No other keypair is needed to verify that the content was truly created by Alice.
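A minimal sketch of the signing half of this scheme, using Ed25519 signatures from the third-party Python `cryptography` package (an illustrative choice; any signature scheme with the properties above would do):

```python
# Sign-and-verify sketch with Ed25519, via the third-party
# "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Alice generates her (pub, sec) keypair once.
alice_sec = Ed25519PrivateKey.generate()
alice_pub = alice_sec.public_key()

message = b"I, Alice, really said this."
signature = alice_sec.sign(message)  # only Alice's sec key can produce this

# Anyone holding Alice's pub key can check authorship;
# verify() raises no exception when the signature is valid.
alice_pub.verify(signature, message)

# A forged or altered message fails verification.
try:
    alice_pub.verify(signature, b"I, Alice, never said this.")
except InvalidSignature:
    print("tampering detected")
```

Note that the secret key never leaves Alice's machine; only the public key and the signatures are shared.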
Signing audio, images, or video before publishing makes it impossible for AI or malicious actors to pass off manipulated yet realistic content as someone else's work. Even if an AI can clone a person's voice and speak on their behalf, it can never produce that person's signature for the fake recording, unless that person's secret key has been stolen or lost.
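The publishing workflow this implies can be sketched as signing a media file's raw bytes and distributing a detached signature alongside it. Again this assumes the third-party `cryptography` package, and the filename is illustrative:

```python
# Hypothetical publishing workflow: sign a media file's bytes and ship a
# detached signature file next to it. Filenames and content are placeholders.
from pathlib import Path
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sec_key = Ed25519PrivateKey.generate()  # in practice, loaded from secure storage
pub_key = sec_key.public_key()          # in practice, published by the author

# Publisher side: sign the raw bytes of the recording.
audio = Path("speech.wav")
audio.write_bytes(b"placeholder audio bytes")
Path("speech.wav.sig").write_bytes(sec_key.sign(audio.read_bytes()))

# Consumer side: verification succeeds only for the unmodified file;
# any deepfaked or edited copy fails the check.
pub_key.verify(Path("speech.wav.sig").read_bytes(), audio.read_bytes())
```

A consumer who trusts the author's public key can therefore detect any post-publication manipulation, no matter how realistic it looks or sounds.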
Signing content must become a habit. For that to happen, it must become approachable to the masses and easy to do.