Mercenary fake accounts have driven altruistic coders from the cryptocurrency community. Can we re-inspire the lost population of altruistic developers with a watering hole prohibiting financially-incentivized mercenary accounts?
The most underrated threat to society today is artificial intelligence, but not in the manner you might expect. Movies like I, Robot concoct an image of actual robots turning on society. More reasonable portrayals in the media and among tech leaders convey the very real threat intelligent machines pose to people’s employability. One threat rarely discussed is the loss of freedom of speech on digital watering holes, due to hired “sockpuppet” accounts, including those controlled by mercenary A.I.
Why should we be scared of A.I. pretending to be human on the internet? Because in industries like cryptocurrency, it’s already happening. The difference between fake accounts now and fake accounts 10 years from now is that today’s fake accounts must be controlled by humans (see www.reddit.com/r/HailCorporate). These “astroturfers” can make you think a product, government public comment section, or news article is more deserving of your attention than it actually is, by pretending to be multiple users sharing a single consensus. While today these are probably just hired humans controlling multiple accounts in click farms, tomorrow cheap computer programs running simple machine learning algorithms will allow everyone and their grandmother to pollute the internet’s discussion boards with fake consensus. We need an area of the internet where we can be 100% certain that a human is a human, and that they are who they say they are. Such a network works best if it is decentralized, because decentralized networks do not rely on trust in a third party, such as a CEO.
A.I. is already capable of generating text indistinguishable from human-generated language.
For more information, please see “Brainwaves as a future-resistant biometric: Human detection, Identification, and Authorization.”