How can we use liveness-detectable biometrics to provide computational barriers to creating fake accounts and fake votes?
Network security today is more difficult than it needs to be, largely because it must assume that a single user can hold multiple accounts or multiple IP addresses. The one commonality between e-mail spam, DDoS attacks, presidential election meddling, and blockchain fees is that they all depend on a single person holding multiple accounts. That person can submit multiple forum posts, multiple HTTP requests, or multiple high-fee cryptocurrency transactions for the sole purpose of attacking or clogging the network, ruining it for everyone else. You might know this simply as "spam", or as "fake news".
Imagine instead a walled garden of the internet where each user is guaranteed, by their biometricity, to have provably one account, just like in the real world. You could ban users who spam the network with these inexpensive requests, and you could hold each person's posts to a higher degree of accountability (people wouldn't submit fake news if they couldn't shed their posts' biometric signatures). This is also the real reason blockchain payments aren't cheaper than, say, Visa at the scale of millions of users: people clog the network with low-priority payments and spam payments.
Building a biometric internet isn't that simple, though. You can't just say "here's my fingerprint, now create my account", because a fingerprint scan is really just an image file. An attacker could copy the scan of your fingerprint and impersonate you trivially; if fingerprints alone were enough, this would have been done long ago.
What you need is a signal that exhibits biometricity (unique features that identify you) but is also a liveness challenge: a puzzle that essentially asks "are you human?" and "are you actually there, right now, submitting this biometric signal?". If what you submit amounts to a yes, you can enter the garden. If you've ever had to type in the numbers on a street sign, or crooked letters in all caps, or click anything that says "I'm not a robot", you've completed a liveness challenge (Google's reCAPTCHA). But it doesn't tie you to one account, because it doesn't exhibit biometricity: there is nothing biological to distinguish your response from that of other users. It guarantees one person per computer, but it doesn't guarantee you'll act honestly once you're inside, because you're still anonymous. A liveness challenge that also exhibited biometricity would ensure that whatever biometric signal you submitted, like a fingerprint, wasn't copied and pasted from a file, but was generated quite recently, by an actual human being behind their computer screen.
A simple example of a challenge that exhibits both liveness and biometricity might use a public ledger that is a string of random values, like a blockchain. Each block contains a random hash value from which you can derive 20 random words out of a list of 1,000. Because the words are derived from the hash, there is a verifiable relation between the block's hash and this new, presentable stimulus. You could then present these 20 random words to a user and say, "quick, say these 20 random words out loud!". The person says the words, and their voice, which is unique to them, is recorded and propagated across the network, where nodes confirm it as a biometric unique to them. It's a liveness challenge because it's difficult for a computer program to generate audio that (1) exhibits biometricity and (2) contains words seeded by a very recent block (i.e. generating such audio may be computationally expensive if the seeding block was found only moments ago). The fact that you were able to say the words so quickly indicates the signal wasn't generated in advance, before the random value existed, which is a sign the words came from a live human. A person's voice is a good (but not great) biometric, so it supplies the biometric features as well.
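As a sketch of how the word derivation could work (assuming SHA-256 over the block hash and a hypothetical fixed 1,000-word list; the real wordlist and hash scheme would be design choices):

```python
import hashlib
import random

# Hypothetical wordlist; a real deployment would fix a public,
# agreed-upon list of 1,000 words (these placeholders are stand-ins).
WORDLIST = [f"word{i:03d}" for i in range(1000)]

def words_from_block_hash(block_hash: str, count: int = 20) -> list[str]:
    """Deterministically derive `count` challenge words from a block hash.

    Any node can recompute the same words from the same hash, so a
    spoken response is verifiably tied to a recent block.
    """
    digest = hashlib.sha256(block_hash.encode()).digest()
    rng = random.Random(digest)  # PRNG seeded by the hash, not by time
    return [WORDLIST[rng.randrange(len(WORDLIST))] for _ in range(count)]
```

Because the derivation is deterministic, verifying nodes can recompute the expected words from the block and check them against a transcript of the spoken audio.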
Everything discussed so far is great, except for the fact that the NSA and big tech companies have been collecting voice data for years and would be really good at generating a voice simulation to say those 20 words within three minutes (or three seconds). We learned from the Titanic that nothing is ever going to be 100% secure, so voice data definitely has a role in our network. But for a more future-proof liveness challenge/biometric, imagine now that you're inputting an electrical signal that was generated from a block hash. The signal passes over a user's skin near their left ear and exits at their right ear. As it leaves the right ear, the signal still contains elements of the original, but it has been modulated by the unique biological properties of your skin. We then digitize the analog signal so it can be transmitted to a network of computers for analysis.
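A toy model of that idea, treating the ear-to-ear skin path as a linear filter (a simplifying assumption) and using a hypothetical impulse response to stand in for a user's unique biology:

```python
import hashlib
import random

def challenge_waveform(block_hash: str, length: int = 256) -> list[float]:
    # Derive a pseudorandom stimulus from the block hash, so the
    # electrical signal is tied to a recent block (liveness).
    rng = random.Random(hashlib.sha256(block_hash.encode()).digest())
    return [rng.uniform(-1.0, 1.0) for _ in range(length)]

def skin_modulation(signal: list[float],
                    impulse_response: list[float]) -> list[float]:
    # Toy model: the skin path acts as an FIR filter (convolution).
    # `impulse_response` stands in for the user's biology (biometricity);
    # the output still carries elements of the original stimulus.
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out
```

A verifier that knows the block-seeded stimulus could then check both that the recorded output correlates with it (liveness) and that the modulation matches the characteristics previously enrolled for that user (biometricity).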
The most efficient blockchain will be one where every single user has provably one account. Decentralized systems are plagued by spam, so if you can unclog the network from these spam attacks, you can make the fees cheaper for everyone. Right now, credit card companies take roughly 2% off every transaction; a financial network with 0% fees is obviously preferable, and money will always flow to the space with the highest liquidity (unless it's prone to corruption, like EOS). To build this one-person-one-vote network you need a biometric that exhibits liveness, and one method for doing that is described here, using the unique properties of brainwave biometricities.
A long-term validation metric can be used as an appendage to this, as pioneered by OpenMined.org. Basically, to validate that data isn't faked, a machine learning algorithm determines, over a time-consuming process, which data improve its recognition rates and which do not. If it were possible to fool a machine learning algorithm with bad data, ML theorists would already be doing it, and nobody would be paying for data. Thus, the most stable foundation for a network like this ultimately (and ironically) relies on artificial intelligence (in the long term, for cementing transactions) as well as fluid biometrics and a reputation system (in the short term, for immediate payouts).
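A minimal sketch of that validation idea, scoring a submitted data point by the accuracy gain it produces. The tiny nearest-centroid classifier and the value-as-accuracy-gain rule here are illustrative assumptions, not OpenMined's actual protocol:

```python
import math

def centroid(points: list[list[float]]) -> list[float]:
    n, dim = len(points), len(points[0])
    return [sum(p[i] for p in points) / n for i in range(dim)]

def nearest_centroid_accuracy(train, test) -> float:
    # train/test: lists of (features, label) pairs.
    # Compute one centroid per class, then classify test points
    # by nearest centroid and measure accuracy.
    by_label: dict = {}
    for feats, label in train:
        by_label.setdefault(label, []).append(feats)
    centroids = {lbl: centroid(pts) for lbl, pts in by_label.items()}
    correct = sum(
        min(centroids, key=lambda l: math.dist(centroids[l], feats)) == label
        for feats, label in test
    )
    return correct / len(test)

def data_value(train, candidate, test) -> float:
    """Accuracy gain from adding `candidate` to the training set.

    Positive value suggests the submitted data helped recognition;
    near-zero or negative suggests junk or faked data.
    """
    base = nearest_centroid_accuracy(train, test)
    return nearest_centroid_accuracy(train + [candidate], test) - base
```

The time-consuming part in practice is that each submission must be evaluated against held-out data before its contributor is paid, which is why this works as a long-term cementing mechanism rather than for immediate payouts.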
Finally, we provide a financial incentive for identifying fake accounts based on biometric signals by creating a challengers-verifiers market. Challengers, motivated by financial reward, check each biometric signal submitted and verified by block producers. Block producers are expected to submit fake data 1% of the time and to distribute new oblio as a reward for catching it. See our GitHub for the latest spec on this component.
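The incentive loop can be sketched as a simple simulation. The 1% plant rate comes from the description above; the reward amount and the challenger's detection rate are hypothetical parameters, not values from the spec:

```python
import random

FAKE_RATE = 0.01  # block producers plant known-fake signals at this rate
REWARD = 1.0      # hypothetical oblio reward per planted fake caught

def run_market(n_signals: int, detection_rate: float, seed: int = 0) -> float:
    """Simulate challengers auditing a stream of biometric signals.

    Returns the total reward paid out for catching planted fakes;
    a challenger's expected income scales with how well they detect.
    """
    rng = random.Random(seed)
    payout = 0.0
    for _ in range(n_signals):
        planted = rng.random() < FAKE_RATE                   # producer plants a fake
        caught = planted and rng.random() < detection_rate   # challenger flags it
        if caught:
            payout += REWARD
    return payout
```

The planted fakes act as a honeypot: because challengers can't tell planted fakes from real attacks, staying vigilant on every signal is the profit-maximizing strategy.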
Keep an eye out for our next post, where we'll delve into the fully blockchain-compatible algorithm that reached higher identification rates on brainwaves than any other study we've seen.