Trust

If someone controlled a “funded” private key and agreed to run a program governing the release of that key, nothing would stop them from simply refusing to run it. Some people aren’t honest and would seek to disrupt the process. And yet, if you repeated this same experiment at scale, some people would still stay honest. Why is that? Well, not everyone is unethical (consider the people in Japan who routinely return lost wallets with their full sums intact). Then there are people who are only thieves on occasion: someone who would never rob a bank might still pocket a wallet full of cash found on an isolated street.

Opportunistic attackers aren’t the kind of people who would rob a bank or go out of their way to steal money. In our experiment, you could defeat them with a trusted execution environment, or TEE. But what happens if security stops there? Some attackers are motivated. Much like bank robbers, certain people would work hard to attack the TEE and extract the private keys. And sometimes a TEE fails outright, as in the recent Intel CPU exploits.
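To make the thought experiment concrete, here is a minimal sketch of the kind of key-release logic one might pin inside a TEE. Every name and condition in it is a hypothetical placeholder, not a real enclave API:

```python
# A minimal sketch (not a real enclave API) of the "program that controls
# the release of the private key" from the experiment above. The timelock
# condition and key value are hypothetical placeholders.
import time

RELEASE_AT = time.time() + 3600         # release condition: a 1-hour timelock
SECRET_KEY = "hex-encoded-private-key"  # provisioned into the enclave at setup

def request_key() -> str:
    """Hand out the key only once the agreed condition holds."""
    # Outside a TEE, nothing stops the operator from deleting this check.
    # Inside one, an opportunistic attacker can't simply read the key out.
    if time.time() < RELEASE_AT:
        raise PermissionError("release condition not yet met")
    return SECRET_KEY
```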

Taking the sum of honest participants, opportunistic attackers, and people who are technically unable to run an exploit: could you design a system where the losses from failed contracts are covered by the fees generated by successful ones? In such a system, security wouldn’t need to be perfect. Indeed, security cannot be perfect anyway. But why stop there? There are more ways to improve security. The most obvious tweak is that, wherever possible, contracts should be filled by multiple parties. Exchange agreements and futures make this simple, but for highly unique, non-fungible agreements it is not always possible.
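As a back-of-the-envelope check on that idea, here is a small sketch. All of the figures are illustrative assumptions, not measurements:

```python
# Rough solvency check: fees from successful contracts fund a pool that
# makes failed contracts whole. All numbers are assumed for illustration.
fee_rate     = 0.01    # 1% fee charged on each successful contract
failure_rate = 0.001   # fraction of contracts that get compromised
contracts    = 10_000
avg_value    = 1_000   # average funds locked per contract ($)

fees_collected  = contracts * (1 - failure_rate) * avg_value * fee_rate
expected_losses = contracts * failure_rate * avg_value

print(f"fees: ${fees_collected:,.0f}, losses: ${expected_losses:,.0f}")
# The pool stays solvent while fee_rate * (1 - failure_rate) > failure_rate,
# i.e. security only needs to be good enough, not perfect.
```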

We wish to make our system harder to attack by motivated attackers. We have added TEEs and split up our business between counter-parties wherever possible. The next layer to improve is the cryptography. Rather than generating a private key inside a single enclave, it should be shared among N peers using threshold signing. Again, none of this is perfect.
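Real threshold-signing schemes never reconstruct the key in one place, but the t-of-N idea is easy to see with plain Shamir secret sharing. The following is an illustrative sketch, with the field size and share counts chosen arbitrarily:

```python
# A minimal sketch of Shamir secret sharing over a prime field. Real
# deployments would use a threshold *signing* scheme (the key is never
# reassembled in one place); this only illustrates the t-of-N property.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them can reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

key = secrets.randbelow(PRIME)          # stand-in for a private key
shares = split(key, n=5, t=3)           # 5 peers, threshold of 3
assert reconstruct(shares[:3]) == key   # 3 shares suffice
assert reconstruct(shares[1:4]) == key  # any 3 shares, not a specific 3
```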

Could a motivated attacker still compromise the system? Yes: if they went after multiple signers and managed to bypass their TEEs, they could steal the funds. But in the worst case they will have stolen only the amount at risk with a single counter-party. If everyone limits their exposure to, say, $1,000 per counter-party, you eventually reach a threshold where the cost of attack is simply unreasonable for these types of contracts, and the success fees help to cover riskier use-cases.
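A rough sketch of that cost-of-attack argument, with every figure an assumption chosen purely for illustration:

```python
# Illustrative cost-of-attack arithmetic; all figures are assumptions.
exposure_cap   = 1_000   # max funds at risk per counter-party ($)
threshold      = 3       # t-of-N signers that must all be compromised
tee_break_cost = 50_000  # assumed cost to defeat one signer's TEE ($)

attack_cost = threshold * tee_break_cost
payoff      = exposure_cap

print(f"cost ${attack_cost:,} vs payoff ${payoff:,}")
# Spending $150,000 to steal $1,000 is economically irrational, and the
# margin grows with the threshold and shrinks with the exposure cap.
```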

Inherently, this design disperses risk while reducing the impact of hacks. You cannot do this with generic private computation, because every secret is potentially unique (like medical records), irreplaceable, and should never be leaked. Private keys, on the other hand, are just random noise, and $50 of my money is as good as $50 of anyone else’s.

 