Expl0itch4ins

Update: 19/02/2017 - added a link to the discussion on Hacker News. I also added an example.
Update: 21/02/2017 - added discussion of obfuscated exploits, early disclosure penalties, incentives, and scalability

Bug bounties suck. Researchers routinely don’t get paid for their work and vendors continue to get away with the same shitty behavior. It’s a system that lacks any kind of accountability and only benefits the company.

Solution: Do it as a smart contract on a blockchain.

An example #

  1. A smart contract to audit a C-based program is written. It includes a test case to see if a file with a specific name has been created under the process’ permissions. It also includes information about the program.
  2. A researcher finds a bug and uses it to write a buffer overflow exploit. The exploit is designed to pass the test case and is written in a special domain-specific language for exploit code, for security reasons (more on this below).
  3. They go through the protocol to claim the reward by committing the exploit and a payment address.
  4. The network runs the exploit against the software within a virtual machine and runs the test case to check for the file. If a valid exploit was found, the process should have been hijacked into writing the file. The validity of an exploit thus forms part of the consensus rules for the exploit blockchain (a sketch of this check follows below).
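
Here is a minimal sketch of what step 4's check might look like, assuming a hypothetical sandbox where `subprocess` stands in for the real VM isolation layer; `MARKER_PATH`, the timeout, and the function names are all invented for the example.

```python
# A minimal sketch of the consensus check; subprocess stands in for the
# real sandboxed VM, and MARKER_PATH / the timeout are assumptions.
import os
import subprocess
import tempfile

MARKER_PATH = "pwned.txt"  # file the contract's test case looks for

def exploit_is_valid(binary_path: str, exploit_input: bytes) -> bool:
    """Feed the exploit to the target and check whether the process was
    hijacked into creating the marker file under its own permissions."""
    with tempfile.TemporaryDirectory() as workdir:
        # In a real node this would run inside a deterministic VM so that
        # every validator reaches the same verdict (a consensus rule).
        subprocess.run(
            [binary_path],
            input=exploit_input,
            cwd=workdir,
            timeout=10,   # bound execution so validation always terminates
            check=False,  # a crash that never writes the file is no claim
        )
        return os.path.exists(os.path.join(workdir, MARKER_PATH))
```

Because the check is a function of nothing but the binary and the exploit input, every full node can re-run it independently and reach the same verdict.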

This process means that the complexity of the program under audit doesn't need to be accounted for: only the observable results of an exploit need to be checked. But one also needs to be careful with obfuscated exploits, as these would make it much harder for the vendor to release a patch.

One possible solution to the obfuscated exploit issue is to create a special language specifically for describing exploitable code that can then be used to express highly compact exploits. Such a language might not necessarily be Turing complete to start with, but it would be formulated in a way that makes writing obfuscated code very difficult.

This exploit DSL would be closely tied to how the test cases work, so that many details of an exploit can be omitted to make the code more readable. Also note how this process doesn't depend on trust, since the results of all computations are verifiable; hence there is no need to introduce a trusted third party. It's also important to differentiate this from so-called "oracle" schemes, which depend on a number of trusted participants who vouch for external state or some subset of trusted operations.
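
As a purely illustrative sketch of the kind of compact, declarative exploit description such a DSL might allow (no such language exists yet; every field name here is invented for the example):

```python
# A purely illustrative sketch: the DSL does not exist yet, and every
# field name below is invented for the example.
from dataclasses import dataclass

@dataclass
class ExploitDescription:
    target_hash: str   # hash of the audited binary (H1 in the contract)
    bug_class: str     # e.g. "stack-buffer-overflow"
    entry_point: str   # input channel the payload is delivered through
    payload: bytes     # the raw input that triggers the hijack

# The contract's test case already supplies the goal (create the marker
# file), so the description only says how the input reaches the program.
example = ExploitDescription(
    target_hash="...",  # elided; would be H1 from the contract_tx
    bug_class="stack-buffer-overflow",
    entry_point="stdin",
    payload=b"A" * 264 + b"\xde\xad\xbe\xef",
)
```

Because the goal lives in the test case, the description only has to say how the payload reaches the program, which leaves little room for obfuscation.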

This scheme can be done entirely with regular proof-of-work, as every full node can easily check a transaction's validity for themselves based on the program's response to exploits. So this whole thing is really just another part of the consensus (which is what makes it a smart contract.)

Technical details #

1. contract_tx = Hash(binary_file) as H1, URL(binary_file) as URL1

2. exploit_tx = Hash(exploit_code + payout_pub_key) as H2

3. disclosure_tx = RSA_Encrypt(exploit_code, vendor_pub_key) as E

4. Vendor receives encrypted exploit

5. The vendor (actually) pushes a patch to their customers on time.
Because if they don't, the smart contract punishes them. Can you hear the trolls singing?

6. confirm_tx (optional)
The vendor signals to the researcher that they may disclose the exploit now.

7. release_tx (the researcher actually gets paid for once)
The researcher releases the exploit code and claims the reward.
Input = exploit_code + payout_pub_key such that Hash(exploit_code + payout_pub_key) == H2.
Output = anything.
Sig = Must be signed with the ECDSA key pair behind payout_pub_key!

8. Validate the release TX

9. patch_tx = Hash(new_binary_file) as H3, URL(new_binary_file) as URL2, code_changes as diff

10. Validate the patch TX.

Goto: step 1 again - continue running exploits and allocating rewards.

Most of the complexity in this scheme exists to stop people from stealing exploits and claiming that they wrote them. If you didn't use commitments, someone could just copy the exploit hash directly from your transaction and try to confirm it before you. I suppose this isn't that much different from how blockchain notaries work, if that helps.
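
To make the front-running protection concrete, here is a sketch of the commit-and-reveal flow from steps 2 and 7, assuming SHA-256 stands in for the protocol's Hash():

```python
# A sketch of the commit-and-reveal flow; SHA-256 stands in for Hash().
import hashlib

def commit(exploit_code: bytes, payout_pub_key: bytes) -> bytes:
    """exploit_tx (step 2): publish only this hash as H2."""
    return hashlib.sha256(exploit_code + payout_pub_key).digest()

def claim_is_valid(exploit_code: bytes, payout_pub_key: bytes,
                   h2: bytes) -> bool:
    """release_tx (step 7): the reveal must reproduce H2, and the TX must
    also carry a signature from the key pair behind payout_pub_key."""
    return hashlib.sha256(exploit_code + payout_pub_key).digest() == h2

# Copying H2 into your own exploit_tx gains you nothing: the researcher's
# reveal only opens a commitment that binds *their* payout key, and you
# cannot sign a claim for a key pair you do not control.
```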

Scalability and incentives #

The problem of scaling such an expensive blockchain, and designing it in a way that has proper full node incentives, is a huge problem by itself. I still need to think more about this issue, but so far I like the idea of incentivized full nodes.

This would be a reward system for maintaining the blockchain. In Bitcoin, rewards are given out for ordering events and validating transactions. In the exploit chain, rewards would be given for checking exploits on behalf of researchers and allocating rewards.

There will likely need to be cryptographic proofs of storage on the blockchain: a file storage protocol would work for proving that a node actually has a full copy. Periodic audits will need to be done to check that full nodes are actually providing the service that they're paid for.
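
A simple challenge-response audit could look something like the following sketch; the chunk size, nonce length, and function names are assumptions for the example, not a finished protocol.

```python
# A sketch of a challenge-response storage audit. In a real chain the
# challenge randomness would be derived from block data rather than
# from random.randrange().
import hashlib
import os
import random

CHUNK_SIZE = 1 << 20  # 1 MiB chunks; an arbitrary choice for the sketch

def make_challenge(num_chunks: int) -> tuple[int, bytes]:
    """Pick a chunk index and a fresh nonce for the node to answer."""
    return random.randrange(num_chunks), os.urandom(16)

def respond(chain_path: str, index: int, nonce: bytes) -> bytes:
    """Only a node that actually stores chunk `index` can compute this."""
    with open(chain_path, "rb") as f:
        f.seek(index * CHUNK_SIZE)
        chunk = f.read(CHUNK_SIZE)
    return hashlib.sha256(nonce + chunk).digest()

def audit(expected_chunk: bytes, nonce: bytes, response: bytes) -> bool:
    """The auditor, who knows the chunk, checks the node's answer."""
    return hashlib.sha256(nonce + expected_chunk).digest() == response
```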

I will keep thinking on the scalability issue.

Optional crowdsourcing #

The contracts can specify that patch updates may be submitted by anyone, rather than always by the vendor, with as few or as many controls in place as required (manual approval, or having to pass test suites.)

Edit: Most likely this won't work. There's no way to autonomously prove that additional bugs haven't been added to the software. At best you can use a smart contract to automatically flag a patch as ready for review by a group of human oracles based on tests passing, but that is really no different to how software is already written. I've left the original scheme in below.

1. good_samaritan_tx = Commit to Hash(patch_code + payout_address) as H4.

2. samaritan_claim_tx
Input = patch_code + payout_address such that Hash(patch_code + payout_address) == H4
Output = anything
Sig = signed with the payout address.

3. The patch code is validated against the exploit; if it defeats it, the transaction is valid (sketched below).
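
Here is a sketch of step 3's validation, reusing the hypothetical exploit_is_valid() helper from the earlier example: a claim is only valid if the reveal matches H4 and the committed exploit no longer hijacks the patched binary.

```python
# A sketch of validating a samaritan_claim_tx. exploit_is_valid() is the
# hypothetical VM check from the earlier sketch.
import hashlib

def samaritan_claim_is_valid(patch_code: bytes, payout_address: bytes,
                             h4: bytes, patched_binary: str,
                             exploit_input: bytes) -> bool:
    # 1. The reveal must match the earlier commitment H4.
    if hashlib.sha256(patch_code + payout_address).digest() != h4:
        return False
    # 2. The patch must defeat the exploit: re-run it against the
    #    patched binary and require that the marker file never appears.
    return not exploit_is_valid(patched_binary, exploit_input)
```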

Private exploit sales #

In theory, zero-knowledge proofs can be used to prove that a person has produced a valid exploit for a buyer. This satisfies the requirement of doing trustless private purchases for an exploit (since the existing contract requires the exploit to be revealed for the reward to be given.)

The ZK proof approach means that after you encrypt the exploit, your proof of its validity is given in zero knowledge. If you try to get smart and produce a valid proof but encrypt an invalid exploit, the vendor can always release what you provided as the encrypted exploit to prove to the network that you are cheating. This requires a clearing phase for security.
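
Here is a sketch of just the clearing-phase control flow; zk_verify() and decrypt() are stubs standing in for a real zero-knowledge verifier and the buyer's decryption, so this shows the dispute logic rather than any concrete proof system.

```python
# A sketch of the clearing-phase dispute logic only. zk_verify() and
# decrypt() are placeholders; no concrete proof system is implied here.
from typing import Callable

def settle_private_sale(
    proof: bytes,
    ciphertext: bytes,
    zk_verify: Callable[[bytes, bytes], bool],  # stub: verifies the ZK proof
    decrypt: Callable[[bytes], bytes],          # stub: buyer's decryption
    exploit_is_valid: Callable[[bytes], bool],  # re-run the consensus check
) -> str:
    if not zk_verify(proof, ciphertext):
        return "rejected"  # no valid proof, no payment
    # Funds are held during a clearing window before release.
    exploit = decrypt(ciphertext)
    if not exploit_is_valid(exploit):
        # The buyer publishes the ciphertext (and its decryption) on-chain
        # to prove the seller cheated despite the proof; the sale reverses.
        return "disputed"
    return "settled"  # window elapsed without dispute: seller is paid
```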

Thus autonomous private exploit markets are possible in theory.

Btw: something else I just realized is that this protocol has its place within existing bug bounties too. The use-case would be to reward researchers for their work much earlier on, without having to wait for the patch time-frame set forth within the contract to elapse before doing a disclosure.

Why does this matter #

Under the current bug bounty system, researchers routinely go unpaid, vendors get away with the same shitty behavior, and there is no accountability. By representing the interests of the customer, researcher, and vendor as a smart contract, we can build an autonomous bug bounty system / exploit marketplace that will produce much better results.

Feedback so far #

There’s a good discussion about this going on over at Hacker News at the moment.

People have raised some interesting points there. There are still a lot of open questions about how best to achieve this, so I’d love to know what your thoughts are on this, dear reader.

 