
Technology Preview for secure value recovery


At Signal, we want to make privacy simple. From the beginning, we’ve designed Signal so that your information is in your hands rather than ours. Technologies like the Signal Protocol secure your messages so that they are never visible to anyone but you and the intended recipients. Technologies like private contact discovery, private groups, and sealed sender mean that we don’t have a plaintext record of your contacts, social graph, profile name, location, group memberships, group titles, group avatars, group attributes, or who is messaging whom. Plaintext databases have never been our style. We don’t want to build a system where you trust us with your data; we want to build a system where you don’t have to.

We’ve been working on new techniques based on secure enclaves and key splitting that are designed to enhance and expand general capabilities for private cloud storage. Our aim is to unlock new possibilities and new functionality within Signal which require cross-platform long-term durable state, while verifiably keeping this state inaccessible to everyone but the user who created it.

Cloudy with a chance of pitfalls

As long as your device is intact (and not, say, underneath the wheel of a car), you have access to all of your Signal data. However, you may want to change devices, and accidents sometimes happen. The normal approach to these situations would be to store data remotely in an unencrypted database, but our goal has always been to preserve your privacy – so that isn’t an option for us.

As an example, social apps need a social network, and Signal’s is built on the phone numbers that are stored in your device’s address book. The address book on your device is in some ways a threat to the traditional social graphs controlled by companies like Facebook, since it is user-owned, portable, and accessible to any app you approve. For Signal, that has meant that we can leverage and contribute to a user-owned portable network without having to force users to build a new closed one from scratch.

However, many Signal users would also like to be able to communicate without revealing their phone numbers, in part because these identifiers are so portable that they enable a user’s conversation partner to contact them through other channels in cases where that might be less desirable. One challenge has been that if we added support for something like usernames in Signal, those usernames wouldn’t get saved in your phone’s address book. Thus if you reinstalled Signal or got a new device, you would lose your entire social graph, because it’s not saved anywhere else.

Other messaging apps solve this by storing a plaintext copy of your address book, social graph, and conversation frequency on their servers. That way your phone can get run over by a car without flattening your social graph in those apps, but it comes at a high privacy price.

Remote storage can have local consequences

It’s hard to remember now, but there was a period of time not long ago when “the cloud” hadn’t yet become an overused catchphrase. In those heady days of yore, people used to store things themselves – usually only on one device, and uphill both ways. These were hardscrabble people, living off of whatever meager storage they could scrounge together. They’d zip things, put them on zip drives, and hope for the best. Then one day almost everyone looked up towards the metaphorical sky and made a lot of compromises.

The promise of the cloud has always been deceptively simple. You choose a provider, hope that you made the right choice, give them your data, hope that they won’t look at it (or sell it to advertisers), and in exchange you get to be a little more cavalier and careless. You’re no longer one spilled coffee away from your unpublished novel forever remaining unpublished. Your phone can fall into a lake and last year’s lakeside pictures won’t sink to the bottom.

But connecting a bunch of unencrypted databases to the internet has not been working out very well for privacy lately.

Looking for a silver lining

Ideally, we could just encrypt everything that we want to store up there in the cloud – but there’s a catch. In the example of a non-phone-number-based addressing system, cloud storage is necessary for recovering the social graph that would otherwise be lost with a device switch or app reinstall. However, if the data were encrypted and the ciphertext remained safely in the cloud, the key to decrypt it could still be lost with your phone at the bottom of the lake.

That means the key either has to be something you can remember, or something that you can ensure will never end up at the bottom of a lake.

Many readers will recognize the familiar tradeoff here. Memorable passwords used with password-based encryption are often so weak that they are easy to brute force. Randomly generated passphrases strong enough to resist brute forcing are often too long to be memorable.

For example, consider a randomly generated 12-word BIP39 passphrase.

That has a 128-bit security level, and the mnemonic representation is probably easier to handle than 32 random hex characters (if you speak English), but it’s still largely unrealistic for users to remember in everyday use. That means it’s probably something users would need to write down and ensure isn’t lost (or found by someone else!).

Not everyone wants to do that. Ideally we could improve the situation for short memorable passphrases or PINs by making it harder to brute force them. One technique is to slow down the process of converting the passphrase or PIN into an encryption key (e.g. using PBKDF2, bcrypt, scrypt, Argon2, etc.) so that an attacker can’t attempt as many different combinations in a given period of time. However, there’s a limit to how slow things can get without affecting legitimate client performance, and some user-chosen passwords may be so weak that no feasible amount of “key-stretching” will prevent brute force attacks.
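As a rough sketch of what this kind of key stretching might look like in Rust (assuming the argon2 crate, which is our choice for illustration; the cost parameters below are illustrative too, not Signal’s):

    use argon2::{Algorithm, Argon2, Params, Version};

    // Stretch a low-entropy passphrase into a 32-byte key. Raising the
    // memory and time costs makes every brute-force guess more expensive,
    // for the attacker and the legitimate client alike.
    fn stretch_passphrase(passphrase: &[u8], salt: &[u8]) -> [u8; 32] {
        // Illustrative costs: 64 MiB of memory, 3 passes, 1 lane.
        let params = Params::new(64 * 1024, 3, 1, Some(32)).expect("valid Argon2 params");
        let argon2 = Argon2::new(Algorithm::Argon2id, Version::V0x13, params);

        let mut stretched = [0u8; 32];
        argon2
            .hash_password_into(passphrase, salt, &mut stretched)
            .expect("argon2 failure");
        stretched
    }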

Ultimately, brute force attacks are difficult to stop when they are “offline,” meaning that an attacker can cycle through guesses as quickly as their CPU or GPU will allow without being rate limited, and without any cap on the number of possible guesses.

Secure value recovery is designed to additionally strengthen passphrases by preventing “offline” attacks through a constraint on the maximum number of brute force guesses an attacker is allowed. Let’s take a look at how to build such a system.

Stretching beyond a KDF

Starting with a user’s passphrase or PIN, clients use Argon2 to stretch it into a 32-byte key.

From the stretched key, we generate two additional variables: an authentication token, and (combined with a randomly generated input) a master key.

This master key can then be used to derive additional application keys used to protect data stored in “the cloud.”

    stretched_key   = Argon2(passphrase=user_passphrase, output_length=32)

    auth_key        = HMAC-SHA256(key=stretched_key, "Auth Key")
    c1              = HMAC-SHA256(key=stretched_key, "Master Key Encryption")
    c2              = Secure-Random(output_length=32)

    master_key      = HMAC-SHA256(key=c1, c2)
    application_key = HMAC-SHA256(key=master_key, "Social Graph Encryption")

Notice that master_key incorporates c2 (256 bits of secure random data), so an attacker cannot brute force it, regardless of the passphrase that was chosen. Likewise, master_key incorporates all the entropy of the original passphrase, so it also remains strong even if c2 is compromised.
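As a rough illustration, the derivation above could look like this in Rust, assuming the hmac, sha2, and rand crates (our assumption; this is a sketch of the pseudocode, not Signal’s actual client code):

    use hmac::{Hmac, Mac};
    use rand::RngCore;
    use sha2::Sha256;

    type HmacSha256 = Hmac<Sha256>;

    fn hmac_sha256(key: &[u8], data: &[u8]) -> [u8; 32] {
        let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
        mac.update(data);
        mac.finalize().into_bytes().into()
    }

    struct DerivedKeys {
        auth_key: [u8; 32],
        c2: [u8; 32],
        master_key: [u8; 32],
        application_key: [u8; 32],
    }

    // stretched_key is the 32-byte Argon2 output from the previous step.
    fn derive_keys(stretched_key: &[u8; 32]) -> DerivedKeys {
        let auth_key = hmac_sha256(stretched_key, b"Auth Key");
        let c1 = hmac_sha256(stretched_key, b"Master Key Encryption");

        // c2 is fresh randomness; it is what later gets stored with the
        // service under the guess-limited recovery protocol described below.
        let mut c2 = [0u8; 32];
        rand::rngs::OsRng.fill_bytes(&mut c2);

        let master_key = hmac_sha256(&c1, &c2);
        let application_key = hmac_sha256(&master_key, b"Social Graph Encryption");
        DerivedKeys { auth_key, c2, master_key, application_key }
    }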

If someone loses their phone, the stretched_key, auth_key, and c1 variables can be regenerated at any time on the client, as long as the user remembers their chosen passphrase.

However, clients will need to be able to recover c2 (the output from the secure RNG) in order to reconstruct master_key.

We could “safely” store c2 on the service and authenticate access to it via auth_key. That would allow legitimate clients to fully reconstruct master_key, but wouldn’t allow an attacker who obtained access to the service to do so without knowledge of the original user passphrase.

However, it would allow an attacker with access to the service to run an “offline” brute force attack. Users with a BIP39 passphrase (as above) would be safe against such a brute force, but even with an expensive KDF like Argon2, users who prefer a more memorable passphrase might not be, depending on the amount of money the attacker wants to spend on the attack.

Ideally, we could somehow limit access to c2 through an additional mechanism that does not allow for such offline guessing.

Deus SGX machina

SGX allows applications to provision a “secure enclave” that is isolated from the host operating system and kernel, similar to technologies like ARM’s TrustZone. SGX enclaves also support remote attestation. Remote attestation provides a cryptographic guarantee of the code that is running in a remote enclave over a network.

Originally designed for DRM applications, most SGX examples imagine an SGX enclave running on an end user’s device. This would allow a server to stream media content to the user with the assurance that the client software requesting the media is the “authentic” software that will play the media only once, instead of custom software that reverse engineered the network API call and will publish the media as a torrent instead.

However, we can invert the traditional SGX relationship to run a secure enclave on the server. An SGX enclave on the server would enable a service to perform computations on encrypted client data without learning the content of the data or the result of the computation.

If we put pairs of (auth_key, c2) inside an enclave, and only allow retrieval of c2 by presenting the correct auth_key to the enclave over an encrypted channel, then the enclave could enforce a maximum failed guess count. For example, if we set the maximum failed guess count to 5, then an attacker who obtained access to the service (or the service operator) would only get 5 password guesses, rather than an unlimited number of guesses that they could attempt as fast as their hardware would allow.
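As a toy sketch (ours, not Signal’s enclave code), the bookkeeping could look like this, with a plain in-memory map standing in for enclave storage and a hypothetical numeric backup id:

    use std::collections::HashMap;

    const MAX_GUESSES: u8 = 5;

    struct Record {
        guess_count: u8,
        auth_key: [u8; 32],
        c2: [u8; 32],
    }

    #[derive(Debug)]
    enum RecoverError {
        NotFound,
        WrongKey { guesses_left: u8 },
        Exhausted,
    }

    struct ValueStore {
        records: HashMap<u64, Record>, // keyed by a hypothetical backup id
    }

    impl ValueStore {
        // Store (auth_key, c2) for an id, resetting the guess counter.
        fn store(&mut self, id: u64, auth_key: [u8; 32], c2: [u8; 32]) {
            self.records.insert(id, Record { guess_count: MAX_GUESSES, auth_key, c2 });
        }

        // Release c2 only for a correct auth_key; each failure burns a guess,
        // and the record is destroyed once the guesses are exhausted.
        fn recover(&mut self, id: u64, auth_key: &[u8; 32]) -> Result<[u8; 32], RecoverError> {
            let record = self.records.get_mut(&id).ok_or(RecoverError::NotFound)?;
            // A real enclave would compare keys in constant time
            // (e.g. with the subtle crate) rather than with ==.
            if record.auth_key == *auth_key {
                record.guess_count = MAX_GUESSES;
                return Ok(record.c2);
            }
            record.guess_count -= 1;
            let guesses_left = record.guess_count;
            if guesses_left == 0 {
                self.records.remove(&id);
                return Err(RecoverError::Exhausted);
            }
            Err(RecoverError::WrongKey { guesses_left })
        }
    }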

And since SGX supports remote attestation, clients can transmit these values into the enclave over an encrypted channel with the assurance that they are actually being stored and processed by an enclave, rather than by someone pretending to be one.

Unfortunately, storing a value in an enclave isn’t as simple as it might seem. You might imagine a data table that looks like this:

    | id | guess_count | auth_token   | c2             |
    |----|-------------|--------------|----------------|
    | 1  | 5           | cec823ce…c4  | e…d0a1aeb…e7   |
    | 2  | 5           | e8cd6f…be    | a1e…aad1ffee79 |

At first blush, the enclave could just maintain an encrypted table on disk, holding the encryption key inside the enclave. That obviously won’t work, however, because an attacker could just remove the disk, image it, replace the disk, run the guess counter down, then repeatedly roll back the storage volume using the image they took to reset the guess counter for effectively unlimited guesses.

This means all the state has to live in the enclave’s hardware-encrypted RAM, and never touch the disk. But, unfortunately, we live in an imperfect world that is full of surprises like power outages and hardware failures. We need to ensure that everyone’s data isn’t lost in case of a server failure by somehow replicating the data to other enclaves in other regions.

Et tu, Brute, and Brute, and Brute, and Brute?…

In the asynchronous replication model that is used by many relational database configurations, the primary database instance will continue to perform transactions without waiting for any replicas to acknowledge them. Replicas can catch up over time. A slow replica doesn’t bog everything down.

Because we want to limit the number of times that any potential attacker can attempt to retrieve a value, the retry count is a critical piece of information. Given this reality, there are numerous problems with traditional asynchronous replication. There isn’t anything preventing a malicious operator from starting 1,000 replicas, for example. By selectively isolating these replicas and suppressing any transactions from the primary instance that decrement the retry counter, each of these replicas becomes a new opportunity to keep on guessing. The malicious operator now has 5,000 retries instead of 5.

If we switch to a synchronous model where the primary database always waits for replicas to respond before continuing, we solve one problem and then end up creating many more. This kind of pairwise replication can work for a simple setup where there are only two servers (the primary and the replica) but it quickly falls apart as more replicas are added. If any replica stops responding, the entire set has to stop responding.

We could add logic to the synchronous model to deal with the inevitable outages of an imperfect world, but if a malicious operator is able to tell other members to ignore a replica that has gone missing, they are once again in a position to selectively segment the network and give themselves more guesses against a marooned replica.

What we’re really looking for is a way to achieve consensus about the current state of the retry count.

Raft: Distributed but not adrift

According to the Raft website:

“Consensus involves multiple servers agreeing on values. Once they reach a decision on a value, that decision is final.”

This is exactly what we want. Ben Johnson created an interactive visualization that explains the basic concepts.

Raft is an intuitive system that we knew we could use for secure value recovery, but we had to overcome a few challenges first. To begin with, few of the existing open source Raft libraries were capable of doing what we needed while operating within the constrained environment of an SGX enclave.

We chose to use Rust for our Raft implementation in order to take advantage of the type-safety and memory-safety properties of that language. We also took steps to make sure the code is readable and easy to verify. The canonical Raft specification is even included as comments within the source, and the instructions are executed in the same order. Our focus was on correctness, not performance, so we did not deviate from the Raft spec even if there were opportunities to speed up certain operations.
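As a flavor of what the spec pins down, here is one of its rules sketched in Rust (our own illustration, not an excerpt from the service’s source): a leader may only advance its commit index to an entry that a majority of replicas have replicated, and once committed, an entry is final.

    // Raft's commit rule, roughly: the leader may advance commitIndex to the
    // largest N such that a majority of matchIndex values are >= N and the
    // entry at N was written in the leader's current term. Committed entries
    // are final and never rolled back.
    fn advance_commit_index(
        match_index: &[u64], // highest replicated index per server, leader included
        log_terms: &[u64],   // term of each log entry; position i holds index i + 1
        current_term: u64,
        commit_index: u64,
    ) -> u64 {
        let mut sorted = match_index.to_vec();
        sorted.sort_unstable();
        // The highest index that a majority of servers have replicated.
        let majority_index = sorted[(sorted.len() - 1) / 2];

        let mut new_commit = commit_index;
        for n in (commit_index + 1)..=majority_index {
            // Only entries from the leader's own term are committed by counting.
            if log_terms[(n - 1) as usize] == current_term {
                new_commit = n;
            }
        }
        new_commit
    }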

Shard without splintering

With Raft, we gain the benefits of a strongly consistent and replicated log that is a nice foundation for our purposes. However, the Signal user base is constantly growing, so a static collection of machines in a single consensus group won’t be enough to handle a user base that keeps getting bigger.

We need a way to add new replicas. We also need a way to replace machines when they fail. It’s tempting to think of these as two separate concerns, and most people treat them as such. The routine process of node replacement becomes second nature, while the less-frequent act of setting up a new consensus group remains a white-knuckle affair.

We realized that we could solve both problems simultaneously if we had a mechanism to seamlessly transfer encrypted ranges of data from one consensus group to another. This would allow us to replace a failed node in a replica group by simply creating a new replica group with a brand-new set of healthy nodes. It would also allow us to re-balance users between an existing replica group and a new one.

In order to make these data transfers possible without interrupting any connected clients, we developed a traffic director that consists of a frontend enclave which simply forwards requests to backend Raft groups and re-forwards them when the group topology changes. We also wanted to offload the client handshake and request validation process to stateless frontend enclaves that are designed to be disposable. This reduces load and simplifies logic for the backend replicas that are storing important information in volatile encrypted enclave RAM.
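A toy sketch of the routing idea, with hypothetical names: each backend Raft group owns a contiguous range of the backup-id space, and the frontend just looks up the owning group for each request. A range transfer is then just a topology update.

    // Toy routing table (hypothetical structure, not the service's actual one).
    struct GroupRange {
        start: u64, // inclusive
        end: u64,   // exclusive
        group_id: u32,
    }

    // The frontend looks up which group owns a given backup id and forwards
    // the request there; when the topology changes, it simply re-forwards.
    fn route(topology: &[GroupRange], backup_id: u64) -> Option<u32> {
        topology
            .iter()
            .find(|r| r.start <= backup_id && backup_id < r.end)
            .map(|r| r.group_id)
    }

    fn main() {
        let topology = vec![
            GroupRange { start: 0, end: 1u64 << 63, group_id: 1 },
            GroupRange { start: 1u64 << 63, end: u64::MAX, group_id: 2 },
        ];
        assert_eq!(route(&topology, 42), Some(1));
    }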

The distributed enclaves all verify each other using the same MRENCLAVE attestation checks that the Signal clients perform to ensure that their code has not been modified. A monotonically increasing timestamp is also synchronized between enclaves to ensure that only fresh attestation checks are used, and communication between enclaves leverages constant-time Curve25519 and AES-GCM implementations for end-to-end encryption using the Noise Protocol Framework.
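A minimal sketch of that freshness rule (our illustration with hypothetical names; the real synchronization protocol is more involved):

    // Tracks the highest attestation timestamp seen from a peer enclave and
    // rejects anything stale, so replayed (old) attestation evidence is unusable.
    struct FreshnessTracker {
        last_timestamp: u64,
    }

    impl FreshnessTracker {
        fn accept(&mut self, attestation_timestamp: u64) -> bool {
            if attestation_timestamp > self.last_timestamp {
                self.last_timestamp = attestation_timestamp;
                true
            } else {
                false // stale or replayed attestation
            }
        }
    }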

By treating node replacement as an opportunity to simply set up a new replica group, we reduce complexity and leverage a predictable process that can easily expand with Signal’s growing user base.

Let’s take a look:

- The service is composed of many SGX cluster shards spanning multiple data centers.
- Clients connect to a frontend node, establish an encrypted channel, and perform SGX remote attestation.
- Clients submit requests over the encrypted channel, either storing their c2 value or attempting to retrieve their c2 value.
- The client’s request is replicated across the shard via Raft. All replicas in the shard process the request and respond to the client.

[Animation demonstrating secure value recovery requests.]

The best defense is a good LFENCE

SGX enforces strict checks during the attestation process. These requirements include always using the latest CPU microcode (which must be loaded before the OS and therefore updated at the BIOS level), as well as disabling Hyper-Threading and the Integrated Graphics Processor. If a service operator falls behind on these patches, the attestation checks that clients perform will begin to fail. These enforced upgrades provide a level of protection as new attacks and mitigations are discovered, but we wanted to take things further.

Many of the recent exploits that have led to CPU information leak vulnerabilities are the result of speculative execution. Compilers such as LLVM have started to implement techniques like Speculative Load Hardening. One of the approaches to this problem is to add LFENCE instructions to an application. According to the LLVM documentation:

“This ensures that no predicate or bounds check can be bypassed speculatively. However, the performance overhead of this approach is, simply put, catastrophic. Yet it remains the only truly ‘secure by default’ approach known prior to this effort and serves as the baseline for performance.”

A secure value recovery service will still be perfectly functional even if it takes slightly longer to process results. Once again, the focus was on correctness instead of speed, so we forked BOLT, “a post-link optimizer developed to speed up large applications.” Then, in an ironic twist, we added an automated LFENCE inserter that significantly slows down application performance (but makes the operations more resilient). Additionally, BOLT automatically inserts retpolines to help mitigate speculative execution exploits.

LFENCE insertion is enforced as part of the build process, and an automated check verifies the presence of these instructions during compilation. Because no clever optimizations are taking place, and an LFENCE is proactively inserted before every conditional branch without any consideration for performance impact, the correctness of these instructions is easier to manually verify as well.
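BOLT does this at the binary level, but the idea is easy to show by hand. Here is a sketch in Rust using the x86 lfence intrinsic: the fence keeps a load from executing speculatively before the bounds check that guards it has actually resolved (illustrative only; it is not how the service’s binaries are produced).

    #[cfg(target_arch = "x86_64")]
    fn read_checked(table: &[u8], untrusted_index: usize) -> Option<u8> {
        if untrusted_index < table.len() {
            // Serialize execution here: without the fence, the CPU could
            // speculatively perform the load below with an out-of-bounds
            // index while it guesses the outcome of the bounds check.
            unsafe { core::arch::x86_64::_mm_lfence() };
            Some(table[untrusted_index])
        } else {
            None
        }
    }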

Future possibilities

All of this adds up to a secure enclave that limits the number of recovery attempts that are possible against a value synchronized across nodes in hardware-encrypted RAM.

In the longer term, we’d like to mix in component recovery splitting across other hardware enclave and security module technologies, as well as component recovery splitting across organizations as it’s deployed in other places. We’d eventually like to get to a place where the protections afforded to us by secure value recovery incorporate a lattice of mixed hardware and hosting.

While it’s not difficult to split c2 using conventional techniques like Shamir Secret Sharing, maintaining a single auth_key (which would be vulnerable to offline guessing) across all these nodes would undermine any value from secret sharing. So instead, we could have clients reconstruct an auth_key by asking each server in a quorum to evaluate a function of the user’s password and each server’s secret key, without the server learning the input password or the output (an Oblivious PRF), and then hashing these OPRF outputs together.
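A sketch of that OPRF flow in the 2HashDH style, using the Ristretto group from the curve25519-dalek crate (a library choice we are assuming for illustration): the client blinds a hash of its password, the server evaluates with its secret key, and the client unblinds.

    use curve25519_dalek::ristretto::RistrettoPoint;
    use curve25519_dalek::scalar::Scalar;
    use rand::rngs::OsRng;
    use sha2::Sha512;

    // Client: hash the password to a group element and blind it with a
    // random scalar r, so the server never sees H(password).
    fn blind(password: &[u8]) -> (Scalar, RistrettoPoint) {
        let r = Scalar::random(&mut OsRng);
        let h = RistrettoPoint::hash_from_bytes::<Sha512>(password);
        (r, h * r)
    }

    // Server: multiply the blinded element by its secret key k, learning
    // neither the password nor the final OPRF output.
    fn evaluate(blinded: RistrettoPoint, k: &Scalar) -> RistrettoPoint {
        blinded * k
    }

    // Client: remove the blind; the result is H(password) * k.
    fn unblind(evaluated: RistrettoPoint, r: &Scalar) -> RistrettoPoint {
        evaluated * r.invert()
    }

Hashing the unblinded outputs from a quorum of servers into auth_key would mean every password guess costs one round trip per server, each of which can be rate limited, so no single party can mount an offline attack.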

Conclusion

The source code and documentation for the secure value recovery service are now available. We appreciate any feedback. Moving forward, we are evaluating opportunities to incorporate this recovery method into the application, and we’ll share additional details about what we discover in the process.

Acknowledgments

Thanks to Jeff Griffin for doing all the heavy lifting on this at Signal, and Nolan Leake for his significant contributions to the service’s design and codebase.
