Quantum cryptography involves a surprisingly elaborate suite of specialized protocols, which we term “QKD protocols.” Many aspects of these protocols are unusual – both in motivation and in implementation – and may be of interest to specialists in communications protocols.
Fig.1 Effects of an unbalanced interferometer on a photon
We have designed this engine so it is easy to “plug in” new protocols, and expect to devote considerable time in coming years to inventing new QKD (Quantum Key Distribution) protocols and trying them in practice. As shown in Fig. 1, these protocols are best described as sub-layers within the QKD protocol suite. Note, however, that these layers do not correspond in any obvious way to the layers in a communications stack, e.g., the OSI layers. As will be seen, they are in fact closer to being pipeline stages.
Sifting is the process whereby Alice and Bob winnow away all the obvious “failed qubits” from a series of pulses. As described in the introduction to this section, these failures include those qubits where Alice’s laser never transmitted, Bob’s detectors didn’t work, photons were lost in transmission, and so forth. They also include those symbols where Alice chose one basis for transmission but Bob chose the other for receiving.
At the end of this round of protocol interaction – i.e. after a sift and sift response transaction – Alice and Bob discard all the useless symbols from their internal storage, leaving only those symbols that Bob received and for which Bob’s basis matches Alice’s. In general, sifting dramatically prunes the number of symbols held by Alice and Bob. For instance, assume that 1% of the photons that Alice tries to transmit are actually received at Bob and that the system noise rate is 0. On average, Alice and Bob will happen to agree on a basis 50% of the time in BB84. Thus only 50% x 1% of Alice’s photons give rise to a sifted bit, i.e., 1 photon in 200. A transmitted stream of 1,000 bits therefore would boil down to about 5 sifted bits.
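The arithmetic above is easy to check with a short simulation. The sketch below (all names are ours, not part of any deployed QKD implementation) models only the two effects assumed in the text: photon loss and basis mismatch.

```python
import random

def simulate_sifting(n_pulses, detection_prob=0.01, seed=0):
    """Count pulses that survive sifting in an idealized BB84 run.

    A pulse survives only if Bob detects the photon (probability
    detection_prob) AND Alice's and Bob's randomly chosen bases
    match (probability 1/2). Noise is taken to be zero, as in the
    example in the text.
    """
    rng = random.Random(seed)
    sifted = 0
    for _ in range(n_pulses):
        detected = rng.random() < detection_prob
        bases_match = rng.random() < 0.5
        if detected and bases_match:
            sifted += 1
    return sifted

# With a 1% detection rate, roughly 1 pulse in 200 yields a sifted
# bit, so 1,000,000 pulses should give on the order of 5,000.
print(simulate_sifting(1_000_000))
```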
Error correction allows Alice and Bob to determine all the “error bits” among their shared, sifted bits, and correct them so that Alice and Bob share the same sequence of error-corrected bits. Error bits are ones that Alice transmitted as a 0 but Bob received as a 1, or vice versa. These bit errors can be caused by noise or by eavesdropping.
Error correction in quantum cryptography has a very unusual constraint, namely, evidence revealed in error detection and correction (e.g. parity bits) must be assumed to be known to Eve, and thus to reduce the hidden entropy available for key material. As a result, there is very strong motivation to design error detection and correction codes that reveal as little as possible in their public control traffic between Alice and Bob. Our first approach for error correction is a novel variant of the Cascade protocol and algorithms. The protocol is adaptive, in that it will not disclose too many bits if the number of errors is low, but it will accurately detect and correct a large number of errors (up to some limit) even if that number is well above the historical average.
Our version works by defining a number of subsets (currently 64) of the sifted bits and forming the parities of each subset. In the first message, the list of subsets and their parities is sent to the other side, which then replies with its version of the parities. The subsets are pseudo-random bit strings generated by a Linear-Feedback Shift Register (LFSR), and each is identified by a 32-bit seed for the LFSR. Once an error bit has been found and fixed, both sides inspect their records of subsets and subranges, and flip the recorded parity of those that contained that bit. This will clear up some discrepancies but may introduce new ones, and so the process continues.
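The subset-parity mechanism can be sketched as follows. This is an illustrative reconstruction, not the actual protocol engine code: the LFSR tap positions and all identifiers are our own choices, and a real implementation would also perform the subrange bisection that follows a parity mismatch.

```python
def lfsr_subset(seed, length, taps=(31, 21, 1, 0)):
    """Expand a 32-bit seed into a pseudo-random membership string.

    A Fibonacci LFSR: output bit i says whether sifted bit i belongs
    to the subset. Both sides derive the identical subset from the
    shared seed, so only the seed and parity cross the channel.
    """
    state = seed & 0xFFFFFFFF
    membership = []
    for _ in range(length):
        membership.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << 31)
    return membership

def subset_parity(bits, membership):
    """XOR of the sifted bits selected by the membership string."""
    p = 0
    for b, m in zip(bits, membership):
        if m:
            p ^= b
    return p

alice_bits = [1, 0, 1, 1, 0, 0, 1, 0]
bob_bits   = [1, 0, 1, 0, 0, 0, 1, 0]   # one flipped bit (index 3)
members = lfsr_subset(0xDEADBEEF, len(alice_bits))
# A parity mismatch signals an odd number of errors in the subset.
print(subset_parity(alice_bits, members), subset_parity(bob_bits, members))
```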
Since these parity fields are revealed in the interchange of “error correction” messages between Alice and Bob, these bits must be taken as known to Eve. Therefore, the QKD protocol engine records the amount of information revealed (lost) due to parity fields, and later requires a compensating level of privacy amplification to reduce Eve’s knowledge to acceptable levels.
Privacy amplification is the process whereby Alice and Bob reduce Eve’s knowledge of their shared bits to an acceptable level. It is closely related to, though distinct from, the technique known as advantage distillation.
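A standard way to realize privacy amplification, sketched here under our own parameter choices, is hashing with a randomly chosen binary Toeplitz matrix, a 2-universal family; the matrix itself can be announced publicly, and the output length is set to compensate for the bits assumed leaked during error correction.

```python
import random

def toeplitz_hash(bits, first_col, first_row):
    """Compress `bits` with a binary Toeplitz matrix.

    The matrix is fully defined by its first column and first row;
    output length equals len(first_col). Arithmetic is over GF(2),
    i.e., XOR of ANDs.
    """
    n_out = len(first_col)
    out = []
    for i in range(n_out):
        acc = 0
        for j in range(len(bits)):
            # Entry (i, j) of a Toeplitz matrix depends only on i - j.
            entry = first_col[i - j] if i >= j else first_row[j - i]
            acc ^= entry & bits[j]
        out.append(acc)
    return out

rng = random.Random(42)
corrected_key = [rng.randrange(2) for _ in range(64)]
# Shrink 64 reconciled bits to 48 final bits, discarding the 16 bits
# of entropy assumed lost to parity disclosures (numbers illustrative).
col = [rng.randrange(2) for _ in range(48)]
row = [rng.randrange(2) for _ in range(64)]
final_key = toeplitz_hash(corrected_key, col, row)
print(len(final_key))  # 48
```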
Authentication allows Alice and Bob to guard against “man in the middle attacks,” i.e., allows Alice to ensure that she is communicating with Bob (and not Eve) and vice versa. Authentication must be performed on an ongoing basis for all key management traffic, since Eve may insert herself into the conversation between Alice and Bob at any stage in their communication. The original BB84 paper described the authentication problem and sketched a solution to it based on universal families of hash functions, introduced by Wegman and Carter. This approach requires Alice and Bob to already share a small secret key, which is used to select a hash function from the family to generate an authentication hash of the public correspondence between them. By the nature of universal hashing, any party who didn’t know the secret key would have an extremely low probability of being able to forge the correspondence, even an adversary with unlimited computational power. The drawback is that the secret key bits cannot be re-used even once on different data without compromising the security. Fortunately, a complete authenticated conversation can validate a large number of new, shared secret bits from QKD, and a small number of these may be used to replenish the pool.
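A toy version of Wegman-Carter-style authentication can be sketched as follows. The field size, key values, and transcript are illustrative only; a real system would draw a fresh (r, s) pair from the shared secret pool for every message, which is exactly why the pool must be replenished.

```python
# Hash the transcript with a polynomial over a prime field selected
# by one half of the one-time key (r), then mask the hash with the
# other half (s). This is a sketch, far smaller than a secure
# deployment would use.
P = (1 << 61) - 1  # a Mersenne prime serving as the field modulus

def poly_hash(message: bytes, r: int) -> int:
    """Evaluate the message's bytes as a polynomial at the point r."""
    acc = 0
    for byte in message:
        acc = (acc * r + byte + 1) % P  # +1 so trailing zeros matter
    return acc

def tag(message: bytes, r: int, s: int) -> int:
    """One-time authentication tag: universal hash masked by pad s."""
    return (poly_hash(message, r) + s) % P

transcript = b"sift-response: bases and parities for round 17"
r, s = 0x1234567890AB, 0x0FEDCBA98765  # one-time shared secret halves
t = tag(transcript, r, s)
# Bob recomputes the tag with the same (r, s); without knowing r, a
# forger's success probability is about len(message)/P, regardless
# of computing power.
assert t == tag(transcript, r, s)
```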
There are many further details in a practical system which we will only mention in passing, including symmetrically authenticating both parties, limiting the opportunities for Eve to force exhaustion of the shared secret key bits, and adapting the system to network asynchrony and retransmissions. Another important point: it is insufficient to authenticate just the QKD protocols; we must also apply these techniques to authenticate the VPN data traffic.
The quantum cryptography protocols described above provide two parties with nearly identical secret keys, along with an estimate of the discrepancy between the two copies of the key. In quantum cryptographic systems, this discrepancy can be due either to third-party eavesdropping or to imperfections in the transmission line and/or equipment. For security purposes, the discrepancy is always assumed to come from third-party eavesdropping, since there is no way to distinguish its true cause.
Using the BB84 encoding scheme, Alice and Bob check for eavesdropping by comparing a portion of the remaining bits that they would otherwise use for a shared secret key. If a third party were eavesdropping, errors would be introduced into Bob’s photon measurements. If a number of bits beyond a threshold differ, the derived key is discarded and the QKD (Quantum Key Distribution) process is repeated. Using the B92 encoding scheme, Bob can determine whether a third party was eavesdropping directly from his measurements of the photon. As stated in the previous section, if the measurement of the photon is greater than 1 then the receiver can be certain that no one was eavesdropping; otherwise the photon is discarded.
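The BB84 check described above amounts to publicly comparing a random sample of the sifted bits and aborting if the estimated error rate exceeds a threshold. A minimal sketch, with all names ours; the 11% figure is the commonly quoted asymptotic abort threshold for BB84, and the sampled bits must of course be discarded afterwards since Eve has seen them.

```python
import random

def estimate_qber(alice_bits, bob_bits, sample_frac=0.1, seed=1):
    """Estimate the quantum bit error rate from a random public sample."""
    rng = random.Random(seed)
    n = len(alice_bits)
    idx = rng.sample(range(n), int(n * sample_frac))
    errors = sum(alice_bits[i] != bob_bits[i] for i in idx)
    return errors / len(idx)

THRESHOLD = 0.11  # commonly quoted abort threshold for BB84

alice = [0, 1] * 500
bob = list(alice)
bob[3] ^= 1  # a single channel error
qber = estimate_qber(alice, bob)
if qber > THRESHOLD:
    print("abort: possible eavesdropping")
else:
    print("continue: estimated QBER =", qber)
```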
Using the Ekert encoding scheme, Bob and Alice check for eavesdropping by comparing their rejected bits, called the rejected key. If their comparison satisfies Bell’s inequality, then a third party has been detected and the entire process is repeated. Otherwise the raw key is retained. Bell’s inequality is essentially a method for determining the probability that two bits do not match given the measurement basis used.
Given an error rate between the shared keys, two processes can be used to remove the erroneous bits and to reduce a third party’s knowledge of the key to a negligibly small value. The two processes are information reconciliation and privacy amplification, which are used together.
Broadly stated, QKD (Quantum Key Distribution) offers a technique for coming to agreement upon a shared random sequence of bits within two distinct devices, with a very low probability that other devices (eavesdroppers) will be able to make successful inferences as to those bits’ values. In specific practice, such sequences are then used as secret keys for encoding and decoding messages between the two devices. Viewed in this light, QKD is quite clearly a key distribution technique, and one can rate QKD’s strengths against a number of important goals for key distribution, as summarized in the following paragraphs.
Confidentiality of Keys: Confidentiality is the main reason for interest in QKD. Public key systems suffer from an ongoing uncertainty as to whether decryption is mathematically intractable. Thus key agreement primitives widely used in today’s Internet security architecture, e.g., Diffie-Hellman, may perhaps be broken at some point in the future. This would not only hinder future ability to communicate but could also reveal past traffic. Classic secret key systems have suffered from different problems, namely, insider threats and the logistical burden of distributing keying material. Assuming that QKD techniques are properly embedded into an overall secure system, they can provide automatic distribution of keys that may offer security superior to that of its competitors.
Authentication: QKD does not in itself provide authentication. Current strategies for authentication in QKD systems include prepositioning of secret keys at pairs of devices, to be used in hash-based authentication schemes, or hybrid QKD-public key techniques. Neither approach is entirely appealing. Prepositioned secret keys require some means of distributing these keys before QKD itself begins, e.g., by human courier, which may be costly and logistically challenging. Furthermore, this approach appears open to denial of service attacks in which an adversary forces a QKD system to exhaust its stockpile of key material, at which point it can no longer perform authentication. On the other hand, hybrid QKD-public key schemes inherit the possible vulnerabilities of public key systems to cracking via quantum computers or unexpected advances in mathematics.
Sufficiently Rapid Key Delivery: Key distribution systems must deliver keys fast enough so that encryption devices do not exhaust their supply of key bits. This is a race between the rate at which keying material is put into place and the rate at which it is consumed for encryption or decryption activities. Today’s QKD systems achieve on the order of 1,000 bits/second throughput for keying material, in realistic settings, and often run at much lower rates. This is unacceptably low if one uses these keys in certain ways, e.g., as one-time pads for high speed traffic flows. However it may well be acceptable if the keying material is used as input for less secure (but often secure enough) algorithms such as the Advanced Encryption Standard. Nonetheless, it is both desirable and possible to greatly improve upon the rates provided by today’s QKD technology.
Robustness: This has not traditionally been taken into account by the QKD community. However, since keying material is essential for secure communications, it is extremely important that the flow of keying material not be disrupted, whether by accident or by the deliberate acts of an adversary (i.e. by denial of service). Here QKD has provided a highly fragile service to date since QKD techniques have implicitly been employed along a single point-to-point link. If that link were disrupted, whether by active eavesdropping or indeed by fiber cut, all flow of keying material would cease. In our view a meshed QKD network is inherently far more robust than any single point-to-point link since it offers multiple paths for key distribution.
Distance- and Location-Independence: In the ideal world, any entity can agree upon keying material with any other (authorized) entity in the world. Rather remarkably, the Internet’s security architecture does offer this feature – any computer on the Internet can form a security association with any other, agreeing upon keys through the Internet IPsec protocols. This feature is notably lacking in QKD, which requires the two entities to have a direct and unencumbered path for photons between them, and which can only operate for a few tens of kilometers through fiber.
Resistance to Traffic Analysis: Adversaries may be able to perform useful traffic analysis on a key distribution system, e.g., a heavy flow of keying material between two points might reveal that a large volume of confidential information flows, or will flow, between them. It may thus be desirable to impede such analysis. Here QKD in general has had a rather weak approach, since most setups have assumed dedicated, point-to-point QKD links between communicating entities, which clearly exposes the underlying key distribution relationships.
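The key-delivery race described under Sufficiently Rapid Key Delivery can be made concrete with a little arithmetic, using the illustrative rates from the text (the function name is ours):

```python
def rekey_interval_seconds(qkd_bits_per_s: float, key_bits: int) -> float:
    """Seconds needed to accumulate one symmetric key of key_bits,
    given the QKD link's sustained key-delivery rate."""
    return key_bits / qkd_bits_per_s

# At ~1,000 bits/s of distilled key, a 256-bit AES key can be
# replaced roughly four times per second:
print(rekey_interval_seconds(1000, 256))  # 0.256

# One-time-padding a 1 Gb/s traffic flow, by contrast, would consume
# a billion key bits per second -- about a factor of 10^6 beyond
# today's QKD rates:
print(1e9 / 1000)  # 1000000.0
```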
Quantum cryptography makes use of the quantum-mechanical behavior of nature for the design and analysis of cryptographic schemes. Optimally (but not always), quantum cryptography allows for the design of cryptographic schemes whose security is guaranteed solely by the laws of nature. This is in sharp contrast to standard cryptographic schemes, which can be broken in principle, i.e., when given sufficient computing power. From a theory point of view, quantum cryptography offers a beautiful interplay between the mathematics of adversarial behavior and quantum information theory. We discuss the traditional application of quantum cryptography, quantum key distribution (QKD), from a modern perspective, and we discuss some recent developments in the context of quantum two-party computation (2PC). QKD allows two distant parties to communicate in a provably-secure way in the presence of an outside eavesdropper, whereas 2PC is concerned with protecting information against possibly malicious insiders. We show the basic idea of constructing quantum cryptographic schemes, but we also show some connections to quantum information theory as needed for the rigorous security analyses, and we discuss some of the relevant quantum-information-theoretic results.
The security of most of the cryptographic schemes currently used relies on unproven computational complexity assumptions (like the assumed hardness of factoring large numbers), combined with an assumed bound on a potential attacker’s computing power. This complexity-theoretic approach of designing cryptographic schemes leads to very practical solutions but obviously has its downside: one cannot be fully certain about the security of the scheme! Indeed, the underlying computational complexity assumption might be broken from one day to another (e.g. by an efficient factoring algorithm being discovered) since complexity theory is still far from being able to prove some computational problem to be “hard” in the sense needed. Furthermore, it is known that the standard complexity assumptions used in practice (factoring and computing discrete logs) break down as soon as a quantum computer can be built. Finally, even if it is computationally infeasible for an attacker to extract sensitive data from the information available to him at the time the cryptographic scheme is used, the attacker can still store, say, an intercepted ciphertext and wait until computer technology has advanced enough so that he eventually can recover the data that was to be protected. This clearly poses a serious threat to long-term highly-sensitive data.
Optimally, but not always, quantum cryptography allows for the design of cryptographic schemes that can be proven secure under the sole assumption that the laws of quantum mechanics are correct—or that they at least describe sufficiently well the behavior of certain particles like photons or spin-1/2 particles, which would be used to implement the quantum-cryptographic schemes. For instance, the search for a rigorous analysis of one of the first quantum cryptographic schemes led to important insights into quantum information theory, which in turn proved to be useful for the design of new quantum cryptographic schemes.
On the other hand, we also want to present quantum cryptography as an exact mathematical science that combines elements from classical cryptography, information theory and quantum mechanics. Therefore, besides the quantum-cryptographic schemes we show, we also discuss the theoretical foundations needed to rigorously understand and prove their security. These are quantum-information-theoretic results, specifically developed for the analysis of quantum-cryptographic schemes, but can be appreciated in their own right as providing interesting insight into the theory of quantum information. For instance, we show a meaningful way to measure the uncertainty that some piece of classical (meaning non-quantum) data contains when given a correlated quantum state, and we show that this measure determines the number of nearly-random-and-independent bits that can be extracted from the classical data. Also, we show a variant of the uncertainty principle that expresses the amount of uncertainty in terms of the above measure.
As for specific quantum cryptographic results, we focus in this article on the question of tackling classical (i.e. non-quantum) cryptographic tasks by quantum cryptographic means, such as how to securely communicate a classical private message using a quantum channel. Specifically, we focus on quantum key distribution (QKD), which is the traditional application of quantum cryptography, and on recent new developments in the context of quantum two-party computation (2PC).
QKD allows two parties, Alice and Bob, to agree on a secret key K by public communication, i.e., even if an attacker Eve can access the complete conversation between Alice and Bob. By the laws of quantum mechanics, it is guaranteed that the agreed-upon secret key K is (close to) random and independent of Eve’s (quantum) view. As such, K can then be safely used, for instance, as the encryption key for a (possibly perfectly-secure) encryption scheme to securely communicate a private message via the public communication channel.
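As a minimal illustration of the last point, a key K with the stated properties can be used as a one-time pad, which is perfectly secure provided the key is as long as the message and never reused (the key bytes below are a stand-in for actual QKD output):

```python
def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each message byte with a fresh key byte.

    Encryption and decryption are the same operation; security holds
    only if the key is uniform, at least as long as the message, and
    used exactly once.
    """
    assert len(key) >= len(data), "one-time pad key must cover the message"
    return bytes(d ^ k for d, k in zip(data, key))

shared_key = bytes([0x5A, 0x13, 0xC7, 0x88, 0x2E])  # stand-in for QKD output
ciphertext = otp_xor(b"hello", shared_key)
assert otp_xor(ciphertext, shared_key) == b"hello"  # XOR inverts itself
```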
2PC, on the other hand, is concerned with protecting information against inside attackers. Unfortunately, quantum cryptographic 2PC schemes whose security is guaranteed by the laws of quantum mechanics alone do not exist (unless one settles for a very low level of security); instead, some “technological restriction” needs to be assumed about the attacker: for instance, that he cannot reliably store arbitrarily many, say, photons without affecting their polarization. While the theory of quantum physics permits storing quantum states, doing so in the form of photons, for instance, is technically very challenging and essentially impossible with current technology. It is thus reasonable to base security upon such a restriction.