# Posts Tagged ‘Linear-Feedback Shift Register’

### QKD Protocol Implementations

Quantum cryptography involves a surprisingly elaborate suite of specialized protocols, which we term “QKD protocols.” Many aspects of these protocols are unusual – both in motivation and in implementation – and may be of interest to specialists in communications protocols.

Fig. 1. Effects of an unbalanced interferometer on a photon

We have designed this engine so it is easy to “plug in” new protocols, and expect to devote considerable time in coming years to inventing new QKD (Quantum Key Distribution) protocols and trying them in practice. As shown in Fig. 1, these protocols are best described as sub-layers within the QKD protocol suite. Note, however, that these layers do not correspond in any obvious way to the layers in a communications stack, e.g., the OSI layers. As will be seen, they are in fact closer to being pipeline stages.

**Sifting** is the process whereby Alice and Bob winnow away all the obvious “failed qubits” from a series of pulses. As described in the introduction to this section, these failures include those qubits where Alice’s laser never transmitted, Bob’s detectors didn’t work, photons were lost in transmission, and so forth. They also include those symbols where Alice chose one basis for transmission but Bob chose the other for receiving.

At the end of this round of protocol interaction – i.e., after a sift and sift response transaction – Alice and Bob discard all the useless symbols from their internal storage, leaving only those symbols that Bob received and for which Bob’s basis matches Alice’s. In general, sifting dramatically prunes the number of symbols held by Alice and Bob. For instance, assume that 1% of the photons that Alice tries to transmit are actually received at Bob and that the system noise rate is 0. On average, Alice and Bob will happen to agree on a basis 50% of the time in BB84. Thus only 50% x 1% of Alice’s photons give rise to a sifted bit, i.e., 1 photon in 200. A transmitted stream of 1,000 bits therefore would boil down to about 5 sifted bits.
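The yield arithmetic above can be checked with a small simulation. The 1% detection probability and the uniformly random basis choices are the assumptions stated in the text; everything else here is illustrative:

```python
import random

random.seed(1)

N = 200_000        # pulses Alice attempts to transmit
P_DETECT = 0.01    # assumed: 1% of Alice's photons register at Bob

sifted = 0
for _ in range(N):
    alice_basis = random.choice("+x")          # rectilinear or diagonal
    bob_basis = random.choice("+x")
    detected = random.random() < P_DETECT
    if detected and alice_basis == bob_basis:  # keep only detected, basis-matched pulses
        sifted += 1

# Expected yield: 50% basis agreement x 1% detection = 1 sifted bit per 200 pulses
print(sifted / N)
```

Running this gives a sifted fraction close to 0.005, matching the 1-in-200 estimate in the text.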

**Error correction** allows Alice and Bob to determine all the “error bits” among their shared, sifted bits, and correct them so that Alice and Bob share the same sequence of error-corrected bits. Error bits are ones that Alice transmitted as a 0 but Bob received as a 1, or vice versa. These bit errors can be caused by noise or by eavesdropping.

Error correction in quantum cryptography has a very unusual constraint, namely, evidence revealed in error detection and correction (e.g., parity bits) must be assumed to be known to Eve, and thus to reduce the hidden entropy available for key material. As a result, there is very strong motivation to design error detection and correction codes that reveal as little as possible in their public control traffic between Alice and Bob. Our first approach for error correction is a novel variant of the Cascade protocol and algorithms. The protocol is adaptive, in that it will not disclose too many bits if the number of errors is low, but it will accurately detect and correct a large number of errors (up to some limit) even if that number is well above the historical average.
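Cascade-style protocols isolate an error bit by a parity bisection step: when a block’s parities disagree, the parties repeatedly halve it until one error is pinned down. The sketch below illustrates that step under stated assumptions – a single error in the block, and a hypothetical `bob_parity_of` callback standing in for the network exchange – and is not BBN’s exact algorithm:

```python
def locate_error(alice_bits, bob_parity_of):
    """Binary search for one error bit in a block whose parities disagree.

    bob_parity_of(lo, hi) stands in for a parity query sent to Bob over
    the public channel; each query reveals one parity bit to Eve.
    Returns (error index, number of parity bits revealed).
    """
    lo, hi = 0, len(alice_bits)
    revealed = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        alice_par = sum(alice_bits[lo:mid]) % 2
        revealed += 1
        if alice_par != bob_parity_of(lo, mid):
            hi = mid    # parities differ: the error is in the left half
        else:
            lo = mid    # parities match: the error is in the right half
    return lo, revealed
```

Each query leaks one parity bit, so locating one error in an n-bit block costs about log2(n) revealed bits, a cost that must later be repaid in privacy amplification.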

Our version works by defining a number of subsets (currently 64) of the sifted bits and forming the parities of each subset. In the first message, the list of subsets and their parities is sent to the other side, which then replies with its version of the parities. The subsets are pseudo-random bit strings from a Linear-Feedback Shift Register (LFSR) and are identified by a 32-bit seed for the LFSR. Where the two sides’ parities for a subset disagree, a binary search over subranges of that subset locates an error bit. Once an error bit has been found and fixed, both sides inspect their records of subsets and subranges, and flip the recorded parity of those that contained that bit. This will clear up some discrepancies but may introduce new ones, and so the process continues.
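The subset mechanism can be sketched as follows. The LFSR taps used here (encoding the primitive polynomial x^32 + x^22 + x^2 + x + 1) are illustrative, not necessarily the ones BBN chose; the point is that a 32-bit seed alone identifies an entire pseudo-random subset, so only the seed and the parity need to cross the channel:

```python
def lfsr_stream(seed, n, taps=0x80200003):
    """Galois LFSR over a 32-bit state; emits n pseudo-random bits.

    The default taps constant encodes x^32 + x^22 + x^2 + x + 1
    (an illustrative primitive polynomial, not necessarily BBN's).
    """
    state = seed & 0xFFFFFFFF
    bits = []
    for _ in range(n):
        lsb = state & 1
        bits.append(lsb)
        state >>= 1
        if lsb:
            state ^= taps
    return bits

def subset_parity(sifted_bits, seed):
    """Parity of the subset of sifted bits selected by the LFSR mask."""
    mask = lfsr_stream(seed, len(sifted_bits))
    return sum(b for b, m in zip(sifted_bits, mask) if m) % 2
```

Alice sends her 64 seeds and parities; Bob replies with his parities for the same seeds. Any subset whose parities differ contains an odd number of error bits, and since a pseudo-random subset covers each position with probability about one half, even a single error shows up in roughly half the subsets on average.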

Since these parity fields are revealed in the interchange of “error correction” messages between Alice and Bob, these bits must be taken as known to Eve. Therefore, the QKD protocol engine records the amount of information revealed (lost) through parity fields, and later requires a compensating level of privacy amplification to reduce Eve’s knowledge to acceptable levels.

**Privacy amplification** is the process whereby Alice and Bob reduce Eve’s knowledge of their shared bits to an acceptable level. This technique is also often called advantage distillation.
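One standard construction – a sketch, not necessarily BBN’s choice – hashes the error-corrected bits through a random binary matrix, a 2-universal family. Both sides derive the same shorter key because they share the matrix seed over the authenticated channel; the matrix itself may be public:

```python
import random

def privacy_amplify(bits, out_len, matrix_seed):
    """Compress error-corrected bits to out_len final key bits by
    multiplying (mod 2) with a pseudo-random binary matrix.

    out_len should be the corrected-key length minus the bits assumed
    known to Eve (parity leakage, eavesdropping bound) and a safety margin.
    """
    rng = random.Random(matrix_seed)    # both sides seed identically
    key = []
    for _ in range(out_len):
        row = [rng.getrandbits(1) for _ in bits]          # one public matrix row
        key.append(sum(b & r for b, r in zip(bits, row)) % 2)
    return key
```

Knowing the matrix does not help Eve: because each output bit is a parity over many input bits, her partial knowledge of the input does not determine the much shorter output.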

**Authentication** allows Alice and Bob to guard against “man in the middle attacks,” i.e., allows Alice to ensure that she is communicating with Bob (and not Eve) and vice versa. Authentication must be performed on an ongoing basis for all key management traffic, since Eve may insert herself into the conversation between Alice and Bob at any stage in their communication. The original BB84 paper described the authentication problem and sketched a solution to it based on universal families of hash functions, introduced by Wegman and Carter. This approach requires Alice and Bob to already share a small secret key, which is used to select a hash function from the family to generate an authentication hash of the public correspondence between them. By the nature of universal hashing, any party who didn’t know the secret key would have an extremely low probability of being able to forge the correspondence, even an adversary with unlimited computational power. The drawback is that the secret key bits cannot be re-used even once on different data without compromising the security. Fortunately, a complete authenticated conversation can validate a large number of new, shared secret bits from QKD, and a small number of these may be used to replenish the pool.
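A minimal sketch of the Wegman–Carter idea follows: an almost-universal polynomial hash over a prime field, with the tag masked by a one-time value. The particular hash family here is an assumption for illustration, not the one in the BB84 paper; what matters is that the hash key and the mask come from the shared secret pool and are never reused:

```python
P = (1 << 61) - 1   # Mersenne prime defining the hash field

def poly_hash(message: bytes, k: int) -> int:
    """Evaluate the message, split into 7-byte blocks, as a polynomial at k."""
    acc = 0
    for i in range(0, len(message), 7):
        block = int.from_bytes(message[i:i + 7], "big")   # block < 2**56 < P
        acc = (acc * k + block) % P
    return acc

def authenticate(message: bytes, k: int, otp: int) -> int:
    """Wegman-Carter tag: a universal hash masked with a one-time pad value.
    Both k and otp are drawn from the shared secret pool and never reused."""
    return (poly_hash(message, k) + otp) % P
```

Without k, a forgery attempt succeeds with probability on the order of the message’s block count divided by P, regardless of the adversary’s computing power, which is why QKD must keep replenishing the pool of k and otp values.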

There are many further details in a practical system which we will only mention in passing, including symmetrically authenticating both parties, limiting the opportunities for Eve to force exhaustion of the shared secret key bits, and adapting the system to network asynchrony and retransmissions. Another important point: it is insufficient to authenticate just the QKD protocols; we must also apply these techniques to authenticate the VPN data traffic.