Friday, July 15, 2011

Key management for public keys

Public, or asymmetric, key algorithms do not require the exchange of confidential material or the prior provisioning of both sides with a preshared secret. Instead, each participant in the cryptographic algorithm generates a pair of keys. One key, called the private key, is never disclosed to any other party. The other key, called the public key (from which this class of algorithms takes its name), is not confidential and can be sent over unencrypted connections to other parties. For most public key algorithms, the public and private keys are calculated from random numbers that the owner generates autonomously using a good, cryptographically secure pseudorandom number generator. The key generation algorithm for the specific public key scheme then operates on those random numbers to produce the public and private keys. The random numbers ensure that the private key is computationally infeasible to guess, while the key generation algorithm establishes the mathematical connection between the private key and the public key that allows the cryptographic algorithm to work. Public key algorithms are sometimes called asymmetric algorithms because the need for confidentiality on the keys is asymmetric: the public key can be exposed but the private key cannot, unlike shared key algorithms, where the confidentiality requirement is symmetric and the shared key must be kept confidential by both sides.
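To make the mechanics concrete, here is a minimal sketch of key pair generation in Python using the third-party cryptography package; the library choice and the use of Ed25519 are assumptions for illustration, and the same principle applies to any public key scheme.

# Sketch: generating an asymmetric key pair. Assumes the third-party
# "cryptography" package, whose key generation draws on the operating
# system's cryptographically secure random number generator.
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()  # never disclosed
public_key = private_key.public_key()               # safe to publish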

The two keys are used in different ways for different security services. For data origin authentication, the owner of the key pair generates a digital signature on data using the private key; the receiver then verifies the origin and integrity of the data using the public key. For confidentiality protection, the sender of a confidential message uses the owner's public key to encrypt the data, and only the owner of the key pair can decrypt it using the private key, thereby protecting the data in transit. As mentioned above, in addition to the cryptographic algorithms for data origin authentication and confidentiality protection, public key schemes also require a key generation algorithm to produce the mathematically linked key pair.
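A sketch of both services follows, again assuming the Python cryptography package; the messages and parameter choices are illustrative only.

# Sketch of the two security services, assuming the "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ed25519, padding, rsa

# Data origin authentication: sign with the private key, verify with the public.
signer = ed25519.Ed25519PrivateKey.generate()
message = b"routing update"
signature = signer.sign(message)
signer.public_key().verify(signature, message)  # raises InvalidSignature if tampered

# Confidentiality: encrypt with the recipient's public key, decrypt with the private.
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = recipient.public_key().encrypt(b"session secret", oaep)
plaintext = recipient.decrypt(ciphertext, oaep)  # only the key owner can do this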

Principles of secure key management protocols

In general, an existing key management protocol having the right characteristics for the application at hand should always be preferred to developing a new key management protocol from scratch. Security protocols usually get better over time because the bugs in them are found and fixed as more and more applications use them. So older protocols – provided they are not so poorly designed as to be in effect insecure – are usually better understood and therefore better to reuse. Of course, to reuse an existing protocol, the assumptions and constraints on the protocol must be carefully noted and not violated; otherwise, a secure protocol can easily be converted into an insecure one.

If an existing protocol is not a good match for a particular system, a new protocol is required. The following principles, discussed in more detail in RFC 4962 (RFC 4962, 2007), have proven successful in mitigating threats in practice and should be kept in mind if a new protocol is developed. These principles are primarily of relevance to key management protocols that provision or derive shared keys:

Confidentiality protection, replay detection, and authentication are required for key distribution or exchange protocols over the network. Keys are confidential material, and therefore proper security protection is required. In order to prevent spoofing, both sides in the key exchange must be mutually authenticated, so that each side knows the identity of the party it is exchanging keys with. Finally, replay protection is required so that an attacker cannot resend an old key exchange message captured by snooping a prior exchange, and thereby force an old session key into use or disrupt an ongoing session.
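As a rough illustration, the following Python sketch shows one common way to get data origin authentication and replay detection on key distribution messages, assuming the two sides already share an authentication key; all names are hypothetical, and encryption of the payload is omitted for brevity.

# Sketch: authentication plus replay detection for key-distribution messages.
import hashlib
import hmac
import secrets

auth_key = secrets.token_bytes(32)   # pre-established, shared by both sides
seen_nonces = set()                  # receiver's record; bounded by a window in practice

def send(payload: bytes) -> tuple[bytes, bytes, bytes]:
    nonce = secrets.token_bytes(16)  # fresh per message
    tag = hmac.new(auth_key, nonce + payload, hashlib.sha256).digest()
    return nonce, payload, tag

def receive(nonce: bytes, payload: bytes, tag: bytes) -> bytes:
    expected = hmac.new(auth_key, nonce + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failure")
    if nonce in seen_nonces:
        raise ValueError("replay detected")
    seen_nonces.add(nonce)
    return payload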

The cryptographic algorithm used in a security protocol should be negotiable. The security of cryptographic algorithms is not fixed; it often depends on the processing power available to an adversary, and that power is always increasing. Making the algorithms negotiable allows parties in the exchange to use the most secure algorithm consistent with their hardware processing power and software implementation availability. In addition, negotiations for selecting a cryptographic algorithm must be performed between authenticated entities, and the negotiation messages must be covered by data origin authentication. This prevents an attacker from spoofing one side of the conversation into accepting a weak cryptographic algorithm that the attacker is able to compromise.
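One possible shape for downgrade-resistant negotiation is sketched below in Python; the algorithm names and the idea of MACing the offer transcript with a pre-established authentication key are illustrative assumptions, not any specific protocol's design.

# Sketch: negotiation whose transcript is covered by data origin authentication,
# so an attacker who strips strong algorithms from an offer is detected.
import hashlib
import hmac

PREFERENCE = ["aes256-gcm", "aes128-gcm", "3des-cbc"]  # strongest first

def select(mine: list[str], peer: list[str]) -> str:
    # Pick the strongest algorithm both sides support.
    for alg in PREFERENCE:
        if alg in mine and alg in peer:
            return alg
    raise ValueError("no common algorithm")

def transcript_tag(auth_key: bytes, mine: list[str], peer: list[str]) -> bytes:
    transcript = ("|".join(mine) + "#" + "|".join(peer)).encode()
    return hmac.new(auth_key, transcript, hashlib.sha256).digest()

# Each side recomputes the tag over the offers it actually saw; a mismatch
# means someone altered a list in transit to force a weak algorithm.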

Keys need to be kept strong and fresh. Key freshness means that keys are generated whenever a new session is started, and are periodically renewed. A key must be generated specifically for its intended use, and the material that goes into calculating the key must be new. In addition, there must be no dependency between keys such that disclosure of a previous key compromises keys generated later. Key strength, which is usually a function of the number of bits in the key, must be high enough that the probability of a guessing attack or other compromise is very low. Since the limits of key compromise shift as computational power increases, protocol developers must be aware of the state of the art in cryptanalysis with respect to key length in order to make wise choices.
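As an illustration, the sketch below derives per-session keys with HKDF from the Python cryptography package; mixing fresh nonces from both sides into the derivation gives freshness, and the one-way derivation means disclosure of one session key does not reveal the master secret or later keys. The protocol label is hypothetical.

# Sketch: fresh, independent session keys, assuming the "cryptography" package.
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key(master_secret: bytes, client_nonce: bytes, server_nonce: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=client_nonce + server_nonce,      # fresh material each session
                info=b"example-protocol session key",  # binds the key to its use
                ).derive(master_secret)

k1 = session_key(b"m" * 32, secrets.token_bytes(16), secrets.token_bytes(16))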

A key in a shared key security association is confidential material, and therefore it should not be divulged, even intentionally, to an entity that does not need to know the key. An "entity" here means either a software module on the same node for which the key was derived or another network entity entirely. An entity has access to a key if it has access to all the cryptographic material needed to derive the key. The concept of a cryptographic boundary is useful in limiting key access. A cryptographic boundary is a topological scope within which the key is known, but outside of which it is kept secret. A cryptographic boundary encompassing a secure hardware chip is more secure than one encompassing a software module in the operating system kernel. Similarly, a cryptographic boundary encompassing a single node and its associated server is more secure than one consisting of the server and several other network entities such as wireless access points. The smaller the cryptographic boundary, the easier it is to limit potential compromises, and to detect compromises when they occur.
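A software analogue of a small cryptographic boundary can be sketched in Python; this is only illustrative, since a language runtime cannot truly enforce the boundary the way a secure hardware chip or a kernel-isolated module can.

# Sketch: shrinking the cryptographic boundary in software. The key lives
# only inside this object, and callers receive MACs, never the key bytes.
import hashlib
import hmac
import secrets

class MacOracle:
    def __init__(self) -> None:
        self.__key = secrets.token_bytes(32)  # never returned to callers

    def mac(self, message: bytes) -> bytes:
        # Only the result of the keyed operation crosses the boundary.
        return hmac.new(self.__key, message, hashlib.sha256).digest()

oracle = MacOracle()
tag = oracle.mac(b"attach request")  # the key itself never leaves the object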

Authorization checking is required, in addition to authentication. This prevents a terminal that can be authenticated from claiming a higher privilege or more services than it is entitled to. When more than one network entity is involved in the protocol, all must agree on the authorization for the requesting terminal.
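A minimal sketch of the idea in Python, with a hypothetical entitlement table provisioned out of band:

# Sketch: authorization as a separate step after authentication. The
# identity is assumed to come from an already-verified credential.
ENTITLEMENTS = {"terminal-17": {"dhcp", "dns"}}  # provisioned out of band

def authorize(authenticated_id: str, requested_service: str) -> bool:
    return requested_service in ENTITLEMENTS.get(authenticated_id, set())

assert authorize("terminal-17", "dns")        # authenticated and entitled
assert not authorize("terminal-17", "admin")  # authenticated, but not entitled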

Damage from key compromises should be limited. The compromise of a key is a serious problem, and, although well-designed security algorithms can prevent compromise by passive or active eavesdroppers, compromises that do not involve an attacker having access to network traffic are still possible. For example, an attacker can obtain a key by stealing the physical hardware device and extracting the key from it. Resistance to compromise propagation has many implications, but one is that authenticating entities should never share secret material, and new keys should be derived every time a terminal moves from one authenticating entity to another.
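One common way to achieve this kind of propagation resistance is to derive a distinct key per authenticating entity from a root key held only by the terminal and a trusted server, as sketched below with HKDF from the Python cryptography package; the identifiers are hypothetical.

# Sketch: per-authenticator keys, so extracting one access point's key
# reveals nothing about the keys held by any other access point.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def per_authenticator_key(root_key: bytes, authenticator_id: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"per-authenticator key|" + authenticator_id).derive(root_key)

k_ap1 = per_authenticator_key(b"r" * 32, b"access-point-1")
k_ap2 = per_authenticator_key(b"r" * 32, b"access-point-2")  # independent of k_ap1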
