What are algorithms? How are they important? An algorithm is a set of step-by-step instructions that a computer follows to perform a particular task. To implement a cryptographic mechanism such as encryption or hashing, you need an algorithm that performs mathematical operations on input data and keys. Now, can you recall some historic algorithms that we discussed in the previous lesson? Well, these are the Caesar and ROT13 ciphers. Let us now begin this lesson by exploring the algorithms available for various modern cryptographic mechanisms. The following screen explains the objectives covered in this lesson. After completing this lesson, you will be able to: • Identify and compare symmetric encryption algorithms, • Identify and compare public key or asymmetric encryption algorithms, • Identify and compare hashing algorithms, • Identify and compare transport encryption algorithms, • Identify and compare wireless encryption algorithms, and • Describe cipher suites and key stretching.

In this topic, you will learn about symmetric algorithms. To encrypt or decrypt data using symmetric encryption, a symmetric encryption algorithm is essential. Rather than building your own symmetric algorithm, you can select from several industry-standard algorithms using either stream or block ciphers. Symmetric cryptographic algorithms are faster and use much shorter keys than asymmetric ones, yet remain secure even at these smaller key sizes. Some of the most commonly used symmetric encryption algorithms are Advanced Encryption Standard, or AES; Data Encryption Standard, or DES; Triple Data Encryption Standard, or 3DES; Blowfish; Twofish; Rivest Cipher 4, or R-C-4; Pretty Good Privacy, or PGP, which is strictly an application built on symmetric ciphers; and one-time pads, or O-T-P. Modern symmetric algorithms come with key lengths ranging from 40 to 256 bits. As a result, the key spaces for these algorithms range from 2 raised to 40 to 2 raised to 256 keys. This range determines how vulnerable an algorithm is to a brute-force attack. A 40-bit key is far more vulnerable than a 256-bit key because its key space is vastly smaller. Further, with very short key lengths, an attacker can store the full key space on a server cluster, making it possible to crack the encryption in real time. We shall start by talking about Data Encryption Standard and Triple Data Encryption Standard. Data Encryption Standard, or D-E-S, is a block cipher selected by the U.S. government in the mid-1970s as the standard for all government communications. It is a 56-bit encryption algorithm featuring several modes for security and integrity. The DES algorithm involves a sequence of permutations and substitutions of bits, along with the encryption key. The same key and algorithm are used for encryption and decryption. Cryptographers have examined DES for more than 35 years and have found no significant design flaws beyond its short key length.
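Before going deeper into DES, the brute-force arithmetic behind key spaces can be made concrete. The sketch below is plain Python; the rate of one billion keys per second is an assumed illustrative attacker speed, not a figure from the lesson:

```python
# Key-space sizes for common symmetric key lengths, and the time a
# hypothetical attacker testing 10^9 keys/second would need on average
# to exhaust each space. Purely illustrative arithmetic.
for bits in (40, 56, 128, 256):
    keyspace = 2 ** bits
    seconds = keyspace / 1e9              # assumed 10^9 guesses per second
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{bits:3d}-bit key: {keyspace:.3e} keys, ~{years:.3e} years")
```

Even at a billion guesses per second, a 56-bit key space falls in a few years of effort, while 128 bits and beyond remain astronomically out of reach.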
Because it relies on simple mathematical functions, the DES algorithm can be implemented easily in hardware. The DES algorithm uses a long sequence of XOR operations to form the cipher text. This procedure is performed 16 times for every encryption and decryption task, meaning the algorithm performs 16 rounds of encryption. Each round re-arranges the order of the bits from the previous round and merges them with the key using the exclusive OR function. The algorithm accepts 64 bits of plain text but uses a fixed-length key of only 56 bits, as only 56 bits actually carry keying information. The remaining 8 bits contain parity information for ensuring the accuracy of the other 56 bits. If you use DES with the weaker 40-bit encryption, the key contains 16 known bits of padding, so the key length is still 56 bits but the effective key strength is only 40 bits. The algorithm comes with five modes of operation, namely Electronic Codebook or E-C-B, Cipher Block Chaining or C-B-C, Cipher Feedback or C-F-B, Output Feedback or O-F-B, and Counter or C-T-R. All these modes operate on plain text 64 bits at a time to create 64-bit cipher blocks. Electronic Codebook, or E-C-B, splits messages into blocks, each of which is encrypted separately. Here, two identical plaintext blocks encrypted with the same key produce identical cipher text blocks, which can allow an attacker to identify the traffic easily. For instance, an attacker can seize an ECB-protected login sequence of an administrator and then replay it. This risk is overcome by Cipher Block Chaining, or C-B-C. DES Modes – CBC In CBC, a randomly initialized fixed-size vector value is used at the beginning to ensure a unique message. Further, each 64-bit plaintext block is combined bitwise with the previous cipher text block using the exclusive OR, or X-O-R, function before being encrypted with the key.
Encryption of the same plain text block then results in dissimilar cipher text blocks, because the input to the cipher changes with every block even though the key stays the same. However, this mechanism is still weak against an extended brute-force attack. DES Modes – CFB Cipher Feedback, or C-F-B, converts the block cipher into a self-synchronizing stream cipher. Self-synchronizing means that losing a part of the cipher text makes the receiver lose only that part of the message. DES Modes – OFB In the case of Output Feedback, or O-F-B, the block cipher becomes a synchronous stream cipher. The key stream blocks are XORed with the plain text to generate cipher text. DES Modes – CTR Lastly, Counter, or C-T-R, produces the next key stream block by encrypting the successive values of a counter, keeping track of a nonce that acts as the initialization vector. Although several government agencies still use DES for cryptographic applications, the algorithm was actually superseded by the Advanced Encryption Standard in 2001. Do you know why? Let’s find out! Data Encryption Standard is considered insecure due to its small key size of only 56 bits. Recall the rule we learned in the previous lesson: the larger the key space, or number of bits in a key, the better the security. Today, the algorithm can be cracked in just a few minutes, through either precomputed lookup tables or a brute-force attack. Therefore, consider the DES algorithm only in scenarios where very short-term confidentiality is required. To be on the safer side, consider any other symmetric alternative to DES. One such improved alternative version of DES is Triple DES.
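Before turning to Triple DES, the mode behaviors described above can be illustrated with a toy 64-bit "block cipher" (here a simple XOR with a key-derived pad, shown purely for illustration and in no way secure):

```python
import hashlib

BLOCK = 8  # toy 64-bit block size, mirroring DES

def toy_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: XOR with a key-derived pad.
    # NOT secure; it only serves to demonstrate the modes of operation.
    return bytes(a ^ b for a, b in zip(block, hashlib.sha256(key).digest()))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret-k"
plaintext = b"ATTACK!!" * 2           # two identical 8-byte blocks
blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]

# ECB: identical plaintext blocks yield identical ciphertext blocks.
ecb = [toy_encrypt(key, b) for b in blocks]
print("ECB leaks repetition:", ecb[0] == ecb[1])   # True

# CBC: each block is XORed with the previous ciphertext block (IV first).
iv = b"\x01" * BLOCK
cbc, prev = [], iv
for b in blocks:
    c = toy_encrypt(key, xor(b, prev))
    cbc.append(c)
    prev = c
print("CBC hides repetition:", cbc[0] == cbc[1])   # False

# CTR: encrypt successive counter values to form a keystream, then XOR.
ctr = [xor(b, toy_encrypt(key, i.to_bytes(BLOCK, "big")))
       for i, b in enumerate(blocks)]
print("CTR hides repetition:", ctr[0] == ctr[1])   # False
```

ECB exposes the repetition of identical plaintext blocks, while CBC's chaining and CTR's per-block counter hide it.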

Triple Data Encryption Standard, or 3-D-E-S, is an improved block cipher version of Data Encryption Standard. As the name indicates, it performs three mathematical operations using three 56-bit keys to deliver up to 168-bit encryption. This means Triple Data Encryption Standard increases the key length to 168 bits by implementing three different keys, each of 56 bits. It follows the technique of applying the DES algorithm three times in a row to a plain text block. Triple DES uses the three-step procedure of Encrypt-Decrypt-Encrypt, or E-D-E, to encrypt plaintext. First, the message is encrypted with the first 56-bit key. Second, the data is decrypted with a second 56-bit key. Last, the data is encrypted again with a third 56-bit key. If all three keys are different, the result is 168-bit encryption; if the first and third keys are the same, the result is less-secure 112-bit encryption. Because the Triple DES algorithm is more secure than DES and is significantly harder to break than several other systems, it is still in use in government agencies. However, Triple DES gains far less resistance to cracking attempts than its tripled key length suggests, and it is not completely immune to attacks such as brute force. Further, it takes more time for encryption. Therefore, the American government has made Advanced Encryption Standard, or AES, its standard for encryption, and it is now the preferred symmetric algorithm for government applications. Considered the American government standard for sensitive but unclassified data exchange, Advanced Encryption Standard, or A-E-S, provides a higher level of security than the former Data Encryption Standard, or DES, algorithm. This is largely because of its greater key lengths of 128, 192, and 256 bits, of which 128 bits is the default.
Developed by Joan Daemen and Vincent Rijmen, AES uses the Rijndael block cipher algorithm and has replaced the DES and Triple DES algorithms as the new standard for symmetric encryption. While the AES specification supports encryption of only 128-bit blocks, the original Rijndael cipher goes further, allowing a block size equal to any of the aforementioned key lengths. The selected key length determines the number of encryption rounds: 128-bit keys need 10 rounds, 192-bit keys need 12 rounds, and 256-bit keys demand 14 rounds of encryption. Of these, Advanced Encryption Standard 256 is approved by the United States government for protecting information classified as Top Secret. As of early 2014, Advanced Encryption Standard has no known flaw or weakness and is yet to be cracked. It is perhaps the best encryption solution for most users and organizations. Selecting AES over other symmetric algorithms is better in terms of gaining long-term, reliable confidentiality and protection of data, whether stored or in transit. First, the AES algorithm has already replaced the DES and Triple DES algorithms, largely because of its greater key length, which makes AES stronger than DES and Triple DES. Second, AES runs faster and is more efficient than the other two on comparable hardware. This makes AES more feasible for low-latency and high-throughput environments, particularly in the case of pure software encryption. It is specifically designed for economical implementation in software or hardware on a range of processors. Despite these benefits, you may find Triple DES more trusted in terms of strength. This is because AES is a relatively newer algorithm that cryptographers have had less time to scrutinize in detail. This is where the golden rule of cryptography applies: use a proven and mature algorithm instead of a new and young one.
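The round counts quoted above follow a simple pattern from the AES specification: the number of rounds equals the key length in 32-bit words plus six. A one-line sketch:

```python
# AES round counts derive directly from the key length:
# rounds = (key bits / 32) + 6, per the AES specification (FIPS 197).
def aes_rounds(key_bits: int) -> int:
    if key_bits not in (128, 192, 256):
        raise ValueError("AES keys are 128, 192, or 256 bits")
    return key_bits // 32 + 6

for bits in (128, 192, 256):
    print(f"AES-{bits}: {aes_rounds(bits)} rounds")
# AES-128: 10 rounds, AES-192: 12 rounds, AES-256: 14 rounds
```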

Let’s now explore the Blowfish and Twofish symmetric algorithms. Invented by Bruce Schneier, Blowfish is a symmetric block cipher alternative to the DES algorithm. The high-speed algorithm utilizes keys of variable length, ranging from 32 to 448 bits, but operates on 64-bit blocks just like its predecessors. Its span of variable key lengths is what sets it apart from others, ranging right from an insecure 32 bits to a highly strong 448 bits. While it is true that longer keys take longer encryption and decryption time, time trials have established that Blowfish is faster than Data Encryption Standard, or DES, and International Data Encryption Algorithm, or IDEA. However, it becomes slow when keys are changed. Further, Blowfish encryption has a memory footprint of just over 4 kilobytes of RAM. Performing 16 rounds, the Blowfish round function divides its 32-bit input into four 8-bit blocks, which act as inputs to the substitution boxes. Each S, or substitution, box accepts an 8-bit input and generates a 32-bit output, and the outputs are added and XORed to produce the ultimate 32-bit output. Because the subkeys and S-boxes are recomputed for every new key, the algorithm is difficult to break. Released for public use without a license, Blowfish encryption is built into several operating systems and other commercial software products. Consider using Blowfish for encryption only if you are using key lengths of at least 128 bits. However, you should avoid using Blowfish to encrypt files larger than 4 GB due to the algorithm’s small 64-bit block size. Further, Blowfish is vulnerable to attacks when weak keys are used, as a class of weak keys is known to exist. This means you must select keys carefully or switch to a more modern option such as AES, or to Blowfish’s more modern successor, Twofish, which has no weak keys at all.
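The round-function structure described above, four S-box lookups combined by addition and XOR, can be sketched as follows. The S-box contents here are random placeholders; real Blowfish derives them from the digits of pi and the user key:

```python
import random

MASK32 = 0xFFFFFFFF

# Placeholder S-boxes: real Blowfish fills these from the digits of pi
# and mixes in the user key. Random values are used purely to illustrate
# the shape of the round function.
rng = random.Random(0)
S = [[rng.getrandbits(32) for _ in range(256)] for _ in range(4)]

def blowfish_f(x: int) -> int:
    """Blowfish-style round function: split a 32-bit word into four bytes,
    look each up in its own S-box, then combine with add/XOR/add mod 2^32."""
    a = (x >> 24) & 0xFF
    b = (x >> 16) & 0xFF
    c = (x >> 8) & 0xFF
    d = x & 0xFF
    return ((((S[0][a] + S[1][b]) & MASK32) ^ S[2][c]) + S[3][d]) & MASK32

print(f"F(0x01234567) = {blowfish_f(0x01234567):#010x}")
```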
The non-patented Twofish algorithm, also invented by Bruce Schneier, is an improved version of the Blowfish algorithm. It is a block cipher operating on 128-bit blocks of plain text using cryptographic keys of up to 256 bits in length. Compared to Blowfish, Twofish has a more complex key schedule. If Twofish is available in a software product, it can be considered comparable to AES, making it a secure solution. Twofish can outperform the AES algorithm in terms of speed, depending on the key length and on whether encryption is hardware- or software-based. Twofish is fast on both 8-bit and 32-bit processors, including embedded chips and smart cards, and in hardware. Moreover, the algorithm is flexible enough to be used in network applications where keys change frequently and in applications where little or no ROM or RAM is available. Both Blowfish and Twofish employ a 16-round Feistel network, wherein in each round half of the data block passes through the round function and is XORed with the remaining half. In each round, a 32-bit word is split into four bytes, which pass through four dissimilar substitution boxes and are then combined back into a 32-bit word. At times, Twofish can be a better choice than the AES algorithm due to its distinct blend of speed and flexibility: few other algorithms can trade off key-setup time or memory against encryption speed. We shall now learn about the Rivest Cipher 4, or R-C-4, algorithm. Rivest Cipher 4, or R-C-4, is a stream cipher with key sizes between 40 and 2048 bits, 128 bits being common, and only one round of encryption. Invented by Ron Rivest, the algorithm is also known as Ron’s code or cipher. The algorithm is widely used for encrypting wireless traffic through Wired Equivalent Privacy, or WEP, and Wi-Fi Protected Access, or WPA, encryption. It is also used for encrypting Internet traffic through the Secure Sockets Layer, or SSL, and Transport Layer Security, or TLS, protocols.
Facilitating a secure transfer under a shared key, RC4 generates a pseudorandom, not a purely random, key stream. In WEP, this stream is XORed with the plain text and a 32-bit integrity check value to create cipher text. The key stream comes from a pseudo-random number generator to which the combination of the WEP key and a 24-bit initialization vector acts as the input. This key stream, as a bit sequence, is the same size as the combination of the integrity check value and data. To create a payload for the wireless frame, the initialization vector is added to the front of the cipher text, which is an encrypted combination of the integrity check value and data.
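The keystream generation just described can be written out in a few lines of Python. This follows the published RC4 key-scheduling and pseudo-random generation algorithms, and is shown only for study, since RC4 is deprecated:

```python
def rc4_keystream(key: bytes):
    # Key-scheduling algorithm (KSA): permute a 256-entry state array
    # under the influence of the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit one keystream
    # byte per step while continuing to stir the state.
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, rc4_keystream(key)))

ct = rc4(b"Key", b"Plaintext")
print(ct.hex().upper())      # BBF316E8D940AF0AD3 (published test vector)
print(rc4(b"Key", ct))       # b'Plaintext'
```

Because encryption is just an XOR with the keystream, running the same function over the ciphertext decrypts it; the output above matches the widely published "Key"/"Plaintext" test vector.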

While WEP with RC4 is popular for Web applications due to its simplicity and speed in software, it is vulnerable to several attacks, such as the plaintext attack and the inductive attack, which aim to recover the key stream. Further, RC4 in WEP has weak data integrity in the form of the 32-bit ICV, as its bits can be changed by attackers within the encrypted payload. Further, the initialization vector that forms part of the encryption key is only 24 bits, which is weak and means that such vectors are reused with the same key. This makes it easy for attackers to crack the WEP key by scrutinizing the repeating output, which can take as little as 5 to 10 minutes. However, these issues are resolved with RC4 in a WPA environment. The RC4 cipher itself runs quickly in software and is secure to some extent, although its implementation in a WEP environment is not secure. This is why RC4 is still found in use today; however, you should choose its subsequent stronger and better versions, such as RC5 and RC6, for a higher level of security. Let’s now explore the differences between the WEP and WPA protocols for encryption. WEP, WPA, and WPA2 are Wi-Fi protocols, of which WEP was designed to offer security as robust as that of a wired network. However, WEP has numerous security holes that allow a hacker to breach its protection, although it keeps average unauthorized users away. To overcome these loopholes, WPA was introduced with integrity checks that detect whether an attacker has captured or altered packets between the client and access point, along with a per-packet key system that is more secure than the WEP fixed-key system. Due to its security flaws, WEP has already been replaced by the WPA and WPA2 protocols in most implementations. Let’s check out a few more differences between these protocols. WEP uses 64- or 128-bit encryption keys made up of a 24-bit initialization vector and a 40-bit or 104-bit key, respectively. In short, it uses RC4 and a static key.
However, WPA uses the RC4 algorithm with a 128-bit pre-shared key from which Temporal Key Integrity Protocol, or TKIP, dynamically generates per-packet keys. In this way, WPA provides more security despite using RC4. WPA2 replaces TKIP with CCMP, or Counter Mode with Cipher Block Chaining Message Authentication Code Protocol, and replaces RC4 with AES. It supports 128-bit, 192-bit, and 256-bit encryption. CCMP uses a 48-bit initialization vector with 128-bit AES encryption; the larger initialization vector increases the difficulty of cracking and reduces the risk of replay attacks. WEP implements the 802.11 Wi-Fi standard, whereas WPA and WPA2 implement the 802.11i standard. However, WPA implements most, though not all, of the latter standard in order to remain compatible with older wireless devices. WPA2 implements the full standard and is incompatible with older devices. WEP involves static encryption keys, whereas WPA and WPA2 use unique per-session encryption keys. In WEP, shared-key authentication, or SKA, is used, wherein everyone accessing a wireless network uses a pre-shared key, or PSK, distributed beforehand via an out-of-band communication. The same key value is used for both authentication and encryption, which is problematic. Under WPA and WPA2, the PSK is still a fixed value for all wireless users, but it can be a stronger one, and the key is not directly used as the encryption key. As for key distribution, it is automatic in WPA and WPA2 environments, while the key must be manually typed into each device in a WEP environment. WPA2 has not been practically broken, WPA has known attacks, and WEP has been thoroughly broken in practice. In 2010, researchers did find an exploitable weakness in WPA2; however, it only allows an already-authenticated insider to attack other users, comparable to what is possible on a wired network, so it is not an actual encryption failure. As of 2011, WPA2 is perhaps the strongest available Wi-Fi security, with no weak keys, and is resistant to hacker attacks.
WEP requires the least processing power, whereas WPA needs significant processing power. Of the three, it is WPA2 that demands the greatest processing power. WEP uses the WEP key itself for authentication. However, WPA and WPA2 can use 802.1x with the Extensible Authentication Protocol, or E-A-P, a highly secure authentication framework supporting several methods of authentication, such as Kerberos, smart cards, and tokens. Let’s now compare these symmetric algorithms to determine which are stronger and more reliable. We first compare the symmetric algorithms in terms of key length in bits. DES has a key length of 64 bits, of which 56 bits are usable, and 3DES has a key length of 168 or 112 bits. Similarly, AES and Twofish support key lengths of 128, 192, and 256 bits. Further, Blowfish and RC4 have variable key lengths of 32 to 448 bits and 40 to 2048 bits, respectively. Then, we compare the algorithms in terms of rounds. DES has 16 rounds; 3DES has 48 rounds; AES has 10, 12, or 14 rounds; Blowfish and Twofish have 16 rounds; and RC4 has only 1 round. Next, we compare the block sizes in bits. DES and 3DES have block sizes of 64 bits; AES has a block size of 128 bits; Blowfish has a block size of 64 bits; Twofish has a block size of 128 bits; and RC4, as a stream cipher, does not have a discrete block size. We now compare encryption speed. DES and 3DES are very slow; AES is relatively faster but dependent on key size; Blowfish is very fast; Twofish is fast; and RC4 is faster than AES. Finally, we compare the level of security. DES and 3DES provide adequate security, although they are prone to a few attacks. AES ensures excellent security but is vulnerable to side-channel and key-recovery attacks. Blowfish is highly secure, with no attacks having proved fruitful, whereas Twofish is secure but vulnerable to related-key and differential attacks. Last, RC4 is less secure, although fine for a moderate number of sessions and for older wireless devices that do not support more secure protocols or encryption algorithms.
From this comparison, we can conclude that Advanced Encryption Standard and Twofish are the most reliable algorithms, which makes them the best choices for government, financial, and banking systems. Keeping in mind that all keys need to be replaced even if only one key is compromised, a symmetric algorithm may not be feasible for the Internet environment despite its speed. It is better suited to encrypting and decrypting personal or confidential data stored on drives and other local storage resources.

In this topic, you will learn about asymmetric algorithms. The two biggest advantages of asymmetric algorithms are that you can exchange public keys securely and that the number of keys does not need to grow with the number of users. This is exactly in contrast to symmetric encryption. In an asymmetric system, you can freely share the public key while keeping the private key secret, and each user needs only a single key pair to exchange encrypted messages with all other users. However, the limitation of asymmetric algorithms is that they are slower than symmetric ones. Therefore, you should choose symmetric encryption when performance is a key factor. There are not as many asymmetric algorithms as there are symmetric ones. Some of the widely used asymmetric algorithms are Rivest Shamir Adleman, or R-S-A; Diffie-Hellman, or D-H; Ephemeral Diffie-Hellman, or D-H-E; Ephemeral Elliptic Curve Diffie-Hellman, or E-C-D-H-E; El Gamal; and Pretty Good Privacy. Asymmetric cryptography is also termed public-key cryptography, or P-K-C. However, not every algorithm in this family is a general-purpose encryption system: Diffie-Hellman, for example, is a key-agreement protocol rather than a pure public-key encryption system. As the name indicates, RSA got its name from its three creators, namely Ron Rivest, Adi Shamir, and Leonard Adleman. The creators patented this algorithm and entered into a commercial venture called RSA Security to design RSA-based mainstream implementations. Rivest Shamir Adleman, or R-S-A, was the first asymmetric algorithm to implement encryption with digital signing, thus ensuring confidentiality, integrity, and authentication with non-repudiation. Since its inception in 1977, the algorithm has been used widely in different environments and has become a de facto standard.
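The arithmetic behind an RSA key pair can be sketched with classic textbook-sized numbers. This is wildly insecure and purely illustrative; real keys are 1024 bits or more and require padding schemes such as OAEP:

```python
# Textbook RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                    # 3233, the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (Python 3.8+): 2753

m = 65                       # message encoded as an integer < n
c = pow(m, e, n)             # encrypt: 65^17 mod 3233 = 2790
print(c, pow(c, d, n))       # 2790 65 — decryption recovers the message
```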
Today, the algorithm is used in several networking environments, including S-S-L data transmission, e-mail encryption, and Secure Shell, or S-S-H, remote connections. It is also used for exchanging public keys over the Internet. RSA is also the security backbone of several popular security infrastructures from Cisco, Nokia, and Microsoft. Although reliable and safe, RSA is slow at computation because of the large numbers involved. Due to this, it is more commonly implemented for the safe delivery of symmetric keys in a hybrid cryptographic system that uses both symmetric and asymmetric algorithms. Further, the algorithm is prone to a few attacks, but because of its continuing reliability and adequate security, it is used in environments demanding public-key cryptography for transmission or storage. There are cases where neither offline distribution nor public-key encryption exists for the proper and safe exchange of keys: two parties want to communicate, but there is no physical means to exchange keys and no public-key infrastructure is deployed. In such situations, where communication over an insecure channel becomes essential, you would rely upon the Diffie-Hellman algorithm, the most popular key-exchange algorithm. Whitfield Diffie and Martin Hellman introduced this algorithm, which is meant only for creating and exchanging a symmetric shared key between two parties, not for encrypting or decrypting data. The algorithm primarily aims to send keys across public networks so securely that no third party can determine the shared key. This is the reason the Diffie-Hellman algorithm is widely used in several security protocols and products. The table below compares and contrasts the RSA and Diffie-Hellman algorithms. First, RSA is an encryption algorithm, while Diffie-Hellman is a key-exchange algorithm not meant for encryption or decryption. Second, RSA encrypts messages using key lengths between 1024 and 3072 bits.
RSA Security estimates that 1024-bit keys can already be cracked, that 2048-bit keys will remain strong until the year 2030, and that a 3072-bit key will last beyond 2030. A 1024-bit RSA key is fine for exchanging the symmetric keys used for bulk encryption and for generating digital signatures, whereas a 2048-bit key is ideal for securing a digital signature over an extended period. The Diffie-Hellman algorithm features roughly the same key strength as RSA for keys of the same size. It depends on the discrete logarithm problem, which is related to the integer factorization problem responsible for RSA’s strength. Therefore, a 3072-bit Diffie-Hellman key has strength equal to a 3072-bit RSA key. Third, the security of Diffie-Hellman depends on the infeasibility of computing discrete logarithms, while RSA’s security depends on the infeasibility of factoring large numbers.
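The Diffie-Hellman exchange itself can be sketched with small illustrative numbers (real deployments use primes of 2048 bits or more):

```python
# Toy Diffie-Hellman key exchange, purely illustrative.
p, g = 23, 5                 # public prime modulus and generator
a, b = 6, 15                 # private exponents chosen by each party

A = pow(g, a, p)             # Alice sends 5^6  mod 23 = 8
B = pow(g, b, p)             # Bob   sends 5^15 mod 23 = 19
shared_alice = pow(B, a, p)  # 19^6 mod 23
shared_bob = pow(A, b, p)    # 8^15 mod 23
print(shared_alice, shared_bob)   # 2 2 — both derive the same secret
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret from those requires solving the discrete logarithm problem, which is infeasible at real key sizes.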

Dr. T. El Gamal described how the Diffie-Hellman key-exchange algorithm could be extended mathematically to support encryption and decryption in a public-key cryptosystem. This is what the El Gamal algorithm does. This asymmetric algorithm combines the infeasibility of the discrete log problem with the ability to encrypt and decrypt messages. It works much like the Diffie-Hellman algorithm but adds the choice of a random ephemeral key, called k, that exists only for one session, and an additional encryption step of multiplication modulo p. Knowing about El Gamal makes sense because Diffie-Hellman does not provide encryption, authentication, or non-repudiation through digital signatures; El Gamal, with both encryption and signing algorithms, ensures all three of these pillars of security. The El Gamal cryptosystem is usually implemented in a hybrid cryptosystem wherein the message is encrypted using a shared symmetric key and El Gamal then encrypts that symmetric key. This is because El Gamal, being an asymmetric cryptosystem, is slower than symmetric ones at the same security level, so it is faster to encrypt the symmetric key, which is usually much smaller than the message. El Gamal, however, has a major limitation: it doubles the length of anything it encrypts. This creates major hardship when encrypting big messages sent through a narrow-bandwidth channel. At the time of its release, El Gamal had a primary advantage over the RSA algorithm: it was free for public use. Since the RSA patent expired in 2000, RSA is now also free for public use. The table below compares and contrasts the RSA and El Gamal algorithms. RSA supports key lengths greater than 1024 bits, whereas El Gamal is typically limited to 1024 bits. RSA consumes more power, whereas El Gamal does not need as much. RSA is not as efficient in hardware and software implementations, whereas El Gamal is faster and more efficient.
While RSA is semantically secure only when random padding is used, El Gamal is semantically secure out of the box. Semantic security means that an adversary cannot obtain significant information about a message from its cipher text and the public encryption key. Compared to RSA, Diffie-Hellman and El Gamal deliver better performance, especially through their elliptic curve variants. This is because these variants, which we will see shortly, use smaller fields, although the math is somewhat more complex. Both the RSA and Diffie-Hellman algorithms commonly use 1024-bit keys, but the National Institute of Standards and Technology, or N-I-S-T, recommended this length for use only until the year 2010. Further, these algorithms demand significant computational power for processing keys. So, rather than relying on RSA or Diffie-Hellman, you might shift to another option. Instead of switching to a different system, you can upgrade the existing cryptography system by implementing an ECC algorithm that generates keys using the properties of the elliptic curve equation rather than relying on large prime numbers. Elliptic Curve Cryptography offers functionality similar to RSA but uses smaller key sizes to deliver the same level of security. It is a public-key encryption system based on elliptic curve theory, wherein points on an elliptic curve, together with a point at infinity, are used in conjunction with the elliptic curve discrete logarithm problem. ECC aims to generate smaller, faster, and more efficient cryptographic keys. Because computer scientists believe the elliptic curve discrete logarithm problem is harder to solve than both RSA’s prime factorization and Diffie-Hellman’s standard discrete logarithm, better security is ensured.
Further, ECC systems need less computing power than these algorithms, which makes them ideal for lower-powered devices with less memory, such as tablets, mobile phones, and e-book readers. For example, a 1024-bit RSA key is roughly equal to a 160-bit ECC key in terms of security and functionality. Further, as the symmetric key size increases, the corresponding Diffie-Hellman or RSA key size also increases, but at a much higher rate than the ECC key size. This strongly favors the use of ECC in low-powered environments such as smart cards and wireless devices. It is also worth noting that RSA is faster for encryption, while ECC is faster for decryption. In terms of algorithms, there are many variations of ECC, such as Elliptic Curve Diffie-Hellman, or E-C-D-H, and the Elliptic Curve Digital Signature Algorithm, or E-C-D-S-A. The Digital Signature Algorithm, or DSA, is an asymmetric algorithm used exclusively for signing. While DSA ensures faster signature generation, RSA ensures faster signature verification. The mathematical concepts of ECC are quite complex and beyond the scope of this course, so we will not delve deeper. Click here to know more about the detailed mathematics of elliptic curve cryptosystems. Adding an ephemeral, or session, key to the Diffie-Hellman algorithm makes it D-H-E, or Ephemeral Diffie-Hellman, despite the acronym order. Similarly, adding an ephemeral key to the Elliptic Curve Diffie-Hellman algorithm makes it Ephemeral Elliptic Curve Diffie-Hellman, or E-C-D-H-E. The ephemeral component in these algorithms ensures perfect forward secrecy. A standard Diffie-Hellman key exchange uses the same private keys each time the same parties are involved. With the Ephemeral Diffie-Hellman algorithm, however, a temporary DH key is created for every connection, which means the same key is never used twice.
This provides forward secrecy: even if the long-term private key is later leaked, previously recorded exchanges remain secure. However, Ephemeral Diffie-Hellman does not offer authentication on its own, because the keys change every time. Its common application is in a TLS environment, where it ensures perfect forward secrecy by carrying out several rekeying operations within a session. Rekeying ensures that no single key protects an entire session. The Ephemeral Elliptic Curve Diffie-Hellman algorithm implements perfect forward secrecy via elliptic curve cryptography. It is more efficient and secure than Ephemeral Diffie-Hellman because it combines a lower computational burden with the hard-to-solve elliptic curve discrete logarithm problem. Before moving on to the next set of algorithms, we shall explore the Pretty Good Privacy application.
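The per-connection rekeying idea behind the ephemeral variants can be sketched by rerunning a toy Diffie-Hellman exchange with freshly drawn random exponents for every session (toy parameters, purely illustrative):

```python
import secrets

# Sketch of the ephemeral idea: the same two parties rerun the exchange
# with fresh one-session exponents every time, so no long-lived shared
# secret exists to steal later. Real DHE uses 2048-bit groups.
p, g = 23, 5   # toy public parameters

def dhe_session() -> int:
    a = secrets.randbelow(p - 2) + 1   # Alice's one-session exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's one-session exponent
    A, B = pow(g, a, p), pow(g, b, p)
    secret = pow(B, a, p)
    assert secret == pow(A, b, p)      # both sides agree
    return secret

print([dhe_session() for _ in range(5)])  # fresh secret per session
```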

Introduced by Phil Zimmermann, Pretty Good Privacy, or P-G-P, is an e-mail encryption application for e-mail security. It is usually used to digitally sign and encrypt e-mail messages. PGP combines symmetric and asymmetric techniques in its process, which makes this freeware quite competent. While encrypting, the process uses a one-use random session key to generate the ciphertext. The session key is then encrypted with the recipient's public key and sent along with the ciphertext. At the receiver's end, the private key recovers the session key, which in turn decrypts the ciphertext to recover the e-mail document. As a public-private key system, PGP implements different encryption algorithms. The first version used RSA, while the second used the International Data Encryption Algorithm, or I-D-E-A, implementing a 128-bit key. IDEA is similar to DES in terms of speed and capability but is more secure. PGP can even use both: IDEA to generate a session key and RSA to encrypt it. An alternative to PGP is GNU Privacy Guard, or GPG, a part of the GNU project and interoperable with PGP. GPG supports almost all common symmetric algorithms along with the ElGamal algorithm. Right now, PGP is not a standard but an independent application with wide Internet support. Know More: GPG, as a free replacement for PGP, is available for download at www.gnupg.org. We now move on to the hashing algorithms. Just as symmetric and asymmetric encryption algorithms depend on mathematical functions for encrypting or decrypting information, a hashing algorithm also depends on mathematics for generating hash values from the input. It should be noted that hashing by itself does not protect the information being sent against external attacks. While the message traverses the network, an attacker can intercept it, change it, recalculate the hash, and append the new hash to the packet.
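The hybrid flow just described, a random session key for the bulk data and a public key to wrap that session key, can be sketched with textbook RSA on deliberately tiny primes. The XOR "cipher", the 12-bit modulus, and the message are toy stand-ins and bear no resemblance to real PGP key sizes or algorithms.

```python
import secrets

# Textbook RSA with tiny primes (p=61, q=53) -- purely illustrative.
n, e, d = 3233, 17, 2753   # public modulus, public exponent, private exponent

def xor_stream(data: bytes, key: int) -> bytes:
    # Toy stand-in for a symmetric cipher keyed by the session key.
    keystream = [(key * (i + 1) * 2654435761) % 256 for i in range(len(data))]
    return bytes(b ^ k for b, k in zip(data, keystream))

# Sender: pick a one-use session key, encrypt the message with it,
# then encrypt the session key itself with the recipient's public key.
session_key = secrets.randbelow(n - 2) + 2
ciphertext = xor_stream(b"meet at noon", session_key)
wrapped_key = pow(session_key, e, n)     # "RSA-encrypt" the session key

# Receiver: recover the session key with the private key, then decrypt.
recovered_key = pow(wrapped_key, d, n)
plaintext = xor_stream(ciphertext, recovered_key)
assert plaintext == b"meet at noon"
```

The design point is the one the lesson makes: the slow asymmetric operation touches only the short session key, while the fast symmetric operation handles the message itself.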
This means hashing only detects accidental modifications, such as those caused by a communication error, not deliberate changes: because nothing secret to the sender goes into the hash value, anybody can calculate a hash value for any data, provided the hash function is known. Used properly, however, hashing ensures data integrity. According to RSA Security, a cryptographic hash function should accept input of any length, give an output of a fixed length, and be free of collisions, meaning no two messages should produce the same hash value. Further, the function is one-way, meaning one cannot determine the input from a given output. This is the real strength of hashing: it is mathematically infeasible to convert a hash back into its original data, thus ensuring a high level of security. Hashing is used to generate one-time responses to challenges in authentication protocols such as Challenge Handshake Authentication Protocol, or C-H-A-P, and Password Authentication Protocol, or P-A-P, and in Microsoft NT domains. It is also used to offer proof of data integrity with digitally signed documents, PKI certificates, and file integrity checkers. Hashing is also essential to provide proof of authenticity when used with a symmetric authentication key, as in routing protocols or Internet Protocol Security, or IPSec. Some of the most popular hashing algorithms are Message Digest 5, or M-D-5; Secure Hash Algorithm, or S-H-A; RACE Integrity Primitives Evaluation Message Digest, or R-I-P-E-M-D; Hash-based Message Authentication Code, or H-M-A-C; New Technology LAN Manager, or N-T-L-M; and New Technology LAN Manager version 2, or N-T-L-M-v-2. Designed by Ron Rivest, Message Digest Algorithm 5, or M-D-5, is the newest version of the MD series, generating a 128-bit message digest or hash value regardless of the input size. It implements a complex series of simple binary operations, such as rotations and XORs.
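The properties listed above, arbitrary-length input, fixed-length output, and drastic change of the digest for a tiny change of the input, are easy to observe with Python's standard hashlib module:

```python
import hashlib

# A hash maps input of any length to a fixed-length digest.
for msg in (b"", b"a", b"a much longer message " * 100):
    digest = hashlib.sha256(msg).hexdigest()
    assert len(digest) == 64   # SHA-256 always yields 256 bits (64 hex chars)

# A small change in the input produces a completely different digest
# (the avalanche effect), which is what makes tampering detectable.
h1 = hashlib.sha256(b"transfer $100").hexdigest()
h2 = hashlib.sha256(b"transfer $900").hexdigest()
assert h1 != h2
```

Note that anyone can run this computation, which is exactly why a bare hash detects accidental corruption but not deliberate tampering; the keyed HMAC construction discussed later in this lesson addresses that gap.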
The input string is initially padded so that its length is congruent to 448 modulo 512 bits, after which the original string's bit length is appended as a 64-bit integer. In short, the input is transformed into a series of 512-bit blocks, from which a 128-bit digest is computed after four rounds of binary computations. The algorithm is more complex than its earlier versions, but that complexity is exactly what is responsible for the greater security it offers. The algorithm is commonly used to check the data integrity of backups and other files, as well as applications downloaded from different locations on the Internet. The vendor usually displays the MD5 hash values on the site for checking the integrity of the downloaded data. The major weakness of this algorithm is that it is not resistant to collisions, which is why it is no longer widely used. Moreover, MD5's additional security features significantly decrease the speed of message digest production. Therefore, SHA-1 or SHA-2 is the recommended alternative, although MD5 remains in use in several operating systems and common software tools. Developed by the U.S. National Institute of Standards and Technology, or N-I-S-T, and specified in the Federal Information Processing Standard (FIPS) 180, the Secure Hash Algorithm, or S-H-A, aims to ensure message integrity. This one-way hash function generates a 160-bit hash value for use with an encryption protocol.

SHA processes a message in 512-bit blocks. If the message length is not a multiple of 512, SHA pads the message with extra bits until the length reaches the next multiple of 512. SHA-1 is a revised version of SHA that rectified an unpublished SHA flaw. Designed similarly to the MD4 hash family, SHA-1 accepts messages shorter than 2 raised to 64 bits and creates a 160-bit digest. The SHA-1 algorithm is a bit slower than MD5, but the 32-bit longer digest ensures more security against brute-force and collision attacks. This is why SHA-1 is preferred over MD5. If performance is an issue, MD5 can still be used, but do not expect any significant performance gain. SHA-1 also has weaknesses, due to which SHA-2 was introduced with four variants: SHA-224, generating a 224-bit digest using a 512-bit block size; SHA-256, a 256-bit digest with a 512-bit block size; SHA-512, a 512-bit digest using a 1,024-bit block size; and SHA-384, a truncated version of SHA-512, also using a 1,024-bit block size. The latter three are considered resistant to collision attacks. Because of this flexibility and security, SHA-2 is the most widely used SHA algorithm, even though SHA-3 is now a standard. To summarize, here is a table comparing the different SHA algorithms along with MD5. From the table, you can conclude that SHA-3 is more efficient, owing to fewer rounds, as well as more reliable, since no collisions have been found. The RACE Integrity Primitives Evaluation Message Digest, or R-I-P-E-M-D, algorithm is also based on MD4, just as SHA and MD5 are. Because its security was questionable, it has been replaced by RIPEMD-160, which produces a 160-bit digest from 512-bit processing blocks. RIPEMD-160 was introduced as an alternative to SHA-1, but it is not as popular. There are other versions using 256 and 320 bits as well. Like MD5, RIPEMD has no fixed maximum message size, but it implements more rounds than MD5 and SHA-1.
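The digest and block sizes quoted above can be checked directly with hashlib; the algorithm names assume a standard OpenSSL-backed Python build (SHA-3's block size shown here is its internal "rate" rather than a Merkle-Damgård block):

```python
import hashlib

# Print digest and block sizes (in bits) for the algorithms discussed.
for name in ("md5", "sha1", "sha224", "sha256", "sha384", "sha512", "sha3_256"):
    h = hashlib.new(name)
    print(f"{name}: {h.digest_size * 8}-bit digest, {h.block_size * 8}-bit block")
```

Running this confirms, for example, that SHA-256 keeps the 512-bit block of SHA-1 while SHA-384 and SHA-512 move to 1,024-bit blocks.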
Nevertheless, SHA is still preferred over both of these hashing algorithms. You can consider RIPEMD-160 a 160-bit counterpart of MD5; it can be combined with SHA or other hash algorithms for higher security in modern applications such as online Bitcoin wallets. NTLM refers to a password hash storage system available on Microsoft Windows. It has two versions: NTLMv1 and NTLMv2. NTLMv1 is a challenge-response protocol, just like CHAP, in which the client generates two responses, an MD4 hash and an LM hash, to a random challenge issued by the server; the LM hash response is sent only if the user's password is at most 14 characters, otherwise only the MD4 hash response is sent. NTLMv2 is similar to version 1 but implements a more complex, MD5-based process. Both NTLM versions generate non-reversible hashes and are thus more secure than LM hashing. Nevertheless, cracking mechanisms can recover NTLMv1 or v2 passwords shorter than 15 characters, provided the attacker has sufficient processing power and time. NTLM uses the MD4 or MD5 hashing algorithms and, despite employing hashing functions, is primarily used for authentication. It remains widely in use, even though Kerberos is Microsoft's preferred authentication protocol. The table given below shows a comparative analysis of the LM, NTLM, and Kerberos authentication protocols in Windows. In terms of hashing, NTLMv2 and Kerberos use MD4 to generate a hash of length 128 bits; however, they differ in the response algorithm used and in the response value length. H-M-A-C, or Hash-based Message Authentication Code, is what we are going to explore in the next screen. The Hash-based Message Authentication Code, or H-M-A-C, algorithm uses a secret key and a hashing algorithm to compute a hash value known as a Message Authentication Code, or M-A-C. It implements a partial digital signature that ensures message integrity during transmission, but because of the shared secret key, it does not guarantee non-repudiation the way a digital signature does.
At the receiver's end, the MAC is recomputed and compared with the received one; if the two do not match, the message is discarded, as it has been altered in transit. In practice, HMAC is usually combined with a standard message digest algorithm, such as SHA-2. Only communicating parties that know the key can produce or verify the authentication code. HMAC is not typically a hashing option presented to the end user or even to an administrator; rather, some cryptographic solutions are designed to take advantage of HMAC. For instance, the IPSec protocol uses it to verify packet integrity and authenticity. Moreover, HMAC is an appropriate option for applications using symmetric-key cryptography.
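A minimal sketch of this compute-then-verify flow, using Python's standard hmac module with HMAC-SHA-256 (the key, message, and field names are illustrative):

```python
import hashlib
import hmac

secret = b"shared-secret-key"      # known only to the two communicating parties
message = b"amount=100&to=alice"

# Sender computes the MAC over the message with the shared key...
mac = hmac.new(secret, message, hashlib.sha256).hexdigest()

# ...and the receiver recomputes it and compares in constant time.
def verify(key: bytes, msg: bytes, received_mac: str) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_mac)

assert verify(secret, message, mac)                   # untampered: accepted
assert not verify(secret, b"amount=999&to=eve", mac)  # altered: rejected
```

Unlike a bare hash, an attacker who changes the message cannot recompute a valid MAC without the shared key; `hmac.compare_digest` is used so the comparison does not leak timing information.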

In this topic, you will learn about transport encryption protocols. Some of the most common types of traffic to encrypt include e-mail, FTP, and Telnet. While making a security plan, you would obviously wish to ensure the usage of the most secure communication protocols. Keeping this in mind, let us discuss the common transport protocols or applications used to encrypt communication in network traffic. These are Secure Sockets Layer, or SSL; Transport Layer Security, or TLS; Internet Protocol Security, or IPSec; Secure Shell, or SSH; and HTTP Secure, or H-T-T-P-S. Secure Sockets Layer (SSL) establishes a secure connection between two TCP-based machines. Developed for Netscape browsers and later adopted by other browsers such as Internet Explorer, SSL became the de facto standard for client/server encryption over the Web. It aims to ensure secure channels for an entire browsing session. The SSL protocol uses a handshake to establish a session; the number of steps depends on whether mutual authentication is included and on which steps are combined, but there are usually between four and nine. SSL involves exchanging server digital certificates to pass RSA encryption/decryption parameters between the Web server and the browser. However, instead of RSA, SSL can use ECC or DSA for a more secure and scalable environment. SSL establishes a session using asymmetric encryption and maintains it using symmetric encryption. While accessing a site, the browser obtains the web server's certificate and retrieves the server's public key from it. Then, the browser generates a random symmetric key and encrypts it using the server's public key. The browser sends this encrypted key to the server. The server uses its own private key to decrypt the symmetric key, after which the two systems exchange all further information using the symmetric encryption key.
This hybrid approach allows SSL to offer the benefits of both asymmetric and symmetric cryptography. As a security administrator, you should know that the clients must be able to accept the level of encryption for SSL to function properly. Modern browsers accept 128-bit encrypted sessions and certificates, whereas earlier browsers used 40- or 56-bit SSL encryption; it is better to push all clients to modern browsers. Netscape subsequently released SSL versions 2.0 and 3.0. Although an industry standard, SSL's use is declining because of the public desire for a fully open alternative called Transport Layer Security, or TLS. Based on SSL, TLS is a newer security standard quickly surpassing SSL in popularity. Just like SSL, the Transport Layer Security, or TLS, protocol supports mandatory server authentication and optional client authentication. However, TLS is a security protocol that expands upon SSL, and many industry analysts foresee TLS replacing SSL entirely. Newer TLS versions are often referred to by SSL version numbers: TLS 1.0 corresponds to SSL 3.1, and TLS 1.1 to SSL 3.2. TLS, as a cryptographic protocol, negotiates security settings through a cipher suite, a combination of authentication, encryption, and MAC algorithms, as shown in the image. It is vital to note that TLS supports two key negotiation protocols, Diffie-Hellman Ephemeral and Elliptic Curve Diffie-Hellman Ephemeral, that ensure perfect forward secrecy.
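You can inspect the cipher suites your local TLS stack offers with Python's standard ssl module; the exact list depends on the OpenSSL build behind your interpreter, so treat the output as platform-specific:

```python
import ssl

# Build a client context with secure defaults and list its cipher suites.
ctx = ssl.create_default_context()
for cipher in ctx.get_ciphers()[:5]:
    # Each entry names, as one unit, the key exchange, authentication,
    # encryption, and MAC/digest algorithms a connection may negotiate.
    print(cipher["name"], "-", cipher["protocol"])
```

During the handshake, client and server settle on one suite from the intersection of their lists, which is how a single negotiation fixes all four algorithm choices at once.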

Before moving to the next protocol, let's look at cipher suites in detail. A cipher suite refers to a standardized group of security algorithms, providing the authentication, encryption, hashing, and message authentication code algorithms that define the security parameters. Several algorithms can be used in cipher suites. For example, you can use key exchange algorithms such as Diffie-Hellman, RSA, and ECDH. Similarly, you can use authentication algorithms such as RSA and ECDSA. A suite can also include encryption algorithms such as 3DES, AES, and RC4. Lastly, it can encompass message authentication algorithms such as SHA or MD5. It is vital to note that not all ciphers or algorithms in a suite are strong or secure; several older algorithms are known to have weaknesses or flaws. It is not a cipher's age but the protocol in use and the bit length of the key that determine how strong or weak a cipher is. Many vendors permit setting up cipher suite preferences on a server to specify the strength level for client connections, for example, in terms of Weak, Strong, and FIPS, where Strong enforces the use of at least 64-bit encryption algorithms and Weak allows any algorithm smaller than 64 bits. Choosing FIPS ensures that the hash, encryption, and key exchange algorithms are FIPS-compliant, such as AES, 3DES, and SHA-1. Is there any way to convert a weak key into a strong one? Yes, there is! Let's check it out now. To expand a weak key or password and make it more secure against brute-force attacks, key stretching is what you should prefer. It refers to a set of techniques involving thousands of iterative computations that yield a key with more bits, increasing the attacker's effort by orders of magnitude. Although the end user does not notice this increased effort, it definitely makes reversing the key hard for hackers.
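One widely used key-stretching function, PBKDF2, is available in Python's standard library. In this sketch a short password and a random salt are stretched into a 128-bit key; the iteration count is illustrative, and real deployments should follow current guidance:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)   # random, non-secret value stored alongside the hash

# Stretch the password into a 128-bit key via many HMAC-SHA-256 iterations.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, 16)
assert len(key) == 16   # 16 bytes = 128 bits

# Same password and salt reproduce the key; a different salt does not.
again = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, 16)
other = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 200_000, 16)
assert key == again
assert key != other
```

The 200,000 iterations are imperceptible for one login attempt but multiply a brute-force attacker's cost by the same factor, which is the whole point of key stretching.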
Usually, key stretching is used in practice to transform a user's password into an encryption key. A typical password is only 8 to 12 characters, representing 64 to 96 bits. For symmetric encryption, the key should be at least 128 bits for sensible security. In this situation, the password is run through a series of hash operations to yield a 128-bit key. Two methods are commonly used for key stretching, namely Password-Based Key Derivation Function 2, or PBKDF2, and Bcrypt. PBKDF2 repeatedly applies a keyed hashing operation, an HMAC, or an encryption cipher function to the password, combined with a salt, which is random data added to the password before hashing. It is worth noting that salting and iteration make hashing a computationally intensive and more complicated process for an attacker. On the other hand, Bcrypt uses an algorithm derived from Blowfish, converted into a hashing algorithm that hashes a password together with a salt. The Bcrypt technique also uses an adaptive work factor, allowing the number of iterations to be increased over time; as a result, it is deliberately slow. Internet Protocol Security, or IPSec, is a security protocol that offers authentication, confidentiality, and encryption for all Internet Protocol, or IP, traffic over the Internet. It does so through an open, modular framework, allowing manufacturers and software developers to design IPSec solutions that interoperate with products from other vendors. From an application standpoint, this protocol is used to establish a secure point-to-point link across a non-trusted network such as the Internet. For example, an IPSec connection can be used to communicate securely between two remote offices. Set forth by the Internet Engineering Task Force, or I-E-T-F, IPSec ensures a secure channel between two networks, routers, systems, or gateways. It can even connect individual computers, such as a workstation and a server.
IPSec is incorporated into IPv6 and implements public-key cryptography to ensure access control, encryption, and message authentication. When it comes to an algorithm for exchanging keys over an insecure medium, the usual choice is Diffie-Hellman; within IPSec, this exchange is handled by the Internet Key Exchange, or IKE, protocol, which itself builds on Diffie-Hellman.

Considered highly secure, IPSec is quickly becoming a standard for encrypting the channels of Virtual Private Networks, or VPNs. In fact, the primary use of this protocol is to create secure tunnels for VPNs: IPSec operates in tunnel mode for gateway-to-gateway communication or in transport mode for peer-to-peer communication. In most cases, it is paired with the Layer 2 Tunneling Protocol, or L-2-T-P, to create packets that are difficult to read if intercepted by a third party. Operating at layer 3 of the Open Systems Interconnection, or O-S-I, networking model, the IPSec protocol offers a comprehensive infrastructure for secured communication. It uses two primary protocols, namely Authentication Header (AH) and Encapsulating Security Payload (ESP), both running in either tunnel or transport mode. Authentication Header, which operates as IP protocol 51, authenticates the sender, while Encapsulating Security Payload, running as IP protocol 50, encrypts packet data to provide confidentiality. While both IPSec and SSL are used for Internet traffic, they are significantly different protocols. Let's now compare SSL with IPSec. SSL is for Web applications such as e-mail and file sharing, whereas IPSec is for all IP applications. SSL supports digital signatures, whereas IPSec uses both digital signatures and pre-shared keys. SSL offers moderate authentication by implementing one-way or two-way authentication, but IPSec ensures strong authentication through two-way authentication via shared keys or digital signatures. Moreover, SSL carries insecure IP headers, whereas IPSec authenticates IP headers. SSL delivers moderate to strong encryption with a key length ranging from 40 to 256 bits, whereas IPSec ensures strong encryption with 56-bit to 256-bit keys. SSL or TLS operates at the Transport layer of the OSI model, whereas IPSec operates at the Network layer. Further, SSL encrypts the application layer, whereas IPSec encrypts both the TCP and application layers.
While SSL works from any computer with a supporting browser, IPSec requires workstation configuration, which can be challenging for non-technical users. Moreover, any device can connect to SSL communication, which is not the case with IPSec, where only configured devices can connect. Secure Shell, or SSH, is another tunneling protocol, originally designed for Unix systems but today also used in Windows environments. SSH is a good example of an end-to-end encryption method, and it securely replaces common but insecure Internet applications such as Telnet and FTP and Unix r-tools such as rshell, rcp, and rlogin. The key benefit is that SSH protects the information from being intercepted. Primarily used for interactive terminal sessions, SSH offers authentication and encryption services. SSH is frequently used with a terminal application such as Minicom on Linux, HyperTerminal on Windows, or PuTTY on both. The most common scenario for using SSH is to connect to a router or switch remotely to change configuration settings. The handshake process is quite similar to that of SSL. Two versions of SSH exist: SSH1, which supported the DES, 3DES, Blowfish, and IDEA algorithms but is now considered insecure, and SSH2, which dropped support for DES and IDEA but added support for other algorithms. SSH2 is more secure, with stronger authentication and encryption. Cryptographically, SSL and SSH are both secure protocols. So, what's the difference? Let's find out. First, SSL is mainly used for securely transmitting confidential information, such as credit card details. On the other hand, SSH is predominantly used for executing commands securely across the Internet. Second, SSH comes with built-in key-pair authentication, making it quite easy to implement; you need only exchange the key fingerprints out-of-band. SSL and TLS, by contrast, rely on a PKI with signed certificates, which makes implementation a bit more cumbersome.
Third, SSH usually operates on port 22, whereas SSL for Web traffic uses port 443, the HTTPS port. Fourth, SSH is oriented around network tunneling, whereas SSL is associated with digital certificates for announcing client/server keys. Fifth, SSH comes with an array of protocols that determine what goes inside the tunnel, such as password-based authentication and multiplexing of several transfers; such facilities are not present in SSL. Last, SSH is more popular for Unix communication, whereas SSL or TLS is more popular for Windows communication. Hypertext Transport Protocol over SSL, or H-T-T-P-S, is the secure version of the most popular protocol, HTTP. It is also known as Hypertext Transport Protocol Secure and implements SSL or TLS to encrypt the channel between the client and server, which is otherwise unsecured in the case of HTTP. HTTPS uses port 443 and is now the de facto standard for online communication; several e-business systems rely on this protocol for secure transactions. You can easily identify an HTTPS session by looking for "https" at the beginning of the URL and a little lock icon in the browser.

Let us summarize the topics covered in this lesson. • WEP uses 64- or 128-bit encryption keys with RC4 encryption and is less secure than WPA, which uses a 128-bit key with the Temporal Key Integrity Protocol and RC4. • WPA2 replaces TKIP with the Counter Mode with Cipher Block Chaining Message Authentication Code Protocol, or CCMP, and RC4 with AES, and supports up to 256-bit encryption. • DES is a 64-bit block cipher offering 56 bits of key strength, whereas 3DES comes with an effective key strength of either 168 or 112 bits. • AES uses key lengths of 128, 192, and 256 bits. • RSA is based on the infeasibility of factoring the product of large prime numbers. • The Diffie-Hellman algorithm is used only for creating and exchanging a symmetric shared key over untrusted networks. • Elliptic curve algorithms provide more security and efficiency than other algorithms using the same key length, even on smaller processors. • Both Ephemeral Diffie-Hellman, used by TLS, and Ephemeral Elliptic Curve Diffie-Hellman ensure perfect forward secrecy. • The most commonly used hash algorithms are 160-bit SHA and 128-bit MD5, although SHA is preferred over MD5. • TLS is more popular than SSL and uses cipher suites to negotiate security settings. • IPSec secures all IP and VPN communications in either transport or tunnel mode. • SSH has replaced Telnet and the Unix r-tools for interactive terminal sessions. With this, we conclude this lesson, "Using Appropriate Cryptographic Methods in a Given Scenario." In the next lesson, we will look at "Using Appropriate PKI, Certificate Management, and Associated Components in a Given Scenario."
