Thursday, June 5, 2008

Final Assignment

The final reading assignment concerns itself with what has been hailed recently as the "future" of not only cryptology, but all of computing. Namely, the use of quantum mechanics to somehow create a quantum computer which operates on a completely different premise than that of normal computers. Though the quantum mechanics are somewhat complicated, the basic premise of the system is the utilization of the "spins" or orientations of subatomic particles to represent unique information. The challenge for this system of computing is that there is not currently a known way to consistently and accurately measure these subatomic particles without changing the states that they are in and thus changing the data.

If quantum computing can be realized, however, it represents an almost unimaginable leap in computing power because it allows computations to be done in parallel rather than sequentially. Thus, if a problem can be broken into parts, each part can be computed simultaneously rather than in order, yielding a massively faster machine. This would make brute force attacks much more feasible and would effectively eliminate ElGamal and RSA as viable cryptosystems. Quantum computers would, however, be able to put their new power toward more advanced cryptosystems, and the "battle between cryptography and cryptanalysis" would supposedly continue.

Tuesday, June 3, 2008

Assignment 26

Chapter five deals with the Advanced Encryption Standard, Rijndael, a symmetric-key encryption system meant to replace DES, which has recently been deemed too insecure for commercial use. Rijndael was one of several finalists chosen through a competition. Thus, the algorithm is perhaps not perfect and, as the book mentions, the other four finalists may be used in future cryptosystems. Rijndael, aside from involving some difficult computation, is based on the application of rounds: 10 for a 128-bit key, and 12 or 14 for the larger key sizes. The application was somewhat difficult to understand because each round involves basically four different operations: three transformations and then an XOR with the round key.
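To make the XOR step a little more concrete, here is a minimal sketch (in Python) of just the round-key XOR on a 16-byte state; the state and round key below are made-up placeholder values, not real Rijndael data, and the three transformations are not shown.

def add_round_key(state, round_key):
    # XOR each byte of the state with the corresponding round-key byte
    return bytes(s ^ k for s, k in zip(state, round_key))

state = bytes(range(16))          # placeholder 4x4 state, flattened
round_key = bytes([0xAA] * 16)    # placeholder round key
print(add_round_key(state, round_key).hex())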

The significant part of this reading is mainly that this is a departure from the public-key cryptosystems and indicates that, despite the effectiveness of RSA and ElGamal, symmetric-key cryptosystems are still more efficient and cost effective than their public-key counterparts. Rijndael is a block cipher, indicating that this is still the preferred method of encryption for symmetric systems and, hopefully, will be improved upon to make a truly secure symmetric system other than a one-time pad.

Sunday, May 18, 2008

Assignment 21

The first midterm was an interesting combination of the history of cryptography and an introduction to number theory. I am still somewhat disappointed that misreading the final question resulted in a less than stellar grade. The first part of the class was also somewhat disconnected: Euler's theorem and the Vigenère cipher, for instance, had almost no overlap, so the questions felt like an assortment of disconnected tidbits. For this midterm, I expect a similar mix of conceptual and application problems, but I also think that the questions will be more encompassing than the first.
The second part of the course so far has been almost exclusively dedicated to public-key cryptosystems, and I would imagine that a variety of questions will relate directly to RSA, hash functions and discrete logarithms even if they do not explicitly say so. All of the number theory and theorems covered have basically been background information to let us appreciate that factoring large numbers and computing discrete logarithms is hard. Using this mathematical basis, I imagine the test will compare public-key cryptosystems and perhaps be more theoretical than the first. In any event, I am not prepared for it yet and am somewhat frightened by the number of possible questions, though it appears to be much smaller than the number of calculations necessary to factor a product of two large primes...

Thursday, May 15, 2008

Assignment 20

Section 8.4 deals with the concept of birthday attacks, which were briefly mentioned at the beginning of chapter 7 when introducing discrete logarithms and are also mentioned in chapter 9 (I did some perusing). At first glance birthday attacks seem like a counter-intuitive notion, but the probability behind them is remarkably simple: it just multiplies the probability of something not happening (not sharing a birthday with someone else) over and over. The notion becomes much less counter-intuitive when it is presented as the probability that any two elements of a set of n can match, rather than the probability that some element matches one particular element. The latter is the view that is probably assumed intuitively, but when described appropriately the idea is easy to comprehend.
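A quick numeric check of the birthday paradox, as a sketch: the probability that n people all have distinct birthdays is the product of (365 - i)/365 for i = 0..n-1, so the chance of at least one match is one minus that product.

def prob_shared_birthday(n, days=365):
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (days - i) / days
    return 1 - p_distinct

for n in (10, 23, 50):
    print(n, round(prob_shared_birthday(n), 3))
# 23 people already give a better-than-even chance of a shared birthday (about 0.507)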

The birthday attack's relevance to cryptology is not immediately apparent, but it can be used as a way of attacking discrete logarithms (by probabilistically checking for matches between two groups) and, thanks to the perusing, it is also mentioned with regard to digital signatures. Unfortunately for the enthused, the baby-step giant-step algorithm is superior to a birthday attack for all practical purposes regarding discrete logarithms, but as will presumably be seen, the birthday attack does have other uses, including an interesting "did you know" amongst friends.
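For comparison, here is a minimal baby-step giant-step sketch for the discrete log g^x = h (mod p); the small prime and generator are toy values chosen only for illustration.

from math import isqrt

def bsgs(g, h, p):
    # Solve g^x = h (mod p) for x, assuming p is prime; returns None if no solution
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j for j < m
    factor = pow(g, p - 1 - m, p)                # g^(-m) mod p, via Fermat's little theorem
    gamma = h % p
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * factor) % p
    return None

print(bsgs(2, 9, 11))   # 2^6 = 64 = 9 (mod 11), so this prints 6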

Tuesday, May 13, 2008

Assignment 19

Sections 8.1 and 8.2 introduced the concept of hash functions, which luckily I'd already encountered in PIC 10a, so the idea was familiar. The cryptographic applications of hash functions rest on three principles, the third of which, that it is hard to find two messages producing the same hash output, is by far the hardest to verify, as seen in the example in section 8.2, where it was scarcely done. The most difficult part of these sections was visualizing a function that reduces a message of any length to a hash output of fixed length while still giving meaningful information. The example of a digital signature was the application that stuck out to me the most, but as far as cryptography is concerned, it seems infeasible to take a message m, send it to h(m) and then get m back out (which is the point), so hash functions on their own do not seem usable as a cryptosystem.
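A small illustration of the fixed-length property, using SHA-256 simply as a convenient stand-in for the hash functions in the chapter: messages of any length map to a 256-bit digest.

import hashlib

for msg in (b"x", b"a much longer message " * 100):
    digest = hashlib.sha256(msg).hexdigest()
    print(len(msg), "bytes ->", len(digest) * 4, "bits:", digest[:16], "...")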

As previously mentioned, one of the prominent uses of hash functions for cryptographic purposes is in the form of a digital signature. If the third principle is indeed satisfied, the hash function works for signature verification and error checking, both of which are paramount in cryptosystems since Eve can presumably tamper with a message or possibly create her own.

Sunday, May 11, 2008

Assignment 18

Sections 7.3-7.5 elaborated on applications of discrete logarithms, just as the later sections of chapter six continued to elaborate on different aspects of the RSA algorithm. The key to all three sections was the difficulty of computing discrete logarithms, just as the fundamental part of the RSA applications was the difficulty of factoring a product of large primes. What struck me as interesting in this chapter was that, just as the RSA material boiled down to factoring, the computational Diffie-Hellman problem, the decision Diffie-Hellman problem and eventually the ElGamal cryptosystem could all fundamentally be boiled down to discrete logarithms, yet were different applications of that idea. The RSA algorithm, on the other hand, was much more straightforward in its application, and the different approaches weren't different cryptosystems but rather different ways of establishing the compositeness of a number (non-primality). Probably the most difficult part of this reading, which was largely conceptual, was trying to find differences in security and application between the Diffie-Hellman and ElGamal cryptosystems. I also had to reread the section on bit commitment to convince myself that even if Eve had access to all of the communication between Alice and Bob, she would still have to compute discrete logs to find the key K.
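A toy Diffie-Hellman exchange, as a sketch with a deliberately tiny prime (far too small for real use): Eve sees only g, p, g^a and g^b, and recovering the shared key from those is a discrete log problem.

import secrets

p, g = 2087, 5                      # toy public parameters
a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent
A = pow(g, a, p)                    # Alice sends A
B = pow(g, b, p)                    # Bob sends B
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
assert key_alice == key_bob         # both arrive at g^(ab) mod p
print(key_alice)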

These sections were important in a broad sense largely because they illustrated that the RSA system is not the only way to build a public-key cryptosystem out of a one-way function, and that there are potentially many others. It's very possible that once technology, whether quantum computers or otherwise, can factor products of large primes or compute discrete logarithms modulo large primes, other one-way functions for public-key cryptosystems will be developed. On a side note, by far my favorite part of this reading was the repeated admission that "this is not yet known" or "has not been developed" to describe the feasibility of computing discrete logarithms, basically implying that mathematics is still hard at work and can give us a "busy" signal every once in a while.

Tuesday, May 6, 2008

Assignment 16

Sections 6.6 and 6.7 wrapped up the chapter on RSA cryptosystems (which, much to my dismay, didn't lead to any breakable ciphertexts for homework assignments) by illustrating how two countries could be sure that each was basically being honest in its information transfer to the other, and then by describing, in general, the concept of a public-key cryptosystem. These two sections were more conceptual than mathematical, but it was difficult to imagine examples of trapdoor one-way functions other than the one already given by RSA. I also didn't understand why section 6.7 didn't come as section 6.1, but that's irrelevant.

The most significant part of this reading was probably the illustration of how a signature is produced in an RSA cryptosystem, something I had been wondering about since public-key cryptosystems were introduced in the first week of class. I still don't completely understand the security of it, namely why Eve could not replicate what country B does, but the illustration of the signature is a critical step in the public-key cryptosystem.
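A toy sketch of the idea with tiny made-up primes (not the book's exact protocol): signing uses the private exponent d, and anyone can check the signature with the public exponent e.

p, q = 61, 53
n = p * q                 # 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17
d = pow(e, -1, phi)       # private exponent, e*d = 1 (mod phi); Python 3.8+

m = 65                    # "message", already reduced mod n
signature = pow(m, d, n)  # sign with the private key
assert pow(signature, e, n) == m   # anyone can verify with the public key
print(signature)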

Sunday, May 4, 2008

Assignment 15

Section 6.5 was extremely short but gave an interesting history of attempts to factor RSA moduli. The theory behind the algorithm used to factor n in the section was not proven, but it was interesting how 600 people used 1600 computers to gradually factor n over 7 months.

Reading the supplement on Wikipedia was also enlightening because it illustrated how a lot of progress in math is accomplished: a challenge or problem is posed and people attempt to solve or contribute to the solution. Unfortunately in this case, the RSA factorization challenge appears to have been abandoned, but hopefully other methods will be used.

Thursday, May 1, 2008

Assignment 14

Section 6.4 finally reached a topic that has been repeatedly brought up in class throughout the quarter: factoring numbers into primes. The methods in this section illustrate some of the attempts to find factorization techniques faster than the extremely slow method of dividing a number n by all primes up to √n. These methods include Fermat's method, an older and slower approach (this part of the reading was initially difficult because it was ambiguous whether the B! appearing in one of the methods represented a factorial or some other variable; even accepting that it is a factorial, the reasoning behind it is not adequately proven in the book), factoring using the ideas of section 6.3 including the Miller-Rabin test, and finally the universal exponent factorization method.
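A minimal sketch of Fermat's idea: look for x with x^2 - n a perfect square y^2, so that n = (x - y)(x + y). It is only quick when the two factors are close together; the example number is made up.

from math import isqrt

def fermat_factor(n):
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:              # found x^2 - n = y^2
            return x - y, x + y
        x += 1

print(fermat_factor(5959))   # (59, 101)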

The most interesting piece of information, in my mind, was that throughout the reading there were phrases such as "sometimes this allows us to factor n" and "if _____ are found by some reasonable method then there is a chance this will work." These phrases represent the fundamental problem faced by cryptographers and number theorists alike: factoring is slow and difficult, and no one has yet come up with a solution to the problem. Perhaps quantum computing will provide an answer...

Tuesday, April 29, 2008

Assignment 13? 04/29/08

Section 12.2 on the Enigma discussed perhaps the most famous application of a cryptosystem in popular knowledge and literature. Operationally the device itself is fairly simple: it is a mechanical device in which each rotor turns depending on the ones before it. The remarkable security of the Enigma is similar to that of the RSA system now, in that brute force was too difficult to crack it during WWII. Conceptualizing how to crack it was perhaps the only difficult part of the reading, but thanks to previous experience studying Alan Turing and various cryptography texts, I already knew that the Enigma was broken by a known-plaintext attack in combination with the Turing bombes.

Perhaps the most telling aspect of this reading is that what was once secure against brute force is now easily broken by more sophisticated computers. The RSA system may one day be considered as simple as the Enigma is now. Another aspect of the Enigma story was Turing's brilliant use of known plaintext to put his massive bombe machines to maximum effect and greatly reduce the problem space.

Sunday, April 27, 2008

Assignment 12

Section 6.3 contained a few interesting theorems for testing whether or not a number is prime. The basic principle and Fermat's primality test were fairly simple, and it was easy (given the previous reading in chapter three) to see how each test determines the compositeness of a number (that it is composed of multiple prime factors) even without finding those prime factors. The Miller-Rabin primality test, however, was thoroughly confusing and, even after multiple reads, I could not grasp why it works or doesn't work for determining the compositeness of a number, other than that it is based on the basic principle and Fermat's primality test. Not only is the test long, but a proof is not shown, which may mean that the number theory behind it is not necessary for the course and that it should just be accepted as sound. Similarly, the Solovay-Strassen primality test was not formally proved but is considerably easier to grasp.
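A sketch of Miller-Rabin as I understand it: write n - 1 = 2^s * d with d odd, then check whether a random base a behaves the way it must for a prime; a failure proves n composite, while passes only make primality likely.

import random

def miller_rabin(n, rounds=20):
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    s, d = 0, n - 1
    while d % 2 == 0:                 # n - 1 = 2^s * d with d odd
        s, d = s + 1, d // 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a is a witness that n is composite
    return True                       # "probably prime"

print(miller_rabin(561), miller_rabin(104729))   # should print False (Carmichael number) True (a prime)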

This section underlines the importance of primes in cryptology and, more generally, in number theory. When I was first introduced to the fundamental theorem of arithmetic, that every integer greater than one is a product of primes in essentially one way, it struck me as novel that such an apparently abstract fact could be so "fundamental." After the little number theory we've learned and its extensive applications in cryptography, prime numbers are arguably the most important building blocks of the integers. What this section also indicates is the difficulty of determining whether or not a number is actually prime without trying all possible factors. All of the tests given in the section are sufficient for showing that a number is composite, that is, not prime, but none were able to prove conclusively that a number is prime. This fact closely ties probability to number theory and can only lead to even more interesting results.

Thursday, April 24, 2008

Assignment 11

The most difficult part of assignment 11 was definitely the Jacobi and Legendre symbols. At first it seemed as though the Legendre symbol was just a continuation of the square-rooting section in modular arithmetic, but it soon developed into a new concept without a clear-cut application apparent. The Jacobi symbol is more of a generalization of the Legendre symbol and is actually important because it relates to the law of quadratic reciprocity, which not only ties back to section 3.9 on roots of numbers but also might develop cryptographic applications (or at least we hope so, given its place in the book).
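A sketch of computing the Jacobi symbol (a/n) for odd n > 0, using the rule for factoring out 2s and quadratic reciprocity; the example values are my own.

def jacobi(a, n):
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:             # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                    # reciprocity: swap, adjusting the sign
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

print(jacobi(5, 21))   # 1, since (5/21) = (5/3)(5/7) = (-1)(-1)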

The most significant gem from the reading was actually not the material itself but a simple appreciation of the Jacobi symbol as a more general form of the Legendre symbol, much in the same vein as Euler's theorem was a generalization of Fermat's theorem. It's commonplace in mathematics that the "easy" specific cases are later generalized for universal application, and it makes you wonder whether or not some "specific" cases for factoring numbers might someday generate a universal factoring algorithm that could make systems such as RSA obsolete.

Sunday, April 20, 2008

Assignment 10

The difficult part of this reading assignment, section 6.2, was hard to pin down, because I had trouble understanding all of the attacks on the RSA system. The first two attacks made sense even though I did not completely understand the mathematics behind them: if a significant portion of either p or d is known, the cryptosystem can be broken more easily. This is best understood as a significant reduction of the problem space for a brute-force attack, and since RSA's security relies on the infeasibility of such an attack, these attacks make sense. The low-exponent and short-plaintext attacks also amount to reductions in the problem space and so make some sense as well, but the timing attack was thoroughly confusing.
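One simple instance of the low-exponent weakness (not necessarily the exact attack in the book), as a sketch with made-up numbers: with e = 3 and a short plaintext, m^3 can be smaller than n, so the "mod n" never kicks in and Eve recovers m with an ordinary integer cube root.

def icbrt(x):
    # integer cube root by binary search
    lo, hi = 0, 1 << ((x.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < x:
            lo = mid + 1
        else:
            hi = mid
    return lo

n = 10 ** 30 + 57        # placeholder modulus, assumed much larger than m^3
e = 3
m = 123456               # short plaintext
c = pow(m, e, n)         # here m^3 < n, so c is literally m^3
print(icbrt(c) == m)     # True: no factoring or private key needed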

Despite being the cause of the most confusion for me during the reading, the timing attack on the RSA algorithm is possibly the most significant attack presented, because it shows that cryptanalysis must use everything at its disposal to find a way to decrypt messages. Future attacks on cryptosystems, for instance, may rely upon flaws in technology or in the human implementation of mathematically secure encryption methods. It also raises the question of whether or not a one-time pad is actually secure in practice despite its theoretical unbreakability.

Sunday, April 13, 2008

The most difficult part of sections 3.4-3.6 for me was definitely understanding Euler's theorem, primarily because the book progressed from one step of the proof to the next without what I felt was a clear enough explanation, or all of the steps needed to make the notation and the reasoning behind the next step clear. The large "pi" symbol, for instance, has always puzzled me, and I was hoping for an explanation of its meaning to no avail. The overall theory seems fairly easy to understand: Fermat's theorem states that a number not divisible by a prime p, raised to the power p - 1, is congruent to 1 mod p. But the proofs of that and then of Euler's theorem were not clearly understood and did not illuminate the foundations of either. That being said, this is a critical component of public-key cryptography and I'm hoping it will become clearer soon.
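A quick numeric sanity check of Euler's theorem, a^phi(n) = 1 (mod n) whenever gcd(a, n) = 1, using a small made-up n; phi is computed by brute force here just for the check.

from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n = 21                      # phi(21) = 12
for a in range(2, n):
    if gcd(a, n) == 1:
        assert pow(a, phi(n), n) == 1
print("Euler's theorem holds for every unit mod", n)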

The overall theme of these three sections seemed to be finding shortcuts for complicated modular operations. Section 3.4, for instance, dealt with solving systems of congruences, which seems fairly obvious at first glance but has interesting applications. Section 3.5 was as brief as it was simple, but profoundly useful in reducing the number of operations needed to compute large exponentiations in modular arithmetic, which is especially important when designing computer software for these tasks. Section 3.6 built upon these ideas to present the foundations for public-key cryptosystems, which are huge components of modern cryptography.
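A sketch of the repeated-squaring shortcut from section 3.5: compute a^e mod n by squaring and multiplying according to the bits of e, instead of doing e - 1 multiplications. (Python's built-in pow(a, e, n) does essentially the same thing.)

def power_mod(a, e, n):
    result = 1
    base = a % n
    while e > 0:
        if e & 1:                    # low bit set: multiply the result in
            result = (result * base) % n
        base = (base * base) % n     # square for the next bit
        e >>= 1
    return result

print(power_mod(3, 218, 1000), pow(3, 218, 1000))   # the two results agree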

Thursday, April 10, 2008

Assignment 6

The reading for assignment 6, sections 2.9-2.11, brought up interesting applications of randomness and what it is to be truly random. Contrary to what may be the popular belief, most "random" generation is actually pseudo-random, based on some algorithm created by people. True randomness, the book explains, can be generated by flipping a coin millions of times or by tracking thermal noise in semiconductors, but these approaches are not always practical or even feasible. The most difficult part of this reading, then, came when the book mentioned using one-way functions to generate random numbers. The explanation was not clear and resulted in some confusion as to how the numbers are actually selected. This method is extremely important for generating cryptographic keys, and a thorough understanding of how these numbers can be produced is necessary not only for understanding the cryptography, but also for formulating possible attacks against it.
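One well-known generator of this one-way flavor (possibly the one the book has in mind, though I am not certain) is Blum-Blum-Shub: square the seed repeatedly mod n = p*q and keep the low bit each time. A toy sketch, with primes far too small for real security:

p, q = 11, 23                 # both congruent to 3 mod 4
n = p * q                     # 253
x = 3                         # seed, coprime to n

bits = []
for _ in range(16):
    x = (x * x) % n           # one squaring step
    bits.append(x & 1)        # output the least significant bit
print(bits)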

The most astounding part of this reading was the revelation of the one-time pad and how it yields a theoretically unbreakable cipher. The existence of an unbreakable cipher is remarkable in and of itself, as is the fact that it has been known for some time. Its security, of course, depends upon proper use and precautions, but the mere existence of a theoretically unbreakable cipher seems to stand apart from the constant struggle between cryptographers and cryptanalysts and is intriguing in its own right. The discussion in section 2.11 of speed versus efficiency was also an interesting side note, but aside from the use of a binary number system, LFSR sequences struck me as less impressive than the one-time pad.
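A tiny one-time pad demonstration: a truly random key as long as the message, used only once, gives perfect secrecy, and XOR with the same key decrypts.

import secrets

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))                   # random key, same length as the message
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message
print(ciphertext.hex())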

Monday, April 7, 2008

Assignment 5

Reading assignment 5 was fairly straightforward and brief with regard to its sections, 2.5-2.8, but some confusion was created by the introduction of block ciphers. Specifically, while the concept of blocks of text being encrypted together is straightforward, deciding which ciphers are block ciphers and which encrypt one character at a time requires some thought. For instance, the Enigma cipher encrypts one letter at a time, but if one letter is changed, the sequence of keys that follows may also change. After a while, however, these distinctions have become clearer, and they bear thinking about each time a new cryptosystem is encountered.

Upon reflection, perhaps the most important part of this reading was the simplest part: ciphertext does not have to be just letters or numbers, but can be anything as long as it is related logically to the plaintext and can be decrypted according to some procedure. This was best evidenced by the Sherlock Holmes stick figures, which contained messages according to a key that assigned letters to arbitrary figures. Such ciphers have the advantage of possibly being mistaken for non-ciphertext and thus add to the security of the information they are hiding. Because of Kerckhoffs' principle, however, this aspect of cryptography receives little attention.

Sunday, April 6, 2008

Assignment 4

The most difficult part of sections 2.3-2.4 for me was definitely understanding the reasoning behind finding the key length of the Vigenère cipher. The book indicates that to find the key length, it is necessary to try various shifts of the ciphertext against itself and then find the shifts with the greatest number of "coincidences," positions where identical ciphertext letters line up. This was confusing at first but was eventually explained using dot products later in the section. Despite this explanation, however, I'm still not entirely sure of the reasoning behind its usefulness and will hope for clarification in lecture.
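A sketch of the coincidence-counting idea with made-up text and key: encrypt some English text with a key of length 3, slide the ciphertext against itself, and count positions where letters agree. Over long enough text, shifts that are multiples of the key length tend to give noticeably more coincidences.

def vigenere_encrypt(plaintext, key):
    return "".join(chr((ord(p) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
                   for i, p in enumerate(plaintext))

def coincidences(text, shift):
    return sum(1 for a, b in zip(text, text[shift:]) if a == b)

plaintext = ("WEHOLDTHESETRUTHSTOBESELFEVIDENTTHATALLMENARECREATEDEQUAL"
             "THATTHEYAREENDOWEDBYTHEIRCREATORWITHCERTAINUNALIENABLERIGHTS")
ciphertext = vigenere_encrypt(plaintext, "KEY")     # key length 3
for shift in range(1, 7):
    print(shift, coincidences(ciphertext, shift))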

The farthest-reaching and probably most relevant part of this reading was demonstrated in both the Vigenère cipher and substitution cipher sections; namely, that frequency analysis is a powerful device for cryptanalysis. The vast majority of ciphers prior to the Enigma were solvable in this fashion. This also explains why linguists and crossword enthusiasts were so successful at code-breaking.
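A tiny frequency-analysis helper: count the letters in a ciphertext and compare against typical English frequencies; in a long enough shift-cipher ciphertext the most common letter usually lines up with plaintext E. The sample string is my own.

from collections import Counter

def letter_frequencies(text):
    letters = [c for c in text.upper() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {c: counts[c] / total for c in sorted(counts)}

sample = "WKLV LV D VKLIW FLSKHU"   # "THIS IS A SHIFT CIPHER" shifted by 3
print(letter_frequencies(sample))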

Thursday, April 3, 2008

Assignment 3:
1: Difficult: The most difficult part of the reading for sections 3.3 and 2.1-2.2 was present across all three sections: the multiplicative inverse for congruent values. Due to the increased reliance upon mathematics in cryptography, it has become commonplace to denote the letters of the alphabet numerically, giving values from 0-25 to each position. Thus, to change plaintext values to ciphertext values it is necessary to have some knowledge of modular arithmetic to operate on the symbols. This was fairly straightforward and intuitive, but the operations using the multiplicative inverse, and the conditions for when it can be used, required multiple reads to get the hang of (a small sketch after part 2 below ties this back to the ciphers). The inability to divide when the divisor and the modulus (the size of the ring) do not have a gcd equal to one was also an interesting result of modular arithmetic that will still probably cause programming errors in the future. Overall, the modular arithmetic was intuitive and fairly easily understood, but these two issues, the multiplicative inverse and the fact that division is not always possible, were the hardest to grasp.

2. Overall the sections, including the necessary information on modular arithmetic, were a gentle and easily understood introduction to some of the simplest ciphers in use even from ancient times, such as the Caesar cipher. These ciphers, while extremely simple in their mathematical application, were a good starting-off point for understanding the overall aims of cryptography, such as the treatment of plaintext and ciphertext. Unlike the original example in class, unfortunately, the book presents all ciphertext without spaces, which makes even the simplest cryptanalysis more realistic.
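A small sketch tying the two parts together: an affine cipher E(x) = (a*x + b) mod 26, where decryption needs the multiplicative inverse of a mod 26, which exists only when gcd(a, 26) = 1 (the restriction from part 1). With a = 1 this reduces to a simple shift (Caesar) cipher. The key values are made up.

def mod_inverse(a, n):
    # find a^-1 mod n by brute force; fails when gcd(a, n) != 1
    for x in range(1, n):
        if (a * x) % n == 1:
            return x
    raise ValueError(f"{a} has no inverse mod {n}")

def affine_encrypt(text, a, b):
    return "".join(chr((a * (ord(c) - 65) + b) % 26 + 65)
                   for c in text.upper() if c.isalpha())

def affine_decrypt(cipher, a, b):
    a_inv = mod_inverse(a, 26)
    return "".join(chr((a_inv * (ord(c) - 65 - b)) % 26 + 65) for c in cipher)

c = affine_encrypt("attack at dawn", 9, 2)
print(c, affine_decrypt(c, 9, 2))   # ciphertext, then ATTACKATDAWN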

Tuesday, April 1, 2008

Assignment 2

1. The most difficult part of the reading for sections 1.1-1.2 and 3.1-3.2 was definitely the two sections on number theory presented in chapter 3. Each section began rather innocuously with a seemingly easy introduction to the material. Section 3.1, for instance, began by introducing what a divisor is and how division can be represented, but quickly developed into the Euclidean algorithm, prime numbers and the greatest common divisor (see the short trace after part 2 below). The theorems themselves were fairly straightforward, but the proofs often required multiple reads. After a few reads, the information was fairly straightforward and the only barrier to easy understanding remained the notation. The factorization q1^b1 * q2^b2 * ... * qt^bt was written as q1 q1 ... q1 q2 ... qt, for instance, without explanation.

2. The introductory chapter, chapter 1, was the most interesting and enjoyable, primarily because it not only presented an overview of the basic goals and methods of cryptography and cryptanalysis, but also mentioned frequency analysis, the one-time pad and quantum cryptography, among other topics I had previously come into contact with when reading Simon Singh's "The Code Book." The introductory text was also fairly informative in explaining the mathematical limits and accomplishments feasible by computers and the resulting impact on cryptanalysis, which presented a realistic view of these processes. I also enjoyed the reference to Eve's possible goals as being dependent upon the level of evil she possesses.
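Going back to part 1, here is a quick trace of the Euclidean algorithm from section 3.1: repeatedly replace (a, b) by (b, a mod b) until the remainder is 0; the last nonzero remainder is gcd(a, b). The numbers are just an example.

def gcd_trace(a, b):
    while b != 0:
        print(f"{a} = {a // b} * {b} + {a % b}")
        a, b = b, a % b
    return a

print("gcd =", gcd_trace(1180, 482))   # remainders 216, 50, 16, 2, 0, so gcd = 2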

Assignment 1

Introduction:
-I am a third year Cognitive Science student with a math minor.
-I have taken linear algebra, both lower and upper division, lower division differential equations and the history of mathematics.
-I'm taking this class not only because I am a math minor, but because I've been interested in cryptography since the beginning of high school, after reading "The Code Book" by Simon Singh. I've read the book twice, and this class was one of the motivating factors for actually becoming a math minor.
-I do not have any experience with Maple, Mathematica, Matlab or Sage aside from listening to my flatmates complain about their various engineering assignments. I have, however, taken two PIC classes in C++ and an introduction to internet programming class involving HTML/PHP. I feel that I should be able to pick up one of the aforementioned programs fairly quickly to handle the programming for the assignments.
-My worst ever math teacher was a teacher in high school for math analysis (precalculus) who maintained a terrible lack of enthusiasm for teaching and required neatness in printed notes more than understanding of material. My best math teacher was Mr. Matlack, my calculus teacher for my junior year of high school; he managed to demonstrate any concept beginning from the most fundamental aspects.
-I have dual citizenship with the United Kingdom and am an avid tennis player and swimmer.