Tuesday, April 29, 2008

Assignment 13, 04/29/08

Section 12.2 on the Enigma discussed perhaps the most famous application of cryptosystems in popular knowledge and literature. The device itself is operationally fairly simple: it is a mechanical device in which each cog turns depending on the ones before it. The security of the Enigma resembled that of the RSA system today in that a brute-force attack was infeasible with the resources available during WWII. The conceptualization of how to crack it was perhaps the only difficult part of the reading, but thanks to previous experience studying Alan Turing and various cryptography texts, it was clear how the Enigma was broken: a known-plaintext attack in combination with the Turing bombes.

Perhaps the most telling aspect of this reading is that what was once secure against brute force is now easily broken by more sophisticated computers. The RSA system may one day be considered as simple as the Enigma is now. Another notable aspect of the Enigma story was Turing's brilliant use of known plaintext to apply his massive bombe machines to maximum effect and greatly reduce the search space.

Sunday, April 27, 2008

Assignment 12

Section 6.3 contained a few interesting theorems for testing whether or not a number is prime. The basic principle and Fermat's primality test were fairly simple, and it was easy (given previous reading in chapter three) to see how each test determines the compositeness of a number (that it is composed of multiple prime factors) even without finding the prime factors themselves. The Miller-Rabin primality test, however, was thoroughly confusing, and even after multiple reads I could not grasp why it works or doesn't work for determining compositeness, other than that it builds on the basic principle and Fermat's primality test. Not only is the test long, but no proof is shown, which may mean that the number theory behind it is not necessary for the course and that it should simply be accepted as sound. Similarly, the Solovay-Strassen primality test was not formally proved, but it is considerably easier to grasp.
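To make the mechanics concrete for myself, here is a minimal sketch of the Miller-Rabin test in Python; the function names and the number of rounds are my own choices, not the book's:

    import random

    def miller_rabin(n, rounds=20):
        """Probabilistic compositeness test: returns False if n is
        definitely composite, True if n is probably prime."""
        if n < 2:
            return False
        if n in (2, 3):
            return True
        if n % 2 == 0:
            return False
        # Write n - 1 = 2^s * d with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # this a witnesses that n is composite
        return True  # no witness found: probably prime

Writing n - 1 = 2^s * d and squaring repeatedly was exactly the part I found confusing in prose, but it turns out to be only a few lines of code.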

This section underlines the importance of primes in cryptology and, more generally, in number theory. When I was first introduced to the fundamental theorem of arithmetic, that every number is composed of one or more primes, it struck me as novel that such an apparently abstract fact could be so "fundamental." After the little number theory we've learned and its extensive applications in cryptography, prime numbers are arguably the most important feature of the integers. What this section also indicates is the difficulty of determining whether or not a number is actually prime without trying all possible factors. All the tests given in the section are sufficient for showing that a number is composite, that is, not prime, but none can prove conclusively that a number is prime. This fact closely ties probability to number theory and promises even more interesting results.

Thursday, April 24, 2008

Assignment 11

The most difficult part of assignment 11 was definitely the Jacobi and Legendre symbols. At first it seemed as though the Legendre symbol was just a continuation of the square-rooting section in modular arithmetic, but it soon developed into a new theorem without any clear-cut application apparent. The Jacobi symbol is more of a continuation of the Legendre symbol and is actually important because it relates to the law of quadratic reciprocity, which not only connects to section 3.9 on roots of numbers but also might develop cryptographic applications (or at least we hope so, given its place in the book).
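For my own reference, here is a sketch of how the Jacobi symbol can be computed using the reciprocity rules from the section; the algorithm is standard, though the code is my own:

    def jacobi(a, n):
        """Compute the Jacobi symbol (a/n) for odd n > 0 via quadratic
        reciprocity; returns -1, 0, or 1."""
        assert n > 0 and n % 2 == 1
        a %= n
        result = 1
        while a != 0:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):  # (2/n) = -1 when n ≡ 3, 5 (mod 8)
                    result = -result
            a, n = n, a  # quadratic reciprocity: flip sign if both ≡ 3 (mod 4)
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    print(jacobi(5, 21))  # 1, since (5/3)(5/7) = (-1)(-1)

The nice part, as I understand it, is that unlike the Legendre symbol this needs no factoring of n at all.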

The most significant gem from the reading was actually not the material itself but a simple appreciation of the Jacobi symbol as a more general form of the Legendre symbol, much in the same vein as Euler's theorem is a generalization of Fermat's theorem. It is commonplace in mathematics for the "easy" specific cases to be generalized for universal application, and it makes you wonder whether some "specific" cases for factoring numbers might one day generate a universal factoring algorithm that would make systems such as RSA obsolete.

Sunday, April 20, 2008

Assignment 10

The most difficult part of this reading assignment, section 6.2, was hard to pin down, because I had trouble understanding all of the attacks on the RSA system. The first two attacks, while I did not completely understand the mathematics behind them, made sense: if a significant portion of either p or d is known, the cryptosystem can be broken more easily. This is best understood as a significant reduction of the problem space for a brute-force attack, and since RSA's security relies on the infeasibility of such an attack, these attacks make sense. The low-exponent and short-plaintext attacks also reduce the problem space and so make some sense as well, but the timing attack was thoroughly confusing.
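The low-exponent idea, at least, I could convince myself of with a toy example: if e = 3 and the plaintext m is so small that m^3 < n, the modular reduction never happens, so an ordinary integer cube root recovers m without factoring n. A hypothetical sketch (the helper and the toy numbers are mine, not the book's):

    def icbrt(x):
        """Integer cube root by binary search."""
        lo, hi = 0, 1 << ((x.bit_length() + 2) // 3 + 1)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid ** 3 <= x:
                lo = mid
            else:
                hi = mid - 1
        return lo

    # Toy parameters (not secure): e = 3 and a message small enough
    # that m**3 < n, so encryption never wraps around the modulus.
    n = 61 * 53   # n = 3233, for illustration only
    e = 3
    m = 14
    c = pow(m, e, n)   # c == m**3 == 2744 exactly, since 2744 < 3233
    print(icbrt(c))    # recovers 14 without factoring n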

Despite causing me the most confusion during the reading, the timing attack on the RSA algorithm is possibly the most significant attack presented, because it shows that cryptanalysis must use everything at its disposal to find a way to decrypt messages. Future attacks on cryptosystems, for instance, may rely on flaws in the technology or in the human implementation of mathematically secure encryption methods. It also raises the question of whether a one-time pad is actually secure in practice despite its theoretical unbreakability.

Sunday, April 13, 2008

The most difficult part of sections 3.4-3.6 for me was definitely understanding Euler's theorem, primarily because the book progressed from one step of the proof to the next without what I felt was clear enough explanation, or without all of the steps needed to make the notation and the following steps clear. The large "pi" symbol, for instance, has always puzzled me, and I was hoping for an explanation of its meaning to no avail (I have since gathered that it denotes a product over an indexed set, just as the large sigma denotes a sum). The overall theory seems fairly easy to understand: Fermat's theorem states that a^(p-1) ≡ 1 (mod p) for a prime p and any a not divisible by p. But the proofs of that and then of Euler's theorem were not clearly understood and did not illuminate the foundations of either. That being said, this is a critical component of public-key cryptography and I'm hoping it will become clearer soon.
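A quick numerical check helped me believe the statements even without following the proofs; the naive totient function below is my own helper, fine for small numbers:

    from math import gcd

    def phi(n):
        """Euler's totient: count of 1 <= k <= n with gcd(k, n) == 1
        (naive counting, fine for small n)."""
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    n, a = 15, 7                 # gcd(7, 15) == 1
    print(phi(n))                # 8
    print(pow(a, phi(n), n))     # 1, as Euler's theorem predicts
    print(pow(7, 10, 11))        # 1, Fermat's special case with p = 11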

The overall theme of these three sections seemed to be finding shortcuts for complicated modular operations. Section 3.4, for instance, developed systems of congruences in modular arithmetic which seem fairly obvious at first glance but have interesting applications. Section 3.5 was as brief as it was simple, yet profoundly useful in reducing the number of operations needed to compute large modular exponentiations, which is especially important when designing computer software for these tasks. Section 3.6 built upon these ideas to present the foundations of public-key cryptosystems, which are huge components of modern cryptography.
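The exponentiation shortcut from section 3.5 is short enough to sketch from memory (the variable names are mine; Python's built-in pow does the same thing):

    def power_mod(base, exp, mod):
        """Modular exponentiation by repeated squaring: about log2(exp)
        squarings instead of exp - 1 multiplications."""
        result = 1
        base %= mod
        while exp > 0:
            if exp & 1:                # current binary digit of exp is 1
                result = result * base % mod
            base = base * base % mod   # square for the next binary digit
            exp >>= 1
        return result

    print(power_mod(7, 560, 561))  # matches Python's built-in pow(7, 560, 561)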

Thursday, April 10, 2008

Assignment 6

The reading for assignment 6, sections 2.9-2.11, brought up interesting applications of randomness and what it means to be truly random. Contrary to what may be popular belief, most "random" generation is actually pseudorandom, based on some algorithm created by people. True randomness, the book explains, can be generated by flipping a coin millions of times or by tracking thermal noise in semiconductors, but these approaches are not always practical or even feasible. The most difficult part of this reading, then, came when the book mentioned using one-way functions to generate random numbers. The explanation was not clear and left some confusion as to how the numbers are actually selected. This method is extremely important for generating cryptographic keys, and a thorough understanding of how these numbers can be generated is necessary not only for understanding the cryptography but also for formulating possible attacks against it.
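As best I can tell, the idea is to iterate a one-way function on a secret seed and output part of each state; here is a minimal sketch of that idea, using SHA-256 as the one-way function (my choice, not the book's), purely as an illustration rather than a vetted generator:

    import hashlib

    def one_way_prng(seed: bytes, num_blocks: int) -> bytes:
        """Sketch: stretch a secret seed into pseudorandom bytes by
        iterating a one-way function (SHA-256 here) on an internal state.
        Recovering the seed from the output would require inverting the hash."""
        state = seed
        out = []
        for counter in range(num_blocks):
            state = hashlib.sha256(state + counter.to_bytes(4, "big")).digest()
            out.append(state)
        return b"".join(out)

    print(one_way_prng(b"my secret seed", 2).hex())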

The most astounding part of this reading was the revelation of the one-time pad and how it yields a theoretically unbreakable cipher. The existence of an unbreakable cipher is remarkable in and of itself, as is the fact that it has been known for some time. Its security, of course, depends upon proper use and precautions, but the mere existence of a theoretically unbreakable cipher seems to stand apart from the constant struggle between cryptographers and cryptanalysts and is intriguing in its own right. The concept introduced in section 2.11 regarding speed vs. efficiency was also an interesting side note, but aside from the use of the binary number system, LFSR sequences struck me as less impressive than the one-time pad.
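The whole scheme fits in a few lines, which makes its perfect secrecy all the more striking; a sketch, with XOR over bytes as my concrete choice of the combining operation:

    import secrets

    def otp(message: bytes, key: bytes) -> bytes:
        """One-time pad: XOR each message byte with a key byte. Perfectly
        secret only if the key is truly random, at least as long as the
        message, and never reused."""
        assert len(key) >= len(message)
        return bytes(m ^ k for m, k in zip(message, key))

    message = b"ATTACK AT DAWN"
    key = secrets.token_bytes(len(message))  # fresh random key
    ciphertext = otp(message, key)
    print(otp(ciphertext, key))              # XOR again: b'ATTACK AT DAWN'

Decryption is literally the same operation as encryption, which is part of the elegance.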

Monday, April 7, 2008

Assignment 5

Reading assignment 5 was fairly straightforward and brief with regard to its sections, 2.5-2.8, but some confusion was elicited by the introduction of block ciphers. Specifically, while the concept of blocks of text being encrypted together is straightforward, deciding which ciphers are block ciphers and which operate one character at a time requires some thought. For instance, the Enigma cipher encrypts one letter at a time, but if one letter is changed, the sequence of keys may also change. After a while, however, these distinctions have become clearer, and they bear thinking about each time a new cryptosystem is encountered.

Upon reflection, perhaps the most important part of this reading was also the simplest: ciphertext does not have to be just letters or numbers, but can be anything, as long as it is related logically to the plaintext and can be decrypted according to some procedure. This was best evidenced by the Sherlock Holmes dancing-men stick figures, which contained messages according to a key that assigned letters to arbitrary figures. Such ciphers have the advantage of possibly being mistaken for non-ciphertext and thus add to the security of the information they hide. Due to Kerckhoffs' principle, however, this aspect of cryptography receives little attention.

Sunday, April 6, 2008

Assignment 4

The most difficult part of sections 2.3-2.4 for me was understanding the reasoning behind finding the key length of the Vigenère cipher. The book indicates that to find the appropriate key length, it is necessary to try various shifts of the ciphertext against itself and then find the shift with the greatest number of "coincidences," that is, positions where identical ciphertext letters line up. This was confusing at first but was eventually explained using dot products later in the section. Despite this explanation, however, I'm still not entirely sure of the reasoning behind its usefulness and will hope for clarification in lecture.
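The counting itself, at least, is easy to sketch. Below is a hypothetical demo of my own: encrypt some English text with the key "LEMON" and look at coincidence counts; shifts that are multiples of the key length should usually stand out:

    def vigenere(plain: str, key: str) -> str:
        """Vigenère encryption on uppercase A-Z text (demo helper)."""
        return "".join(chr((ord(p) + ord(key[i % len(key)]) - 130) % 26 + 65)
                       for i, p in enumerate(plain))

    def coincidences(text: str, shift: int) -> int:
        """Count positions where the ciphertext agrees with itself
        displaced by `shift` letters."""
        return sum(text[i] == text[i + shift] for i in range(len(text) - shift))

    plain = ("FOURSCOREANDSEVENYEARSAGOOURFATHERSBROUGHTFORTHONTHIS"
             "CONTINENTANEWNATIONCONCEIVEDINLIBERTYANDDEDICATEDTOTHE"
             "PROPOSITIONTHATALLMENARECREATEDEQUAL")
    ct = vigenere(plain, "LEMON")
    for shift in range(1, 11):
        # multiples of the key length (5, 10) should usually score highest
        print(shift, coincidences(ct, shift))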

The farthest-reaching and probably most relevant part of this reading was demonstrated in both the Vigenère cipher and substitution cipher sections: namely, that frequency analysis is a powerful device for cryptanalysis. The vast majority of ciphers prior to the Enigma were solvable in this fashion. This also explains why linguists and crossword enthusiasts were so successful at code-breaking.
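The technique itself is simple enough to sketch; the example ciphertext below is just "THE QUICK BROWN FOX" under a shift of three:

    from collections import Counter

    def letter_frequencies(text: str) -> dict:
        """Relative frequency of each letter A-Z in the text."""
        letters = [c for c in text.upper() if c.isalpha()]
        counts = Counter(letters)
        return {c: counts[c] / len(letters) for c in sorted(counts)}

    # For a shift cipher, the most common ciphertext letter most likely
    # corresponds to E (about 12.7% of English text), T, or A.
    print(letter_frequencies("WKH TXLFN EURZQ IRA"))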

Thursday, April 3, 2008

Assignment 3:
1. Difficult: The most difficult part of the reading for sections 3.3 and 2.1-2.2 was present across all three sections: dealing with the multiplicative inverse of congruent values. Due to the heavy reliance upon mathematics in cryptography, it has become commonplace to denote the letters of the alphabet numerically, with values from 0-25 for each position. Thus, to change plaintext values to ciphertext values it is necessary to have some knowledge of modular arithmetic to operate on the symbols. This was fairly straightforward and intuitive, but the operations using the multiplicative inverse and the conditions for using it required multiple reads to get the hang of (see the sketch just below). The inability to divide when the divisor and the modulus (the size of the ring) do not have a gcd equal to one was also an interesting result of modular arithmetic that will probably still cause programming errors in the future. Overall, the modular arithmetic was intuitive and fairly easily understood, but these two issues, the multiplicative inverse and the fact that the ring is not closed under division, were the hardest to grasp.
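Since I expect to trip over this again, here is a sketch of computing the inverse with the extended Euclidean algorithm (the function names are my own):

    def extended_gcd(a, b):
        """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def mod_inverse(a, n):
        """Multiplicative inverse of a mod n; exists only when gcd(a, n) == 1."""
        g, x, _ = extended_gcd(a % n, n)
        if g != 1:
            raise ValueError(f"{a} has no inverse mod {n}")
        return x % n

    print(mod_inverse(9, 26))   # 3, since 9 * 3 = 27 ≡ 1 (mod 26)
    # mod_inverse(13, 26) would raise: gcd(13, 26) = 13, not 1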

2. Overall, the sections, including the necessary information on modular arithmetic, were a generous and easily understood introduction to some of the simplest ciphers in use even from ancient times, such as the Caesar cipher. These ciphers, while extremely simple in their mathematical application, were a good starting point for understanding the overall aims of cryptography, such as the treatment of plaintext and ciphertext. Unlike the original example in class, unfortunately, the book also presents all ciphertext without spaces, which makes even the simplest cryptanalysis more realistic.

Tuesday, April 1, 2008

Assignment 2

1. The most difficult part of the reading for sections 1.1-1.2 and 3.1-3.2 was definitely the two sections on number theory presented in chapter 3. Each section began rather innocuously with a seemingly easy introduction to the material. Section 3.1, for instance, began by introducing what a divisor is and how division can be represented, but quickly developed into the Euclidean algorithm, prime numbers, and the greatest common divisor (the algorithm is sketched below). The theorems themselves were fairly straightforward, but the proofs often required multiple reads. After a few reads, the information was fairly straightforward, and the only barrier to easy understanding remained the notation. The factorization q1^b1 * q2^b2 * ... * qt^bt, for instance, was written out as q1 q1 ... q1 q2 ... qt, with each prime repeated according to its exponent, without explanation.
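The Euclidean algorithm, at least, turned out to be mercifully short once the proofs were set aside; a sketch with an arbitrary worked example of my own:

    def gcd(a, b):
        """Euclidean algorithm: repeatedly replace (a, b) with
        (b, a mod b) until the remainder is zero."""
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(1180, 482))  # 2, via 1180 = 2*482 + 216, 482 = 2*216 + 50, ...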

2. The introductory chapter, chapter 1, was the most interesting and enjoyable, primarily because it not only presented an overview of the basic goals and methods of cryptography and cryptanalysis, but also mentioned frequency analysis, the one-time pad cipher, and quantum cryptography, among other topics I had previously come in contact with when reading Simon Singh's "The Code Book." The introductory text was also fairly informative in explaining the mathematical limits of, and accomplishments feasible by, computers, and the resulting impact on cryptanalysis, which presented a realistic view of these processes. I also enjoyed the reference to Eve's possible goals as being dependent upon the level of evil she possesses.

Assignment 1

Introduction:
-I am a third year Cognitive Science student with a math minor.
-I have taken linear algebra (both lower and upper division), lower division differential equations, and the history of mathematics.
-I'm taking this class not only because I am a math minor, but because I've been interested in cryptography since the beginning of high school, after reading "The Code Book" by Simon Singh. I've read the book twice, and this class was one of the motivating factors for actually becoming a math minor.
-I do not have any experience with Maple, Mathematica, Matlab, or Sage, aside from listening to my flatmates complain about their various engineering assignments. I have, however, taken two PIC classes in C++ and an introduction to internet programming class involving HTML/PHP. I feel that I should be able to pick up one of the aforementioned programs quickly enough to handle the programming for the assignments.
-My worst ever math teacher was my high school math analysis (precalculus) teacher, who maintained a terrible lack of enthusiasm for teaching and required neatness in printed notes more than understanding of the material. My best math teacher was Mr. Matlack, my calculus teacher during my junior year of high school; he could demonstrate any concept beginning from its most fundamental aspects.
-I have dual citizenship with the United Kingdom and am an avid tennis player and swimmer.