
Quick overview of crypto-related algebra

Overview: In this first lesson I'm going to give you a whirlwind tour of the mathematics you'll need as we go through the course. This is the lesson that has all of the algebra/number theory. By the time we're done we'll have seen a huge set of valuable tools. If you don't understand it all perfectly that's OK; come back and master it at your own pace, and explore the internet for clues as needed.

Part 1: Arithmetic with a modulus and GCD

These are interesting things that have been thought about by humans for thousands of years. The ideas transcend our culture, language, and most of our history. They also happen to be among the most useful algorithms ever designed. Still the gold standard for elegance, speed, and utility, the Euclidean algorithm is by far my favorite algorithm out there.

Quotient Remainder Theorem

The logic of modular arithmetic began with the following idea of divisibility:

Quotient Remainder Theorem: For every integer \(a\) and positive integer \(b\) there exist unique integers \(q\) and \(r\) such that: $$ a = b \cdot q + r, \; 0 \leq r \lt b $$

This is the classic math version of remainders and it's important to note that the remainder \(r\) is always non-negative (it can be 0, but never negative).

Try it out: When \(a = 93\) and \(b = 11\), what are the unique values of \(q\) and \(r\)? (\(r\) is called the remainder when 93 is divided by 11 and \(q\) is the quotient when 93 is divided by 11.) q=8, r=5

Try it out with negatives: What about when \(a = -93\) and \(b = 11\)? What are the unique values of \(q\) and \(r\)? (Recall that \(r\) has to be non-negative.) (The answer is in the HTML source: r is 6, q is -9.)
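If you want to check answers like these with a quick script, here is a small sketch; Python's divmod and % already follow the theorem's convention for a positive modulus, while some other languages hand back a negative remainder for negative inputs.

b = 11
for a in (93, -93):
    q, r = divmod(a, b)          # same as q = a // b, r = a % b
    assert a == b * q + r and 0 <= r < b
    print("a =", a, " q =", q, " r =", r)   # 93 -> q=8, r=5; -93 -> q=-9, r=6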

Thinking about a modulus

In this little part I want to show you several ways to think about equivalence modulo a number. It is more intense than you probably need, and I would encourage you to skim it first, then think about examples, then read it again a little deeper. I want to emphasize that this is my attempt to explain something common in an uncommon way, and if it's just too much to comprehend in one sitting that's OK. There is an escape hatch, which is that "mod" means "remainder". Think of this as a red pill/blue pill moment and not something that will make or break your grade. You DO have to handle the basic arithmetic, you DON'T have to see the infinite sets (but it's fascinating and maybe easier once you do).

The first bit of crypto math is modular arithmetic. We write things like \( 23 \equiv 2 \pmod{7}\), which is read out loud as "23 is equivalent to 2 mod 7". Many people think of \(x \pmod{p}\) as the remainder when \(x\) is divided by \(p\). That's not totally right but it's OK.

I see working "modulo" something as pretending like that something is \(0\). I might even say to a business partner that "this deal is good modulo some small details". We can really say that \(a \equiv b \pmod{q}\) when \(a - b\) is a multiple of \(q\). So \(123 \equiv 13 \pmod{11}\) because \(123 - 13 = 110 = 11 \cdot 10\). Now I've heard folks describe the remainder of \(x\) mod \(p\) which really does refer to the unique number between \(0\) and \(p-1\) which is equivalent to \(x\).

Let me give you some other ways to see things: we can look at modular arithmetic as arithmetic that ignores adding and subtracting the "modulus". So in arithmetic modulo \(13\) I can say that 53 is equivalent to 53-13 = 40, which is equivalent to 27, which is equivalent to 14, which is equivalent to 1, which is equivalent to -12, which is equivalent to -25, and so on. In fact \(53 \equiv \{ 53 + k \cdot 13 \, | \; \forall k \in \mathbb{Z} \} \) (you can read that as "53 is equivalent to the set of all numbers of the form 53 plus an integer multiple of 13").

There is an infinite sequence of numbers that are equivalent to \(53 \pmod{13}\), but out of that infinite sequence there is exactly one number which is non-negative and smaller than 13. In this case 1. In a math setting (not in all company, for sure) I would call 1 the canonical representative of the equivalence class containing 53 modulo 13. Nutty huh? Well seeing modular arithmetic as working with entire sets of numbers (infinite sets at that) helps explain the notation, the reasoning, and the idea behind the scenes. Another notation that pops up when talking about these infinite sets is the BAR notation, \( \overline{53} = \overline{1} \). In that case we use the BAR to mean the set of all numbers that are equivalent to 53 modulo 13, which is why we don't need the "equivalent" sign; it really is equality then. In practice you forget the sets and jump to the remainder when you divide by \(p\) …
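Here is a tiny sketch of the two viewpoints, equivalence versus the canonical representative, if you want to poke at them in Python (the helper name equivalent is just my own choice):

def equivalent(a, b, q):
    # True exactly when a and b are equivalent mod q, i.e. q divides a - b
    return (a - b) % q == 0

print(equivalent(123, 13, 11))   # True: 123 - 13 = 110 = 11 * 10
print(equivalent(53, -25, 13))   # True: both live in the same class mod 13
print(53 % 13)                   # 1, the canonical representative of that class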

KATA: Become awesome at modular arithmetic

Try it out: convince yourself that \(\overline{x} + \overline{y} = \overline{x+y}\) and \( \overline{x} \cdot \overline{y} = \overline{x \cdot y} \). You might want to practice with actual values, so if you look at the set of all numbers which have remainder 3 modulo 7 and the set of all numbers which have remainder 4 modulo 7, then the sum of one number from each set is sure to land in the set of all numbers which have remainder 7 modulo 7 (which is better written as 0 modulo 7). Likewise if you multiply one number from each set you'll get something with remainder 5 modulo 7. Hence you can write \( 3 + 4 \equiv 0 \pmod{7}\) and \(3 \cdot 4 \equiv 5 \pmod{7}\).

Forget the infinite: now just use the simple rules of remaindering to answer the following, what is: \( 52 - 18 \pmod{17}\), \(1432651 + 873231 \pmod{100} \), and what about \( 923 \cdot 81245 \pmod{10} \)?

Repeat After Me: I'm going to calculate \(3^{17}\) modulo \(11\) out loud using the smallest numbers I can. Then you do \(2^{33}\) modulo \(13\) at your table.

Greatest Common Divisor and the Euclidean Algorithm

We will revisit this algorithm when we get to modular inverses a little later so that will be our eventual goal.

For now we want to find the largest number that perfectly divides two input numbers \(a\) and \(b\).

For instance \( \operatorname{gcd}(56, 42) = 14 \), because \(56 = 14 \cdot 4,\) and \(42 = 14 \cdot 3\). Importantly, 56 is not divisible by 3, so we can't push the common factor any higher than 14.

There is an alternative definition of the GCD, which is the smallest positive number in the set \( \{ s \cdot a + t \cdot b \, | \; \forall s,t \in \mathbb{Z} \} \) (read as "the smallest positive linear combination of \(a\) and \(b\)"). We will find value in knowing \(s\) and \(t\) later.

The one insight needed to pull this off is that if \(a = b \cdot q + r\) then \(\operatorname{gcd}(a, b) = \operatorname{gcd}(b, r) \), since something that divides \(r\) and \(b\) must divide \(a\) (and something which divides \(a\) and \(b\) must divide \(r\)). This seems dry but when you think about it, if you add two things that are divisible by 7 then the sum is divisible by 7 too (and it works the other way too, if 7 divides \(a\) and \(b\) then it has to divide \(r\) ).

Now, here is the world's greatest algorithm for solving this problem, the GCD, generically:

  
def gcd(a, b):
    # Euclid's insight: gcd(a, b) == gcd(b, a mod b), and gcd(a, 0) == a
    if b == 0:
        return a
    else:
        return gcd(b, a % b)

It is elegant, simple, recursive, and also lightning fast (takes about as many iterations as the number of digits).

Implement it: in Python implement this algorithm. Use def gcd(a,b): and then run it with inputs 56, 42 and inputs 8, 13 and inputs 1071, 462.

Count the iterations: use a global variable to show the number of times gcd calls itself (the first line of your program can set up the counter; remember to declare it global inside the function). Grab a couple of Fibonacci numbers from the internet and see that those make for the worst-case performance.
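One way the counting might look, if you want something to compare against (the counter name calls and the particular Fibonacci pair F(30), F(29) are my own choices):

calls = 0

def gcd(a, b):
    global calls          # rebind the module-level counter
    calls += 1
    if b == 0:
        return a
    return gcd(b, a % b)

for a, b in [(56, 42), (1071, 462), (832040, 514229)]:   # last pair is F(30), F(29)
    calls = 0
    print(a, b, "gcd =", gcd(a, b), "calls =", calls)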

A Modular Arithmetic Flag

Overview: This part gives you as much information as I think you'll possibly want from the ideas of Group Theory. It's a bit dense and if you need to skip it over until you have a contextual reason to come back I'll understand. At its core these ideas are the cornerstone of "abstract algebra", which is Math's version of cubism. Everything in here is valuable, but, for this course in particular, we need to get the idea that elements in groups (e.g. all integers) have "orders" (i.e. cycle lengths) which let us use modular arithmetic to speed up big computations. This is going to be exploited to build both cool crypto schemes and cool attacks.

Basic Group Theory

Let's do some quick definitions: a Group is a set \(G\) and a binary operation \(*\) with the following four properties:

Closure: for all \(x, y \in G\) the result \(x * y\) is also in \(G\).
Associativity: \((x * y) * z = x * (y * z)\) for all \(x, y, z \in G\).
Identity: there is an element \(e \in G\) with \(e * x = x * e = x\) for every \(x \in G\).
Inverses: for every \(x \in G\) there is a \(y \in G\) with \(x * y = y * x = e\).

The operation could be addition, multiplication, composition, rotation, or any other action that consumes two things and returns one.

If the group also satisfies the rule that \(x * y = y * x, \forall x,y \in G \) then we call the group Abelian (commutative).

In an additive group we write \(0\) instead of \(e\), in a multiplicative group we write \(1\) instead of \(e\).

For example, the set \(G = \{0, 1, 2\}\) is a group under the operation "addition modulo 3". You can verify this, \(0+0=0, 0+1=1, 0+2=2, 1+0=1, 1+1=2, 1+2=0, 2+0=2, 2+1=0, 2+2=1\). If you look through that you'll see that adding any two things in \(G\) makes another element in \(G\). It has an identity 0, \(x+0 = 0+x = x\) for each \(x\). You'll also see that every element has an "inverse" which sends it back to 0, \(0+0=0, 1+2=0, 2+1=0\). The associative part happens for free because of addition being addition, but you could check just for fun if you want. This example group is often written \( \mathbb{Z}_3 \) (which I read "Z three" or "Z mod 3").
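If you'd rather let the computer do that verification, here is a brute-force sketch of the check (associativity comes for free from ordinary integer addition, so I don't re-check it):

from itertools import product

G = {0, 1, 2}

def op(x, y):
    return (x + y) % 3

assert all(op(x, y) in G for x, y in product(G, G))        # closure
assert all(op(x, 0) == x and op(0, x) == x for x in G)     # identity: 0
assert all(any(op(x, y) == 0 for y in G) for x in G)       # every x has an inverse
print("{0, 1, 2} really is a group under addition mod 3")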

Samples: Take the integers \(\mathbb{Z}\). Are they a group under addition? Are they a group under multiplication? How about \(\mathbb{Q}\), the rational numbers (all fractions of integers)? How about the \(\mathbb{R}\) real numbers?

Modular examples: How about the numbers modulo a prime: are they a group under addition? Are they a group under multiplication? What if we toss out zero?

Modular Examples Part 2: Take the set \(\{0, 1, 2, 3, 4, 5\}\) mod 6. Find all subsets of those numbers which form groups under multiplication modulo 6.

So when it comes to \(\mathbb{Z}_m^*\), the multiplicative group of numbers mod \(m\): it is the set of all numbers co-prime to \(m\). (How many are there?) Sometimes you see this group written \(\mathbb{Z}_m^{\times}\). The set \(\{0, 1, 2\}\) is not a group under multiplication modulo 3 since 1 is the identity element but \(0 \cdot x = 0\), and there can't be any element that would send 0 to 1. (No inverse for 0 under multiplication modulo 3.) BUT the set \(\{1,2\}\) is a group under multiplication modulo 3 (\(2 \cdot 2 = 1\), 2 is its own inverse).
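A little sketch for listing that group for any small modulus, if you want to play (the helper name units is mine):

from math import gcd

def units(m):
    # the elements of {1, ..., m-1} that are co-prime to m
    return [x for x in range(1, m) if gcd(x, m) == 1]

print(units(10))        # [1, 3, 7, 9]
print(len(units(12)))   # 4 (namely 1, 5, 7, 11)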

Orders of Elements

Now let's start to explore what happens when you use a single element to generate more elements by just combining it with itself. As an example, look at the number 4 modulo 9 and start multiplying it by itself; this makes a sequence: \(4, 4 \cdot 4 = 7, 7 \cdot 4 = 1, 1 \cdot 4 = 4, \ldots\). So this set is \(\{1, 4, 7\}\), which I would denote \( \langle 4 \rangle \). It is also a group. Generally, we call this the "cyclic subgroup generated by \(g\)", written \(\langle g \rangle\). Also, any group which can be generated by one element is called a cyclic group.

To put this more specifically: for any positive integer \(k\) let's denote \( g^k := g * g * \cdots * g \), \(k\) times over. BTW \(g^0 := e \). Sometimes in an additive group we write \(k \cdot g\) rather than \(g^k\). Then the cyclic group generated by \(g\) is \( \langle g \rangle = \{g^k \mid \forall k \in \mathbb{N} \}\).

So here is something interesting: every time you have an element \(x \in G\), the set of elements generated by \(x\) forms a subgroup of \(G\). That is, a subset of \(G\) which is itself a group under the same operation as \(G\). So every element of a group generates a cyclic subgroup of that group.
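Here is a sketch that builds \(\langle g \rangle\) inside the multiplicative group mod \(m\), so you can watch an element generate its own little subgroup (the helper name generated is mine):

def generated(g, m):
    # multiply g by itself mod m until the powers repeat; collect what you saw
    powers, x = set(), 1
    while True:
        x = (x * g) % m
        if x in powers:
            return powers
        powers.add(x)

print(generated(4, 9))        # {1, 4, 7}, matching the example above
print(len(generated(4, 9)))   # 3, the order of 4 modulo 9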

There is another important theorem known as Lagrange's Theorem (so important ZZ-Top wrote a song about it) which says the following: For any subgroup \(H \leqslant G\) the size of \(H\) divides the size of \(G\)! That is \(|H| \mid |G|\) (which means "the size of H divides the size of G").

The proof of this is not so bad. Basically a subgroup is closed under the operation in question, so if you have some element NOT in the subgroup you can multiply it (or add it, in an additive group) with every element in the subgroup and get another set, not equal to the subgroup, which has the same size. Repeat the process and you'll partition all of the elements of \(G\) into "cosets" of \(H\). (Cosets are a lot like the equivalence classes from modular arithmetic: think of \(2\mathbb{Z} + 1\) (the odd numbers) as a coset of the evens (a group); add 1 to each even number and you get a different set that is not a group.)

Feel it out: Take the numbers mod 23; there are 22 numbers which form the multiplicative group. Write a script which calculates the size of the subgroup generated by each of 1 through 22, and see that every order divides 22 (they come out to 1, 2, 11, or 22).

We call the size of the subgroup generated by \(x \in G\) the "order" of \(x\).

If we put this all together we get that \(x^{|G|} = 1\) for every \(x \in G\)!!!! (Think out why this is true, answers in the HTML.) Any element generates a subgroup of some size, call it \(k\). That number \(k\) divides the size of \(G\). So the element to the \(|G|\) is really that element to the \(k\) several times over, and every time that power gets to \(k\) we're back at 1. It cycles just right.

Put another way, you just proved Fermat's little theorem: \(a^{p-1} \equiv 1 \pmod{p}\) for any \(a\) not divisible by the prime \(p\).

Play with powers: Compute all numbers which can be written in the form \(2^{x} \mod{31}\), now do the same with \(7^{x} \mod{31}\).

Overview: One of the best super-powers of the math we've done so far is to compute the inverse of a number modulo a prime (or some other number). So in this part we'll introduce what that means and show you two ways to pull it off.

What is an inverse?

For an integer \(a\) and a modulus \(m\) we want to find a number \(n\) (we can call it \(a^{-1}\)) such that \(n \cdot a \equiv 1 \pmod{m}\). Let's start doing it by hand:

Simple inverses: Find the multiplicative inverse of 3 modulo 10. Now find the inverse of 16 modulo 19. In different terms, find an \(x,y\) such that \(3x \equiv 1 \pmod{10}\) and \(16y \equiv 1 \pmod{19}\).

Inverses modulo a prime

We saw in the last section that for every number \(a\) not divisible by a prime \(p\), \(a^{p-1} \equiv 1 \pmod{p}\). So if you think about that, we now have a trick for computing the inverse of \(a\) modulo \(p\), since \(a \cdot a^{p-2} = a^{p-1} \equiv 1\). So when the modulus happens to be a prime we can just compute \(a^{p-2} \pmod{p} \).

In Python the command for that is pow(a, p-2, p).

Let's try that out, but keep an eye on performance. As we grow the size of \(p\) how well does this perform?
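Before the exercises, here is the shape of the computation on a tiny prime, just so you can see the pattern:

p, a = 7, 3
a_inv = pow(a, p - 2, p)   # Fermat: a**(p-2) is the inverse of a mod a prime p
print(a_inv)               # 5
print(a * a_inv % p)       # 1, so 5 really is the inverse of 3 mod 7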

Inverse mod small p: Use \(p = 101\) and \(a = 14\). Use pow(a, p-2, p) to find an inverse and confirm that a * a_inv % p is 1.

Inverse mod large p: Here is a really large prime: p=6768924473842051155435121. Calculate the inverse of a=102721840089015263978980446 modulo \(p\) using pow(a, p-2, p). Confirm that a * a_inv % p is 1. Try the trick for several random values of \(a\).

Speed evaluation: Was there a difference in performance between the two computations?

I'll show you how it works in the next part.

Inverse mod a non-prime

Sub-target: I want you to be able to calculate the inverse of 1103515245 modulo 2**31 (it shows up later in this module so we need to build the skill now).

So in this case it's a bit much to do by hand without some more tricks.

Extending GCD:

As I said in an earlier part, the smallest positive linear combination of \(a\) and \(b\) is the GCD. So whenever two numbers have GCD == 1 that means that there are two integer weights \(s,t\) such that \(s \cdot a + t \cdot b = 1\). This is interesting on its own, but notice what happens when you look at that equation mod \(a\); now squint and look at it mod \(b\).

What this means is that \(s\) is the inverse of \(a\) modulo \(b\) AND \(t\) is the inverse of \(b\) modulo \(a\). So to find the inverse when we have a non-prime modulus we can use the extended GCD to find \(s\) and \(t\).

Enough mathematicians have looked at this to find a straight-line approach ( Wikipedia's Extended Euclidean algorithm page ) so let's look at the secret sauce.

At each iteration we moved from \(a = q_0 \cdot b + r_0\) to \(b = q_1 \cdot r_0 + r_1 \).

The idea is that we can keep track of \(s_i, t_i\) such that \(r_i = s_i \cdot a + t_i \cdot b\).

At each iteration let q[i] = r[i-2] // r[i-1] (integer division, not float) and r[i] = r[i-2] - q[i]*r[i-1] (which is just r[i-2] % r[i-1]), starting from r[0] = a and r[1] = b.

So extend this to include s[i] = s[i-2] - q[i]*s[i-1] and t[i] = t[i-2] - q[i]*t[i-1], starting from s[0] = 1, s[1] = 0, t[0] = 0, t[1] = 1.

Build it: write an extended GCD in Python which returns d, s, t such that d == s*a + t*b.

Use it: Now calculate the inverse of 1103515245 mod 2**31.
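If you want something to check your implementation against, here is one way the extended GCD might look (the variable names mirror the r, s, t recurrences above):

def egcd(a, b):
    # returns (d, s, t) with d == gcd(a, b) == s*a + t*b
    r0, r1 = a, b
    s0, s1 = 1, 0
    t0, t1 = 0, 1
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return r0, s0, t0

m = 2**31
d, s, t = egcd(1103515245, m)
print(d)                         # 1, so an inverse exists
inv = s % m                      # normalize s into the range 0..m-1
print(1103515245 * inv % m)      # 1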

And Now Your CTF Problem:

Overview: We extend the ideas of the previous parts into some classical number theory that also happens to be practical for cryptographers. In this case we will learn to compute Euler Totient Functions and use them to speed up modular inverse computations. Also I'll show you why raising numbers to large powers is not actually expensive when you're working with modular arithmetic.

So here is a beautiful idea which we will see in several formats in the class: for any \(a\) co-prime to \(M\),

$$ a^{\phi(M)} \equiv 1 \mod{M} $$

Where \(\phi(M)\) is the Euler Totient function of \(M\), which counts the positive numbers smaller than \(M\) which are co-prime (GCD(M, x) == 1) with \(M\).

For instance \(\phi(10) = 4\) because 1, 3, 7, 9 are all co-prime with 10.

What does this gain us?

Well when \(M\) is a prime number, \(\phi(p) = p-1\), which makes this equation look like:

$$ a^{p-1} \equiv 1 \mod{p}$$

That is Fermat's little theorem!

Play with this: write a method to calculate \(\phi(n)\) the hard way (take the gcd of \(n\) with each number from 1 to \(n-1\) and count the co-prime ones).

Heuristic check of Euler: select some random integers mod \(n\) that are co-prime to \(n\) and raise them to the power \(\phi(n)\); you should get 1 every time.
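If you want to compare notes, here is one brute-force sketch of both exercises (the helper name phi and the modulus 36 are my own choices):

import random
from math import gcd

def phi(n):
    # the hard way: count the numbers below n that are co-prime to n
    return sum(1 for x in range(1, n) if gcd(x, n) == 1)

print(phi(10))   # 4, matching the example above

n = 36
for _ in range(5):
    a = random.randrange(1, n)
    if gcd(a, n) == 1:
        print(a, pow(a, phi(n), n))   # the second number should always be 1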

Aside: speed powering

In this course we are often playing with the cost of brute force computation. One notion which has always been valuable when doing computational number theory is quick exponentiation modulo some integer.

Here is the idea: multiplication mod M prevents numbers from growing too large. If my goal is to compute \(a^{1028} \mod{101}\) then I can start with \(a\), compute \(a^2, a^4, a^8, a^{16}, a^{32}, a^{64}, a^{128}, \ldots \) and do roughly \(\log_2{1028}\) multiplications, not 1028 of them. This is called repeated squaring.

Python has built-in modular exponentiation, which you invoke with pow(a, exp, mod) .
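Here is a sketch of how the idea generalizes to any exponent, essentially a hand-rolled pow(a, b, m); if you'd rather discover it yourself in the exercises below, skip this block for now.

def modpow(a, b, m):
    # square-and-multiply: walk the bits of b, squaring as we go and
    # multiplying the current square into the result whenever the bit is set
    result, base = 1, a % m
    while b > 0:
        if b & 1:
            result = (result * base) % m
        base = (base * base) % m
        b >>= 1
    return result

print(modpow(5, 2048, 709), pow(5, 2048, 709))   # both print 396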

Repeated squaring: calculate \(5^{2048} \mod{709}\) by repeated squaring (answer is 396). (Do this by hand.)

Micro Adjusts: now calculate \(5^{2049} \mod{709}\) and \(5^{2080} \mod{709}\) using the same notion. That is, think about how these other numbers differ from the perfect power of 2. Can you generalize the idea into a clever algorithm for writing your own pow(a, b, m)?

Main goal:

So an alternate way to find the inverse of a number modulo another (assuming, as always, that the two are co-prime) is to use Euler's theorem in the following way:

$$ a^{\phi(n) - 1} \equiv a^{-1} \mod{n}$$

General Euler Totient

So to take full advantage of this trick (and see how it helps when we get to RSA and such) you need to see the general version:

If you know the prime factorization of your modulus (a big if) then computing the Euler totient works like this:

$$ \phi(p_1^{k_1} \cdots p_n^{k_n}) = \phi(p_1^{k_1}) \cdots \phi(p_n^{k_n})$$

Where \(\phi(p_i^{k_i}) = p_i^{k_i} - p_i^{k_i-1} = p_i^{k_i-1}\cdot (p_i - 1) \).

For instance \(\phi(3 \cdot 5 \cdot 2^3) = \phi(3) \cdot \phi(5) \cdot \phi(2^3) = 2 \cdot 4 \cdot 2^2 = 32\).

Compute Phi: what is \(\phi(2^{31})\)?

Put it together: Now what is the inverse of 1103515245 mod 2**31 using this trick?

Self-guided: Now try this trick with some small moduli and small integers a until you feel confident that you've got it.
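For reference, here is one such tiny case worked end to end (n = 10, a = 3):

n, a = 10, 3
phi_n = 4                   # 1, 3, 7, 9 are the numbers co-prime to 10
a_inv = pow(a, phi_n - 1, n)
print(a_inv)                # 7
print(a * a_inv % n)        # 1, confirming 7 is the inverse of 3 mod 10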

A Phi Based Flag:

Overview: While this isn't a math course and I'm not expecting any number theory background, there are many valuable lessons that a practitioner of crypto needs to know. This page is dedicated to giving you my wish list of math skills that help with crypto. Also some helpful advice.

First of all, minimize the algebra in your encryption schemes. We will see analytic attacks that should make you feel nervous about leaning on number theory problems without a proof of security. If for no other reason, there are patterns in algebra that we have yet to exploit, so we should proceed with caution.

Second, the choice of modulus really matters. When it comes to RSA and Diffie-Hellman (in Module 5), if you choose your primes carelessly you will pay the price.

Third, algebra is powerful. Public key crypto is a revolution, and it only works by having the very structure we fear to share with attackers.

Ok, so with that in mind I want to make sure you've got these number theory facts under your belt. We roughly cover them in these notes but this list is for you to reference and google and add to your vocabulary.

Contemplate and eventually know the following things:

An Abstract Algebra Flag: