Control of the internet


As winamp says, the number of pairs is huge – about 2^n, or roughly 10^(0.3n), for an n-bit key … the number for a 256-bit key is bigger than the number of atoms in the observable universe, and we use much bigger keys than that. So you can’t store and index all the keys, and the derivation of the private key from the public key is based on an algorithm that is computationally infeasible to reverse (so far :ninja: ).
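A quick sanity check of that rule of thumb in Python – all figures here are rough orders of magnitude:

```python
# An n-bit keyspace has 2**n values; since log10(2) is about 0.301,
# that's roughly 10**(0.3 * n).
n = 256
keyspace = 2**n

print(len(str(keyspace)))   # 78 decimal digits, i.e. keyspace ~ 10**77
print(keyspace > 10**77)    # True
```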


And 5 years ago I would’ve thought this impossible. Now probably doable within 10 years.
Not so many 1024 bit keys afaik.


I don’t think you’re comprehending the number. If every atom in the observable universe somehow contained another universe of the same size, and every atom in each of those universes contained yet another universe of the same size, and you could store a key in every atom of every one of those … you’d still be a factor of a trillion trillion trillion trillion trillion short.

Computationally, if you could derive a key in one machine instruction, and every atom in all those universes was a supercomputer equal to our entire world’s computing power, and they’d been calculating by brute force since the universe began, you’d still be a factor of a trillion trillion short.
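For what it’s worth, the arithmetic holds up at that scale. A rough version of the storage thought-experiment, assuming 10^80 atoms per universe and a 1024-bit key:

```python
from math import log10

ATOMS = 10**80            # assumed atoms in one observable universe
keyspace = 2**1024        # ~10**308 possible values for a 1024-bit key

# A universe where every atom holds a universe, and every atom of those
# holds yet another: three nested levels of 10**80 atoms each.
storage_slots = ATOMS**3  # 10**240 places to store a key

shortfall = keyspace // storage_slots
print(round(log10(shortfall)))   # ~68: still short by a factor of ~10**68
```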

#154 … cant-hack/


:laughing: No I wasn’t comprehending. Less commenting after 8DD I think.


Feds Asked Yahoo For Data 12,444 Times In First Half Of Year … lf-of-year


Fair enough :slight_smile:

What about 128-bit keys, though? Is that feasible?

Or perhaps they’ve found a way to reverse-engineer the algorithm – not to get an exact number, but to get a range, so you drastically reduce the number of keys you have to generate to crack a specific pair? Say down to a few billion or so possibilities – that’s only a few seconds’ work. Given that public key encryption relies on the product of two primes, that reduces the number of operations you have to do. You can probably make some assumptions about the size of the primes too, further limiting the range – this post appears to offer a brute force method ( … ng-to.html ) that sufficient scale would cover?

If that was the case, once you’ve cracked it, you have it stored and can read the messages at will and the sender/receiver are no wiser. So the ‘trick’ would appear to be to reduce the possible combinations you need to try to crack a particular public key.

Mind you, this is well beyond my grade-C A-level maths :smiley:
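As a toy illustration of the “reduced range” idea – the numbers here are tiny and made up purely for demonstration; real RSA primes are hundreds of digits long:

```python
def factor_in_range(n, lo, hi):
    """Trial-divide n by odd candidates in [lo, hi); return (p, q) or None."""
    for p in range(lo | 1, hi, 2):  # lo | 1 makes the start odd
        if n % p == 0:
            return p, n // p
    return None

# Toy 'public key': a product of two small primes.
n = 10007 * 10037
print(factor_in_range(n, 10000, 10100))  # (10007, 10037)
```

With the search narrowed to a window of a hundred candidates this is instant; without the narrowing you would be trial-dividing across the whole range, which is exactly what becomes infeasible at real key sizes.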


There are several well-known optimizations to help you factorize such numbers more quickly. There are also ASICs available to speed it up considerably. However, last time I checked, nothing comes close to allowing you to do it in any reasonable time for commonly-used public key sizes, let alone real-time decryption. (To digress for a second, public key crypto is only used to authenticate TLS web connections, not to encrypt them – you don’t need public key operations for the bulk encryption, since far more elegant and efficient symmetric solutions exist.)

We should assume that the NSA (famously the world’s largest employer of mathematicians) has some non-public optimizations. The question is, do they improve things by 10% (a massively impressive feat for a single organisation, but not enough to get you to real-time decryption) or 90%? Who knows.

To go back to the implementation hacks I mentioned, my money would be on the NSA putting significant resources into this kind of thing. If you can slightly weaken an implementation – for example, if you can get the server to poorly choose some of its initial vectors – you might be able to make the key crackable in real-time with the NSA’s resources. This is kind of what happened with the OpenBSD hack – IIRC the code change caused one of the initialisation vectors to be transmitted in the clear along with the setup messages, which made the connection crackable.

But again, it would be far easier to just ask Verisign for a copy of the keys…


Yes, but how many prime factors are we talking about? Is that a storable number of keys? The system I work on does 75,000 transactions a second, each transaction running about 2 million instructions and doing about 100 I/Os – so reasonably heavy-duty transactions.

Precisely - a logical weakness (perhaps the idea that prime numbers are safer because they are rare?) is also a likelier hole than a mathematical one. The high-end mathematicians I was in college with were deeply into their philosophy…

Yep, I found an article earlier that reckoned a number of key-generation methods were not nearly random enough, with some of them easily crackable. If you know what ‘thing’ was used to build the key pair, you might get an advantage in working out the parameters it works under.

I’m kinda guessing that most of the stuff they really want to crack is not Verisigned…


That’s a nice toy :smiley:

So you’re not even in the ballpark.

You talk about factoring a 768-bit number – in 2009 it took a team of researchers 2 years to factor one such number. And it gets exponentially harder as you add bits.

No, that’s not the way it works. Prime numbers are not safer because they are rare – they’re used because a product of two primes has a unique non-trivial factorisation, which is not the case for products involving non-primes. So it just wouldn’t work with non-prime numbers.

They aren’t even that “rare”, really, depending on how you look at it. At first glance they intuitively seem to be rare compared to non-prime (composite) numbers. But actually it is correct to say that there are as many prime numbers as non-prime numbers.

It’s not that hard to prove: there are infinitely many of each, and both sets are countable, so they are the same size – called Aleph-zero (assuming we’re talking about the natural numbers 0, 1, 2, 3, …).

Anyway, you’ll quickly get yourself tied up into knots throwing about words like “rare” when comparing two sets of infinitely many numbers.

Edit: removed reference to rationals as they’re also countable


But talking about reductions of 90% in a 1024-bit key encryption - are 10% of the factors in a 1024-bit number prime? My limited brain would say, probably not. So how long would it take to calculate all the primes (a one-time job) and how many of them are there? Enough that a billion transactions a second could quickly crack?

PS thanks for the explanation of why primes are used, I’d forgotten that, but have remembered that I used to know it, so while stupid, I am not senile :smiley:


But there will only be two prime factors in the 1024-bit number used as a crypto key, by definition.

If you mean, why not keep a list of all primes <= 1024 bits, see ps’s explanation above.


About 1 in 700 numbers of 1024-bit size is prime. … -1024-bits
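That “1 in 700” figure drops straight out of the Prime Number Theorem, which says the density of primes near x is about 1/ln(x) – easy to check:

```python
from math import log

# Prime Number Theorem: density of primes near x is ~ 1/ln(x).
# For 1024-bit numbers, x ~ 2**1024, so ln(x) = 1024 * ln(2).
density = 1 / (1024 * log(2))
print(round(1 / density))   # ~710, i.e. roughly "1 in 700"
```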

If you have a public key, you know that it is one factor of a 1024 bit number, so you can work out that the other number must be between a certain range, no? If you are looking for the factors, you are not looking for all prime numbers in a 1024 bit number, but more likely all the prime numbers in two 512 bit numbers (roughly speaking? I’m making the assumption that both prime numbers in a 1024 bit public key are roughly the same length, that they are both ‘large’).

Another vast gap in my knowledge - how are the prime numbers calculated? Aren’t the large primes already known and public before they are used in keys? I’m presuming that anything that I can generate on my PC in the minutes it takes to set up, for example, a PGP key is worthless?

Stepping back a bit from 1024 bit, how does 128 bit look?

To me, it looks like brute force is going to manage to crack 1024 - … 4-bit-rsa/


PS this is also an interesting and well written introduction…


On the lighter side…


Excuse the klunky iPad typing here. The public key is the composite number, not one of the primes. The task of cracking it boils down to finding its unique pair of prime factors.

There are a few really interesting properties of primes. First, the density of primes is predictable but their distribution is not. So, for instance, we know that there will be more primes between 2 and 100 than between 1002 and 1100 – the former range has a higher density of primes. We have a formula to calculate it. But that in no way helps us find the values of specific primes.

Btw, you said 1/700 of 1024-bit numbers were prime. The number of 1024-bit numbers is ten to the power of three hundred and something (see previous post), so the number of primes in that range is ten to the power of three hundred and something minus two point something. (Assume you know how this works with logs – saves me typing.) IOW, the stuff about all those atoms and universes and gargantuan numbers is barely affected at all by the density of primes, compared to if every number were prime.

And as an example of how the density doesn’t help with the distribution, consider the prime doubles (twin primes). These are pairs of primes separated by an interval of two, e.g. 3 and 5, 5 and 7, 11 and 13, 17 and 19, 29 and 31, etc. As you go to higher numbers, there are fewer prime doubles, just as there are fewer primes in general as the density falls. But you never seem to run out of them – it’s conjectured (though still unproven) that there are infinitely many. So up in the 1024-bit numbers, even though you know that only 1 in 700 numbers is prime on average, you never know that the next one isn’t just an interval of two away.
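The “density is predictable, distribution is not” point is easy to see at toy scale by sieving and counting primes in the two ranges mentioned:

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_upto(1100)
print(len([p for p in ps if p <= 100]))            # 25 primes in [2, 100]
print(len([p for p in ps if 1002 <= p <= 1100]))   # 16 primes in [1002, 1100]
```

The sieve tells us the counts, but nothing about *where* in each range the primes fall – that is the unpredictable part.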

Another amazing thing about number theory is that there are tests of primality that do not involve finding prime factors. And these are computationally tractable algorithms. So I can (relatively) quickly tell you that a number is prime or composite, even though I cannot quickly tell you its prime factors if composite. The implication of this for cryptography is that the key generation process is straightforward, even though key cracking is fiendishly difficult.
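The best-known such test is Miller–Rabin, a probabilistic primality test. A minimal sketch in Python (real crypto libraries use hardened variants of this, but the idea is the same):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: decides prime/composite without ever finding a factor."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick small-prime screen
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # composite for certain
    return True                      # prime with overwhelming probability

print(is_probable_prime(2**127 - 1))   # True: a Mersenne prime
print(is_probable_prime(2**128 + 1))   # False: composite, yet no factor found
```

Note that when the test says “composite”, it has proved it without exhibiting a factor – which is exactly why key generation is cheap while key cracking is not.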


Thank you – I’m heading for the breakthrough being a weakness (a predictability) in how the primes are generated, rather than a breakthrough in factoring. Despite a lot of supposition, the reading I did over the weekend of those who do maths for a living didn’t credit a factoring breakthrough.

Of course, that could be a technician’s blind spot :slight_smile:


But you see, it doesn’t matter how the primes are calculated. As long as they are, in fact, prime, and are sufficiently large, you’re good to go. You could look them up in a book and that would be fine (obviously assuming you don’t re-use the same pair all the time). In practice it’s not that hard to find a couple of big primes computationally, as PS says.
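A toy version of that “find a couple of big primes” step – pick random odd candidates and test until one is prime. (Trial division is only viable at this toy size; real implementations test ~512-bit candidates with Miller–Rabin instead.)

```python
import random

def is_prime(n):
    """Trial division -- fine at toy sizes, hopeless for real 512-bit primes."""
    if n < 2:
        return False
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            return False
    return True

def random_prime(bits):
    """Keep drawing random odd candidates of the right size until one is prime."""
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(cand):
            return cand

p, q = random_prime(32), random_prime(32)
print(p, q, p * q)   # two toy primes and their product, a toy 'modulus'
```

Since primes of this size turn up about every few dozen odd numbers, the loop terminates almost immediately – the same density argument scales up to real key sizes.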

Primes are one of the more magical elements of mathematics that are reasonably accessible without a whole lot of background, but the magic can be pretty counterintuitive. PS’s explanation is a good start.


It’s mad Ted. Obama learns what the NSA is doing from Edward Snowden and then goes and asks the NSA about it.


It does matter, though: the primes have to be random and the resulting public key has to be unique. If the prime generation is predictable for a given piece of software, that is a huge weakness, and an exploitable one.

One thing I note is that the NSA don’t recommend RSA for US government communications… which rather suggests they think it insecure…
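One concrete way predictable prime generation bites: if a weak generator ever reuses a prime across two different keys, a single gcd of the two public moduli recovers it instantly – no factoring needed. A toy demonstration with tiny made-up primes:

```python
from math import gcd

# Two toy moduli from a 'bad' generator that reused the prime p across keys.
p, q1, q2 = 10007, 10037, 10061
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)   # Euclid's algorithm: instant, even at real key sizes
print(shared, n1 // shared, n2 // shared)   # 10007 10037 10061
```

Both private keys fall out immediately, which is why randomness in prime generation matters far more than secrecy about the generation method.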