RSA-ish system with a custom exponentiation function.

The first thing to notice is that the prime generation is obviously very weak:

r = getRandomNBitInteger(nbit >> 1)
s = getRandomNBitInteger(nbit >> 3)
p, q = r**5 + s, s + r
if isPrime(p) and isPrime(q):

nbit is originally 512, so r becomes 256-bit and s only 64-bit. But, uh, the actual numbers in the given output.txt did not match this: they would have to come from a 128-bit r and a 64-bit s. Eh, OK, so the output wasn't generated by the script as given, nice.
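For reference, here is a sketch of what the generation apparently was. The 128/64 bit sizes are inferred from output.txt (nbit >> 2 instead of nbit >> 1), and I'm substituting a stdlib Miller-Rabin for Crypto.Util.number, so treat the details as assumptions:

```python
import random

def is_prime(n, rounds=40):
    # probabilistic Miller-Rabin primality test
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def gen(nbit=512):
    while True:
        # sizes matching the observed output: 128-bit r, 64-bit s
        r = random.getrandbits(nbit >> 2) | (1 << ((nbit >> 2) - 1))
        s = random.getrandbits(nbit >> 3) | (1 << ((nbit >> 3) - 1))
        p, q = r**5 + s, r + s
        if is_prime(p) and is_prime(q):
            return p, q, r, s

# p, q, r, s = gen()   # slow-ish: two simultaneous prime hits are rare
```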

But anyway, assuming the generation was still the same, it’s easy to factor.

For this you could grab any stock Coppersmith implementation off the shelf, but where's the fun in that? The numbers are actually weak enough that you can factor n practically by hand, which is way more fun! We don't even need to brute-force anything.

Pretend s = 0 and get the first approximation of r. This is really an upper bound of r, since n = (r^5 + s)(r + s) > r^6. We could technically pretend s ≈ 2^63 instead, but it's not necessary. Let's just keep it simple.

ri = iroot(n,6)[0]

Observe that this is also an approximation of q = r + s (here a lower bound, since ri ≈ r + s/6).

Observe what decreasing ri (i.e. approaching the true r) does to our approximate q = n // ri^5:

sage: print( [n//(ri-k)**5 - n//(ri-k+1)**5 for k in range(10)] )
[5, 5, 5, 5, 5, 5, 5, 5, 5, 5]

What does it all mean, Clarice? What does it mean!?
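(The same effect can be reproduced without Sage on a synthetic n of the right shape; the r and s below are made up for the demo. The step is 5 because the derivative of n/x^5 is -5n/x^6 ≈ -5 when x ≈ n^(1/6).)

```python
# made-up instance with the observed shapes: 128-bit r, 64-bit s
r = (1 << 127) | 0xdeadbeefcafebabe1337
s = (1 << 63) | 0x123456789abcdef
n = (r**5 + s) * (r + s)

def iroot6(n):
    # floor of the integer sixth root, by binary search
    lo, hi = 0, 1 << (n.bit_length() // 6 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid**6 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

ri = iroot6(n)
diffs = [n // (ri - k)**5 - n // (ri - k + 1)**5 for k in range(10)]
print(diffs)  # [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
```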

Observe that n // ri^5 = ri + 2 here. I.e. the fraction n / ri^5 is really close to this integer.

Observe what happens when we ignore the least significant terms of n = r^6 + s·r^5 + s·r + s^2:

n // r^5 ≈ r + s = q

I said observe!!

Phew! The error y = ri - r in our ri is actually related to the hidden s by s = 6y + 2: putting the observations together, ri + 2 = n // ri^5 ≈ (r + s) - 5y (we lose 5 per step walking from r up to ri), and ri = r + y, so s - 6y = 2. So let's get real snug and tight to it.

sage: y = polygen(ZZ)
sage: max(((ri-y)^5 * ((ri-y) + 2 + 6*y) - n).roots(RealField(600), False))

(The 2 here is from the observation earlier, where the integer bound happens: n // ri^5 = ri + 2, i.e. s ≡ 2 (mod 6).)

So trying the rounded root y directly leads to factoring, for example: q = (ri - y) + 6y + 2 = ri + 5y + 2, which indeed divides n.

Who needs stupid Sage scripts.
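(If you do want a script after all, here's a self-contained pure-Python sketch of the whole method on a synthetic instance; the constants are made up. Instead of Sage's root finder it binary-searches the root, which works because f(y) = (ri - y)^5·(ri + 5y + c) - n is strictly decreasing for y ≥ 0, and it scans a small window of candidates plus a few values of the additive constant c to absorb floor/rounding noise.)

```python
from math import isqrt

def ikroot(n, k):
    # floor of the integer k-th root, by binary search
    lo, hi = 0, 1 << (n.bit_length() // k + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid**k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def try_split(n, r):
    # given a candidate r, n = s^2 + (r^5 + r)*s + r^6: solve the quadratic for s
    b = r**5 + r
    disc = b * b - 4 * (r**6 - n)
    if disc >= 0:
        sq = isqrt(disc)
        if sq * sq == disc and (sq - b) % 2 == 0:
            s = (sq - b) // 2
            if s > 0 and (r**5 + s) * (r + s) == n:
                return s
    return None

def factor(n):
    ri = ikroot(n, 6)                      # ri = r + y with y ~ s/6
    for c in range(12):                    # window around the observed "+2" constant
        # f(y) = (ri - y)^5 * (ri + 5y + c) - n is strictly decreasing for y >= 0
        f = lambda y: (ri - y)**5 * (ri + 5 * y + c) - n
        if f(0) < 0:
            continue
        lo, hi = 0, 1 << 66
        while lo < hi:                     # largest y with f(y) >= 0
            mid = (lo + hi + 1) // 2
            if f(mid) >= 0:
                lo = mid
            else:
                hi = mid - 1
        for y in range(max(lo - 2, 0), lo + 3):
            s = try_split(n, ri - y)
            if s is not None:
                return ri - y, s           # r, s with n = (r^5 + s) * (r + s)
    return None

# made-up demo instance with the right shapes (128-bit r, 64-bit s)
r = (1 << 127) | 0xdeadbeefcafebabe1337
s = (1 << 63) | 0x123456789abcdef
n = (r**5 + s) * (r + s)
assert factor(n) == (r, s)
```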

Anyway, the task, yeah. Factoring was the easy job (I just made it unnecessarily difficult), and factoring doesn't solve the task yet¹, because the exponentiation operation performed looked like this:

def lag(k, a, n):
  s, t = 2, a
  if k == 0:
    return 2
  r = 0
  while k % 2 == 0:
    r += 1
    k //= 2
  B = bin(k)[2:]
  for b in B:
    if b == '0':
      t = (s * t - a) % n
      s = (s ** 2 - 2) % n
    else:
      s = (s * t - a) % n
      t = (t ** 2 - 2) % n
  for _ in range(r):
    s = (s ** 2 - 2) % n
  return s

And then the encryption was something like:

r = getRandomRange(2, n)
enc = (lag(r, a, n), (lag(r, y, n) + lag(e, FLAG, n)) % n)

Where a, e, and y are known.

Uh huh. I’m embarrassed to say it took me a good two or three hours of playing with this operation to figure out that lag() calculated Lucas sequences of the second kind, V_k(a, 1). The s, t = 2, a initialization (V_0 = 2, V_1 = a) should have tipped me off, but instead I found out by reading about Chebyshev polynomials when I finally cowered before OEIS. Really need to get better at this kind of math rev.
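A quick sanity check that lag() really computes V_k(a, 1) mod n, against the naive recurrence V_0 = 2, V_1 = a, V_k = a·V_{k-1} - V_{k-2}:

```python
def lag(k, a, n):
    # the challenge's ladder: walks bits of k keeping (V_m, V_{m+1})
    s, t = 2, a
    if k == 0:
        return 2
    r = 0
    while k % 2 == 0:
        r += 1
        k //= 2
    for b in bin(k)[2:]:
        if b == '0':
            t = (s * t - a) % n   # V_{2m+1} = V_m * V_{m+1} - a
            s = (s * s - 2) % n   # V_{2m}   = V_m^2 - 2
        else:
            s = (s * t - a) % n
            t = (t * t - 2) % n
    for _ in range(r):            # undo the stripped trailing zero bits
        s = (s * s - 2) % n
    return s

def lucas_v_naive(k, a, n):
    # V_0 = 2, V_1 = a, V_k = a*V_{k-1} - V_{k-2}  (mod n)
    v0, v1 = 2, a % n
    for _ in range(k):
        v0, v1 = v1, (a * v1 - v0) % n
    return v0

n, a = 2**61 - 1, 123456789
assert all(lag(k, a, n) == lucas_v_naive(k, a, n) for k in range(200))
```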

However, the most embarrassing thing was that the task itself calculated “some” d as the inverse of e modulo (p^2 - 1)(q^2 - 1), and derived y from it… Which I somehow blocked from my mind. At one point I even calculated this “mysterious” d myself, just to derive my own y to double-check the given y. It wasn’t until after I’d gone to the trouble of identifying the sequence that I tried plugging this d into lag() and found that it did indeed work just like a regular RSA inverse. Jesus Christ. I don’t even.
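To see why it works, a tiny demo with a made-up small prime and exponent: V_d(V_e(m)) ≡ m (mod p) whenever e·d ≡ 1 (mod p^2 - 1), because V_k(m) = α^k + α^(-k) for some α in GF(p^2) with α + α^(-1) = m, and the order of α divides p^2 - 1:

```python
from math import gcd

def lag(k, a, n):
    # Lucas sequence V_k(a, 1) mod n, as in the challenge
    s, t = 2, a
    if k == 0:
        return 2
    r = 0
    while k % 2 == 0:
        r += 1
        k //= 2
    for b in bin(k)[2:]:
        if b == '0':
            t = (s * t - a) % n
            s = (s * s - 2) % n
        else:
            s = (s * t - a) % n
            t = (t * t - 2) % n
    for _ in range(r):
        s = (s * s - 2) % n
    return s

p = 1000003                    # made-up small prime
e = 65537
assert gcd(e, p * p - 1) == 1
d = pow(e, -1, p * p - 1)      # "some" d, just like a regular RSA inverse

for m in range(2, 100):
    assert lag(d, lag(e, m, p), p) == m
```

(For the composite modulus n one would take the inverse modulo (p^2 - 1)(q^2 - 1), which is a multiple of the relevant orders on both sides.)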

Pretty cool overall, even if yet again the “cool” operation turns out not to really matter, but I learned some new tricks about the Lucas sequences at least.

  1. Well, it actually does, but I didn’t know that.