BalsnCTF 2020

General Comments {#balsn2020}

Another great CTF!

But yet again I have some Debbie Downer comments.

Disclaimer: this is clearly a top-tier CTF, close to ~100 points, so my main criticisms below are more subjective and personal than objective. I will be playing the part of Wolfgang, so I might seem a lot more negative than what is deserved.

The difficulty of certain tasks seemed to come from them simply being the composition of several sub-tasks that in some cases weren't even related. I am not a fan of that at all. It's exhausting and it feels like a really cheap way to increase difficulty. Happy Farm is the obvious example1. Instead of having tasks T1, T2, T3 that are all fairly independent problems, they're composed or chained in some way to make a much more work-intensive task. This joined task can even appear more difficult than the most difficult in the set2, yet it won't feel that rewarding.

That's just a strong personal preference, perhaps other people think it's cool and the bee's knees. I preferred N1CTF2020 for example, which had a similar difficulty level, but where none of the tasks seemed "artificially" difficult by composition or layering.

import random
random.seed(int(input()))

assert \
b"""As an extreme example, what if every category was just a single chained
monster-task? No flag until you do them all. I personally am /much/ more
comfortable on the other end of this spectrum, where it's a series of smaller
bite-sized puzzles. Composing small but hard-ish puzzles isn't /that/ difficult.
Here's one, for example.""" == random.getrandbits(8*328).to_bytes(328, 'big')

Another consequence is that one ends up with fewer tasks in total, so there's less to choose from and less freedom in picking what order to do things in.

A lot of the frustration behind the above rant comes from all the time I wasted on Patience I, which I only got a partial solution to, and the exhaustion that set in from solving the three-in-one task Happy Farm. As it was, I felt really burned out and didn't do anything at all productive on Sunday, except to assist poiko with one of his solves. Well. I also discovered several mice in my apartment this weekend, which was…not fun.

Anyway, aeshash and IEAIE were both really cool, and I look forward to actually doing them outside of the CTF.

We ended up placing 12th, so we just missed the mark for the coveted 13th position.

Happy Farm

A layered crypto task. It's simply the concatenation of three totally different crypto tasks. Unlike some other such problems (I seem to remember some chained RSA problem from either a Dragon CTF or a DefCon qualifier?) the systems here aren't even related. In my opinion it really should have been given as three separate tasks, even if they were capped at a third of the points.

But as it were, this took too much time and exhausted most of my energy/hype.

Also, the task had the added fun flavour of the code being about seeds and fertilizers to grow onions, drawing numbers as ASCII-art bulbs and so forth. It stopped being fun after the first subtask.

Task 1: AES CBC

After spending some time reading the code (which is a hot mess) this task becomes pretty simple. All you need to know is how CBC mode works with AES.

You have an oracle you can query twice. You give it (n, iv, data) with n=0..8999 and it gives back the n-fold CBC encryption of data under an unknown key k. That is: AES is initialized in CBC mode with the unknown k and the given iv, and then data is iteratively encrypted n times. The goal is to find the 9000-fold encryption of a known sdata under a known siv. You may not give it sdata to encrypt directly.

So first you give it something like (8999, siv[0]^sdata[0] || siv[1:], b'0' || sdata[1:]). The xor is just to make the data not match sdata, but it will be undone by the CBC mode. Then you give it (1, response[-16:], response) (last block is the IV for next block) and the result is the answer.
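
As a sanity check, here is roughly how the trick plays out against a local model of the oracle (a sketch only, using pycryptodome; the names key, iterate_cbc and delta are mine, and I'm assuming the oracle carries the CBC chaining state across its n passes, which is what the IV trick relies on):

from Crypto.Cipher import AES   # pycryptodome
import os

key = os.urandom(16)
siv, sdata = os.urandom(16), os.urandom(64)   # stand-ins for the task's known siv, sdata

def iterate_cbc(iv, data, n):
    aes = AES.new(key, AES.MODE_CBC, iv)      # CBC state chains across encrypt() calls
    for _ in range(n):
        data = aes.encrypt(data)
    return data

# Tweak the first byte of the plaintext and compensate in the IV: CBC xors them
# together, so the first pass (and hence every later pass) is unchanged.
delta = 0x42
iv2 = bytes([siv[0] ^ delta]) + siv[1:]
data2 = bytes([sdata[0] ^ delta]) + sdata[1:]
r = iterate_cbc(iv2, data2, 8999)             # first query: 8999 iterations, data != sdata
# second query: one more pass, with the last ciphertext block as the IV
assert iterate_cbc(r[-16:], r, 1) == iterate_cbc(siv, sdata, 9000)  # locally we can check directly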

Task 2: RSA and Euler hacks

The server sets up a modulus n, a secret cube-root exponent d (i.e. RSA with e=3), and a number s. n is composed of two 512-bit primes. Note that the modulus n is initially unknown. Note also that s is "sort-of" like 2^3069 = (2^1023)^3.

The server gives you s and asks for an integer j. It then gives you a number r, which is "sort-of" like taking the cube root of s j times (mod n).

Finally you can feed it a full pair (another j plus a base number) and it will calculate the same kind of iterated cube root of that base, but with some of the low-order bits hidden. This whole thing was a bit convoluted. There's ASCII drawings of onions that you use to figure out where the missing bits are and blah blah. You basically get the true value minus some "small" unknown number on the order of 2^332-ish (if I recall correctly). And a final caveat: the j you give in this step has to be less than or equal to the j you gave initially.

The goal is to recover that final, partially-hidden number exactly.

First, to do anything, we need to find n, which we can do with what I call Euler hacks. We know that 2^3069 - s contains n as a factor. This number is too big to try to factor out directly, but we have another number that also has n as a factor. Because r is "sort-of" like iteratively taking the cube root of 2^3069 (which was itself a cube, because 3069 = 3·1023), if we repeatedly cube it back up and subtract the known remainder we'll get another number with n as a factor, for example r^(3^8997) - 2^341 (I gave j=8999). This last number can't be calculated directly because the exponent is huge, but we can do it mod 2^3069 - s, which will preserve the factor of n. I suspect the caveat mentioned above (the second j has to be at most the first) is simply there to force you to give a large initial j, to make this step a little bit harder? Finally we take the gcd of these numbers and get n (possibly multiplied by some small factor).

from math import gcd

# both quantities below are ≡ 0 (mod n), so their gcd recovers n (up to a small factor)
nk = 2**(3*3*11*31) - s
nj = pow(r, 3**8997, nk) - 2**(11*31) # I gave j=8999
n = gcd(nk, nj)

# The exponents are factored because I thought maybe the fact that 1023
# contained a factor of 3 would be relevant, but it wasn't.

Anyway, with n known this reduces to a lattice/Coppersmith-type problem. In the second step I had given j=8999 and got r. In the last step I gave the pair-query the same j but now with r as the base number (i.e. an iterated cube root), which gives me back c as an "approximation" of the target number: a value whose cube b is known mod n, but with the bits under the mask below hidden.

The mask for the unknown bits in this approximation is

mask = 0xf00000000ffff0000ff00ffff00ffff00fffffffffffff000ffffffffffffffff00fffffffffffff0ff

Because the hidden part is small, I can use the equation (c + x)^3 ≡ b (mod n) and Sage's .small_roots(). I didn't really think about doing anything else, I just wanted it to end. However, the missing bits are not contiguous: if treated as a contiguous block then it's maybe a little bit over what a standard out-of-the-box Howgrave-Coppersmith implementation can handle. I wanted to avoid introducing more variables for the bit pattern and I also didn't want to sit there and have to fiddle with magic parameters hoping to get lucky. But "brute forcing" the first hex character seemed to be enough. E.g. in sage:

print("small rootsin': ")
for i in range(16):
  print(i, end=' ', flush=True)
  rewt = ((c + i*2^328 + x)^3 - b).small_roots(X=2**300, epsilon=0.05)
  if rewt:
    print('thank god... You may now have a snack')
    break

So c + i*2^328 + rewt[0] is the solution.

Task 3: Borked RC4

Thankfully, this last layer was very easy. By now I was really sick of this whole task.

Long story short, you can query an oracle four times, giving it a k each time (k has the usual constraints). It then does something like this:

def blah(secret, k):
  L,R = secret[:len(secret)//2], secret[len(secret)//2:]
  for _ in range(k):
    L,R = R, bytes_xor(L, self.rc4_encrypt(R))
  return L+R

And gives you the result. The goal is to find the result of blah(secret, 9000**3).

rc4_encrypt is supposed to be a standard RC4 implementation initialized with an unknown key. (I.e. it generates len(plaintext) bytes of stream cipher output and xors it into the plaintext.) But if you're lucky like me, you notice this little nugget right away, before you've even parsed what the task is about:

def swap(self, a, b):
  a, b = b, a

I think I actually laughed when I saw this innocent little method. It's trying so hard not to be noticed. Aww, it's so cute.

And but so. The RC4 implementation never really swaps elements around. Which means it will have a really short order. Let's look at it:

i = (i + 1) % 256
j = (j + s[i]) % 256
self.swap(s[i], s[j]) # lol!
output.append(s[(s[i] + s[j]) % 256])

Every 256 iterations, i will be back to 0 and j will have increased by sum(range(256)) % 256 == 128. Meaning that every 512 iterations we'll have (i, j) == (0, 0) again, with s untouched, so the keystream repeats with period 512.

OK. So. It's trivial to find secret: just give it k=0 and it won't encrypt it. I think secret was 240 bytes long, so to find the order of blah we just need to align things… lcm with the order… hm, divide by 2… blah blah boring details. Long story short, blah(secret, 9000**3) == secret. It's just what came out of the first oracle query. Almost a bit of an anti-climax, really.
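
Out of curiosity, here's the kind of offline sanity check that backs this up (a sketch, not the task's code: I'm assuming the secret is 240 bytes, that the internal permutation is left as the identity, and that the stream position carries over between rounds):

import os

def broken_keystream():
    # with the no-op swap the permutation s never changes
    s = list(range(256))
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        # self.swap(s[i], s[j])  # lol! (does nothing)
        yield s[(s[i] + s[j]) % 256]

def blah_iterates(secret, ks):
    # yields blah(secret, 1), blah(secret, 2), ... sharing one keystream
    L, R = secret[:len(secret)//2], secret[len(secret)//2:]
    while True:
        L, R = R, bytes(l ^ r ^ next(ks) for l, r in zip(L, R))
        yield L + R

secret = os.urandom(240)
it = blah_iterates(secret, broken_keystream())
for m in range(1, 1001):
    out = next(it)
    # keystream period divides 512 bytes; 120 bytes per round realign every 64 rounds
    if out == secret and m % 64 == 0:
        break
print(m, 9000**3 % m)   # a small m with 9000**3 % m == 0, hence blah(secret, 9000**3) == secret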

The Last Bitcoin

Proof-of-work Python script that asks you to find a string X such that sha256(random_token + X) starts with 200 zero bits.

That part is straightforward and clearly impossible.

Then you notice that if you give it such a string it exits with error code 1, indicating success. Python also exits with 1 if it gets an uncaught exception. And it will give such an exception if you give it non-ASCII input. So I simply typed æ and got the flag.
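
For flavour, here is a hypothetical sketch of the shape of the bug (none of this is the actual challenge code): the flag dispenser only looks at the process's exit status, and the "success" status happens to be 1, the same one CPython uses for an uncaught exception.

import hashlib, os, sys

token = os.urandom(8).hex()
print(f"give X such that sha256({token} + X) starts with 200 zero bits")
x = input()   # raises if stdin can't be decoded (e.g. 'æ' under an ASCII locale) -> exit status 1
digest = hashlib.sha256((token + x).encode()).digest()
if int.from_bytes(digest, "big") >> (256 - 200) == 0:
    sys.exit(1)   # "success" -- indistinguishable from a crash
sys.exit(0)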

The Danger of Google's Omnipotence

This is really poiko's solve: he did the reversing, provided all the data, and so on. I just helped with the last part, figuring out how to reduce the magic operation to something tractable.

The problem, from a bird's-eye view, is (I was told) something like this:

k = 0xfad9c53c828be5dafc765d4a52a54168442b6f57569db5f320a45d0d0e39d92d04284087fe2e36da1375d55e6e4e9f746cf9d9916c791e0467dc0aedf77581d7e1ab342f99e49f4c684fd7424e806cc2fb1dd54c614487b6a3909dc469f76eb8df050f3928d4c371d8aace5c81fbb1e467b987ec5ae1f5ecd0b8ffe69369edc9
flag = <some 2^16 array> # secret
A = <some 2^16 array> # known

B = flag
for _ in range(k):
  B = strange_matmul(B, A)

# B is output here

Reversing strange_matmul and figuring out what it does is the main problem, which poiko already did. But as a last step we'd need to strength-reduce strange_matmul to some actual math, like a real matrix multiplication for example, and then we have B = flag * A^k which can hopefully be inverted.

poiko gave me sort-of pseudocode for strange_matmul:

def strange_matmul(m, n):
    out = []
    for i in range(256):
        for j in range(256):
            val = 0
            for k in range(256):
                v1 = m[i*256+k]
                v2 = n[k*256+j]
                if v1 == 0 or v2 == 0:
                    continue
                val = strange_add(val, a_inv[(a[v1] + a[v2]) % 7**3])
            out.append(val)
    return out

Where strange_add adds the numbers as if they were in base-7 but without using carry, and a and a_inv were some lookup tables he also shared. I don't remember if they were in the code or something he constructed.
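
Concretely, "base-7 addition without carry" is just adding the base-7 digit vectors coordinate-wise mod 7, i.e. adding coefficients of polynomials over GF(7). A tiny reimplementation of what I mean (my own sketch, not the task's code):

def strange_add(x, y):
    # add corresponding base-7 digits mod 7, never carrying into the next digit
    out, mult = 0, 1
    for _ in range(3):          # values are three base-7 digits, i.e. 0..342
        out += ((x % 7 + y % 7) % 7) * mult
        x, y, mult = x // 7, y // 7, mult * 7
    return out

assert strange_add(1, 6) == 0   # the low digits wrap around mod 7 instead of carrying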

Anyway, so it looks like a normal matrix multiplication, but the element-wise operations aren't carried out using normal arithmetic. So we both played around a little with a and a_inv and there were definite group-like properties. I even wrote a little utility class so I could do arithmetic with such numbers directly and play with them in the REPL. At first it "felt" like a commutative additive group, with 0 almost being the identity except for 0 + 0 = 1, which meant it wasn't really associative either, because (0 + 0) + x = 1 + x while 0 + (0 + x) = x. a was a permutation, but a_inv was not, as a_inv[0] == 1 when it seemed like it should have been 0. This makes sense from the above code where 0 is avoided. Anyway, from strange_add I strongly suspected it was a mapping into the åker3 GF(7^3), where the numbers represented the coefficients of a quadratic polynomial over GF(7). If so, what's the modulus?

If the modulus is some monic cubic m(x), then in GF(7^3) = GF(7)[x]/(m(x)) we have x^3 = x^3 - m(x), so computing -x^3 should reveal m's remaining coefficients.

>>> -fux(7)**3
[6 0 4]

fux was my utility-class, and fux(7) should represent x, so the modulus is x^3 + 6x^2 + 4. (It turns out to also be the default one that sage gives when constructing GF(7^3), so that's probably its origin.) With this intuition behind it the math seems to work out.

So, we had 256x256 matrices over GF(7^3), and calculating the inverse is straightforward but boring sage stuff. It would likely have taken many minutes though, but one thing I lucked out4 on was that I did .multiplicative_order() on A^-1, which sage found quickly as 342. So I could shave several minutes off the calculation by doing flagimg = B*(A^-1)^(k % 342).
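
The final computation was roughly the following (a from-memory sketch in Sage with my own names; in particular the integer-to-field-element mapping below assumes "low base-7 digit = constant coefficient", which might need to be the reverse order):

F = GF(7**3, 't')                  # default modulus x^3 + 6*x^2 + 4, matching the above
def lift(v):                       # 0..342 -> field element via its base-7 digits
    return F([v % 7, (v // 7) % 7, v // 49])

A_ = matrix(F, 256, 256, [lift(v) for v in A])
B_ = matrix(F, 256, 256, [lift(v) for v in B])
Ainv = A_.inverse()
flagimg = B_ * Ainv**(k % 342)     # 342 == Ainv.multiplicative_order()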

Anyway, flagimg was a 256x256 noisy image showing a readable flag.

aeshash & IEAIE & Patience I

I feel these deserve an entry even though I failed all of them, but I did spend time on them. I probably didn't praise the first two tasks in this list enough. They are what I consider actually good crypto tasks of high difficulty (unlike Happy Farm). Especially aeshash! aeshash's task author deserves some real kudos.

For either task I had no prior experience or tooling, so I knew it'd be a lot of work for me. I made poiko waste some of his rev-time to make sure I got aeshash into Python, which I felt sort of bad about… because I only really started to scratch the surface of them. And then for most of Sunday I didn't really have any energy left to continue. However, they're the kind of tasks that make me want to solve them in my leisure time, which is a real treat.

IEAIE was some image encryption scheme using a logistic map and permutations. It was very sensitive to differential analysis. I did a few things and was able to reveal a bunch of key-state, but ran into some floating point rounding headaches in my offline tests. I hate floating point numbers, so I took a sanity break from it and never really got back. For once the relevant papers are given in the task description, so it's not really a Google-the-Paper challenge.

aeshash involved leaking state in a three-round pseudo-AES (each round using the initial state as round key) and then targeting a specific output. Literally: an AES hash. I have no idea about this one, which is all the more exciting. Despite me usually solving the crypto challenges, I've never worked with crypto professionally or academically, so I lack a ton of basic knowledge and experience, such as breaking weakened AES, which is probably like Crypto 101. But I figured it will be a good place to start learning.

I also sunk a ton of time into Patience I, doing proper OpenCV for the first time in my life… God, this task sapped me. The composed text, which I suspect is one third of the flag(?), was easy; I also found this strange Morse-ish sequence but was unable to guess what it really was, and noted the spikes in the lighting but had no idea what they meant either. In the end I even started to fear the minute little camera shakes were also part of it. Please God. Maybe all of these things played together in some beautiful and harmonious way in the end, but really it just felt like three concatenated subtasks that all had to do with video?

Addendum: having seen spoilers on how Patience I is solved (but not the others), I can at least say I don't feel bad anymore for not figuring out this task. It's a bad one. Though I was wrong about the "subtasks" being separate.

The first string you find, the easy part (overlay all the LEDs), was _q!fitlboEc. I mistakenly thought it might be flag[x::3] or something like that. (Thus why I thought the three parts were separate.) But apparently it's an alphabet of sorts, which you index using… I don't know, it was unclear exactly what. And the timing pattern was Morse, but the international extended version with punctuation, making P=B_=_Q=D, which is apparently supposed to mean "replace 'b' with 'p', and 'd' with 'q'"?… The logic or motivation behind any of these things totally escapes me.

I even said to poiko while I was working on this task things like "hm, there's a timing pattern here, it feels a lot like Morse, but I'm actually going to be a little bit disappointed if that turns out to be true…" and "it can't be too escape-room-ish, right? It's Balsn, they're good, they wouldn't make something like that" and so on. Disappointed!

babyrev

Again, poiko solved this one, but I helped out a tiny little bit at the end, so I feel comfortable writing what it was about.

Apparently a Scala program that seemed to construct an infinite stream by doing what I would write in Haskell as:

w = [[0,1,2,3], [0], [0], [0]]
stream = [0] : map (concat . map (w !!)) stream

It makes an infinite stream of lists like [0] : [0,1,2,3] : [0,1,2,3,0,0,0] : [0,1,2,3,0,0,0,0,1,2,3,0,1,2,3,0,1,2,3] : .... It sums each sublist in this stream, extracts the sum at index 60107, and takes its remainder under some modulus. It converts the resulting number to 4 bytes which becomes an xor-key for flag data.

Immediately I smelled a recurrence relation for the sum. Tried and true OEIS tells the truth5: a(n) = a(n-1) + 3·a(n-2), and so starting from 0 and 6 (the sums of the first two lists) gives all that's needed. It can be calculated linearly in a list, or logarithmically by exponentiation of a 2x2 matrix.
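
In Python the linear version is a few lines (a sketch; the modulus M and the exact indexing are my assumptions here, I don't remember what the Scala code actually reduced by):

def list_sum(i, M=2**32):          # M = 2**32 is an assumption
    a, b = 0, 6                    # sums of [0] and [0,1,2,3]
    for _ in range(i):
        a, b = b, (b + 3 * a) % M  # S(n) = S(n-1) + 3*S(n-2)
    return a

key = list_sum(60107).to_bytes(4, 'big')   # the 4-byte xor key for the flag data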

  1. I suspect Show your Patience and Intelligence I is another one, but since I only partially solved it, I couldn't really say outright. See also https://github.com/BookGin/my-ctf-challenges/tree/master/balsn-ctf-2019/images-and-words from last year's BalsnCTF, which—don't get me wrong—is a really cool and mind-blowing task, but you need three fairly independent key insights to solve it. Although I guess in web this kind of stuff is more like the standard way of doing things?

  2. because the domain knowledge required for the entire thing grows linearly with unrelated sub-tasks added. This is another reason why I don't like mixed challenges. Crypto behind web, misc behind rev, etc. It feels like a sort of "miserly" way of forcing the solve count to stay low even though the subtasks are themselves easy or unoriginal. Generalists and people with "broad" CTF-knowledge are already greatly rewarded and have a (fair!) advantage over more specialized noobs (like me), but this doubles down on that advantage on the task level.

  3. Norwegian for 'field'

  4. I say I lucked out because my sage seems to hit a bug when I do A.multiplicative_order() directly. It just hangs and churns CPU for over a minute, so most likely I would have given up on this avenue. Who knows why sage does what it does sometimes. :its_a_mystery:

  5. looking it up on OEIS felt like cheating but it saved a couple of minutes. It's clear in hindsight, as the list goes: A, B, BAAA, BAAABBB, BAAABBBBAAABAAABAAA and so on.