// Floating Point Guardian
Challenge: floating-point guardian (AI category)
Author: tryhard
Service: ncat --ssl floating.chals.nitectf25.live 1337
Flag format: nite{...}
>Summary
This challenge implements a small, deterministic neural network in C that acts as a gatekeeper. The program asks for 15 floating-point answers and computes a probability; if the probability equals a hard-coded target (within a small epsilon), it prints the flag.
To solve it we: (1) reverse-engineered the network from the provided src.c, (2) implemented the same forward pass locally in Python, (3) searched for an input vector that produces the target probability, and (4) submitted those inputs to the remote service to obtain the flag.
>What the binary does (analysis)
Key points from src.c:
- Input: 15 double values.
- Three layers: Input→Hidden1(8)→Hidden2(6)→Output(1).
- Activations on input elements depend on index modulo 4:
  - i%4 == 0: `xor_activate(x, key)` — converts `x` to a fixed-point integer `long_val = (long)(x * 1000000)`, XORs it with `key` (a per-input constant), then converts back to a double by dividing by `1e6`.
  - i%4 == 1: `tanh(x)`
  - i%4 == 2: `cos(x)`
  - i%4 == 3: `sinh(x / 10.0)`
- All hidden activations use `tanh`.
- The output uses a linear combiner + `sigmoid`.
- The target probability is `TARGET_PROBABILITY = 0.7331337420` with `EPSILON = 1e-5`; the program checks `fabs(probability - TARGET_PROBABILITY) < EPSILON`.
Important observation: the inputs at the `xor_activate` indices are quantized to micro-units (1e-6) and then XORed with fixed integer keys, so they are effectively discretized: they can only change the output when the integer representation `(long)(x * 1e6)` changes.
This mix of discrete (XOR) and continuous (tanh/cos/sinh) inputs informed our solving strategy.
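A minimal standalone sketch of this quantization effect (the function mirrors the `xor_activate` logic described above; the sample values are illustrative, not from the challenge):

```python
def xor_activate(x, key):
    long_val = int(x * 1_000_000)    # truncate to micro-units, like the C (long) cast
    long_val ^= key                  # XOR with the per-input key
    return long_val / 1_000_000.0    # convert back to a double

# Two inputs that share the same micro-unit integer produce identical outputs:
a = xor_activate(0.0001234, 0x42)   # int(x * 1e6) == 123
b = xor_activate(0.0001239, 0x42)   # int(x * 1e6) == 123 as well
# a == b: sub-micro-unit differences are invisible to the network,
# so these inputs only move the output in discrete 1e-6 steps.
```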
>Solving strategy
- Re-implement the forward pass in Python so we could quickly evaluate candidates without recompiling the binary.
- Compute the desired pre-sigmoid value (logit):
z = logit(TARGET) = ln(TARGET / (1 - TARGET)) ≈ 1.0105805171
We need the linear output (before sigmoid) to be as close to this target logit as possible.
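The logit quoted above can be sanity-checked in one line (this uses only the constants already given; applying the sigmoid to the result recovers the target probability):

```python
import math

TARGET = 0.7331337420
# Inverse of the sigmoid at TARGET: if sigmoid(z) = TARGET,
# then z = ln(TARGET / (1 - TARGET)).
z = math.log(TARGET / (1 - TARGET))
# z ≈ 1.0105805, matching the target logit in the strategy above.
```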
- Start from a reasonable initial guess where many activations are near neutral:
  - For `cos` inputs (i%4 == 2), use x = π/2 so cos(x) ≈ 0.
  - For `xor` inputs (i%4 == 0), start with x = key / 1e6, so `xor_activate(x, key)` yields 0 (the integer representation is exactly the key, and key XOR key = 0, giving a neutral activation).
  - For the other inputs, start at 0.
- Use a randomized local search / hillclimbing procedure that:
  - adjusts continuous inputs by small Gaussian noise;
  - adjusts `xor` inputs by integer increments (they are discrete in steps of 1e-6) to account for the XOR behavior;
  - keeps the best candidate (minimizing |prob - TARGET|) and reduces the step size over time.
This approach found an input vector that produces the required probability within the epsilon.
>Solver scripts included in this repo
- `solve_local.py` — reproduces the forward pass in Python and runs the randomized search/hillclimb to find inputs producing the target probability.
- `solve_remote.sh` — a small helper showing how to send the found inputs to the remote service using `ncat --ssl`.
Both are included verbatim below and in the repo files.
>solve_local.py (full solver)
```python
#!/usr/bin/env python3
# solve_local.py
# Reimplementation of the model's forward pass and a randomized search
# to match the target probability.
import math
import random

TARGET = 0.7331337420
EPS = 1e-7  # tighter than the binary's EPSILON = 1e-5; not used below
random.seed(1)

XOR_KEYS = [0x42, 0x13, 0x37, 0x99, 0x21, 0x88, 0x45, 0x67,
            0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE]

# weight and bias tables copied exactly from src.c
W1 = [
    [0.523, -0.891, 0.234, 0.667, -0.445, 0.789, -0.123, 0.456],
    [-0.334, 0.778, -0.556, 0.223, 0.889, -0.667, 0.445, -0.221],
    [0.667, -0.234, 0.891, -0.445, 0.123, 0.556, -0.789, 0.334],
    [-0.778, 0.445, -0.223, 0.889, -0.556, 0.234, 0.667, -0.891],
    [0.123, -0.667, 0.889, -0.334, 0.556, -0.778, 0.445, 0.223],
    [-0.891, 0.556, -0.445, 0.778, -0.223, 0.334, -0.667, 0.889],
    [0.445, -0.123, 0.667, -0.889, 0.334, -0.556, 0.778, -0.234],
    [-0.556, 0.889, -0.334, 0.445, -0.778, 0.667, -0.223, 0.123],
    [0.778, -0.445, 0.556, -0.667, 0.223, -0.889, 0.334, -0.445],
    [-0.223, 0.667, -0.778, 0.334, -0.445, 0.556, -0.889, 0.778],
    [0.889, -0.334, 0.445, -0.556, 0.667, -0.223, 0.123, -0.667],
    [-0.445, 0.223, -0.889, 0.778, -0.334, 0.445, -0.556, 0.889],
    [0.334, -0.778, 0.223, -0.445, 0.889, -0.667, 0.556, -0.123],
    [-0.667, 0.889, -0.445, 0.223, -0.556, 0.778, -0.334, 0.667],
    [0.556, -0.223, 0.778, -0.889, 0.445, -0.334, 0.889, -0.556]
]
B1 = [0.1, -0.2, 0.3, -0.15, 0.25, -0.35, 0.18, -0.28]
W2 = [
    [0.712, -0.534, 0.823, -0.445, 0.667, -0.389],
    [-0.623, 0.889, -0.456, 0.734, -0.567, 0.445],
    [0.534, -0.712, 0.389, -0.823, 0.456, -0.667],
    [-0.889, 0.456, -0.734, 0.567, -0.623, 0.823],
    [0.445, -0.667, 0.823, -0.389, 0.712, -0.534],
    [-0.734, 0.623, -0.567, 0.889, -0.456, 0.389],
    [0.667, -0.389, 0.534, -0.712, 0.623, -0.823],
    [-0.456, 0.823, -0.667, 0.445, -0.889, 0.734]
]
B2 = [0.05, -0.12, 0.18, -0.08, 0.22, -0.16]
W3 = [[0.923], [-0.812], [0.745], [-0.634], [0.856], [-0.723]]
B3 = [0.42]

# Activations:
def xor_activate(x, key):
    long_val = int(x * 1_000_000)
    long_val ^= key
    return long_val / 1_000_000.0

def forward(inputs):
    # hidden layer 1
    h1 = [0.0] * 8
    for j in range(8):
        for i in range(15):
            mod = i % 4
            if mod == 0:
                a = xor_activate(inputs[i], XOR_KEYS[i])
            elif mod == 1:
                a = math.tanh(inputs[i])
            elif mod == 2:
                a = math.cos(inputs[i])
            else:
                a = math.sinh(inputs[i] / 10.0)
            h1[j] += a * W1[i][j]
        h1[j] += B1[j]
        h1[j] = math.tanh(h1[j])
    # hidden layer 2
    h2 = [0.0] * 6
    for j in range(6):
        for i in range(8):
            h2[j] += h1[i] * W2[i][j]
        h2[j] += B2[j]
        h2[j] = math.tanh(h2[j])
    out = sum(h2[i] * W3[i][0] for i in range(6)) + B3[0]
    prob = 1.0 / (1.0 + math.exp(-out))
    return prob

# Randomized search/hillclimb
def search(iterations=120000):
    # initial guess: set cos inputs to pi/2 (cos≈0), xor inputs to key/1e6
    inp = [0.0] * 15
    for idx in [0, 4, 8, 12]:
        inp[idx] = XOR_KEYS[idx] / 1_000_000.0
    for idx in [2, 6, 10, 14]:
        inp[idx] = math.pi / 2
    best = inp[:]
    best_prob = forward(best)
    best_err = abs(best_prob - TARGET)
    scale = 1.0
    for step in range(iterations):
        cand = best[:]
        idx = random.randrange(15)
        if idx % 4 == 0:
            # discrete step in micro-units
            delta = random.randint(-30, 30)
            cand[idx] = max(0.0, cand[idx] + delta / 1_000_000.0)
        else:
            cand[idx] += random.gauss(0, scale)
        prob = forward(cand)
        err = abs(prob - TARGET)
        if err < best_err:
            best_err = err
            best = cand
            best_prob = prob
        if step % 20000 == 0 and step > 0:
            scale *= 0.7
    return best, best_prob, best_err

if __name__ == '__main__':
    best, prob, err = search()
    print('best prob:', prob)
    print('err:', err)
    print('\nInputs (Q1..Q15):')
    for v in best:
        print(repr(v))
```
>solve_remote.sh (how to send inputs to remote)
```bash
#!/usr/bin/env bash
# Replace HOST and PORT if different
HOST=floating.chals.nitectf25.live
PORT=1337

# The exact inputs we found (one per line):
cat <<EOF | ncat --ssl "$HOST" "$PORT"
0.000107
-3.158916950799659
1.5707963267948966
0.2950895004463653
0.00006
0.010848294004179542
1.7320580997363282
0.07781283712174002
0
0
1.5707963267948966
1.40376119519013
0.000169
0
1.5707963267948966
EOF
```
>Results
- Local: compiled and ran `src.c` with the found inputs; the binary printed `MASTER PROBABILITY: 0.7331338299` and reached `print_flag()`.
- Remote: sent the same inputs to `ncat --ssl floating.chals.nitectf25.live 1337` and received the flag:

nite{br0_i5_n0t_g0nn4_b3_t4K1n6_any1s_j0bs_34x}
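As a closing sanity check, the binary's acceptance test can be replayed on the locally observed probability (constants taken from the analysis above):

```python
TARGET_PROBABILITY = 0.7331337420
EPSILON = 1e-5
observed = 0.7331338299  # MASTER PROBABILITY printed by the local run

# The same check the binary performs: fabs(probability - TARGET_PROBABILITY) < EPSILON.
accepted = abs(observed - TARGET_PROBABILITY) < EPSILON
# accepted is True: the error is about 8.8e-8, well under the 1e-5 epsilon.
```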