Finding Vulnerabilities Through Fuzzing

Overview

Fuzzing can feel a bit front‑loaded: you may spend time wiring harnesses and running campaigns without immediately finding exciting new bugs, especially on hardened or well‑tested targets. That’s normal, and it's one reason the next week on patch diffing often feels more directly "practical" — many companies already run large fuzzing setups and need people who can understand and exploit the bugs those systems uncover. Still, working through this week is important: it teaches you how fuzzers actually discover real vulnerabilities, so when you later triage crashes or study patches, you'll have a solid intuition for how those bugs were found and how to reproduce them.

Prerequisites

Before starting this week, ensure you have:

  • A Linux virtual machine (Ubuntu 24.04 recommended) with at least 8 GB RAM and 8 CPU cores

  • Basic understanding of C/C++ programming

  • Familiarity with command-line tools and debugging (GDB basics)

  • Understanding of memory corruption vulnerabilities (from Week 1)

Day 1: Introduction to Fuzzing

Success Criteria:

  • AFL++ compiles and installs without errors

  • Both fuzzing sessions start successfully

  • You can see the AFL++ status screen showing paths found, crashes, etc.

  • Check out/crashes/ directory for any discovered crashes

Troubleshooting:

  • If afl-clang-fast is not found: Check that /usr/local/bin/ is in PATH

  • If compilation fails: Ensure LLVM 19 is properly installed (clang-19 --version)

  • If fuzzer doesn't start: Check CPU scaling governor (echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor)

Real-World Impact: AFL++ Finding CVE-2024-47606 (GStreamer)

Background: AFL++ and similar fuzzers are actively used to find vulnerabilities in production software. Let's examine a real case from Week 1.

Case Study - CVE-2024-47606 (GStreamer Signed-to-Unsigned Integer Underflow):

  • Discovery Method: Continuous fuzzing campaigns by security researchers using AFL++ on media parsers

  • The Bug: GStreamer's qtdemux_parse_theora_extension had a signed integer underflow that became a massive unsigned value

  • Attack Surface: MP4/MOV files processed automatically by browsers, media players, messaging apps

  • Fuzzing Approach:

    1. Target: GStreamer's QuickTime demuxer (qtdemux)

    2. Seed corpus: Valid MP4 files from public datasets

    3. Instrumentation: Compiled with AFL++ and AddressSanitizer

    4. Mutation strategy: Structure-aware (understanding MP4 atoms)

    5. Result: Heap buffer overflow crash after ~48 hours of fuzzing

Why Fuzzing Found It:

  • Rare Input Combination: Required specific Theora extension size values that underflow

  • Static Analysis Limitation: Signed-to-unsigned conversion buried in complex parsing logic

  • Code Review Miss: Integer arithmetic looked correct without considering negative values

  • Automated Testing Gap: Unit tests didn't cover malformed Theora extensions

The Discovery Process:

Key Insight: Fuzzing excels at finding edge cases in complex parsers that humans would never manually test. The combination of:

  • Coverage-guided mutation (AFL++ exploring new code paths)

  • AddressSanitizer (detecting memory corruption immediately)

  • Persistent fuzzing (running for days/weeks)

...makes it more effective than manual testing for this vulnerability class.

Key Takeaways

  1. Fuzzing finds real vulnerabilities: Not just theoretical crashes, but exploitable bugs in production software

  2. Coverage-guided fuzzing is powerful: AFL++ intelligently explores code paths rather than random mutation

  3. Sanitizers are essential: ASAN, UBSAN turn subtle bugs into immediate crashes

  4. Time matters: Many bugs require hours/days of fuzzing to discover

  5. Seed corpus quality affects results: Starting with valid inputs helps reach deeper code paths

Discussion Questions

  1. Why did fuzzing find CVE-2024-47606 when code review and unit testing didn't?

  2. What advantages does coverage-guided fuzzing have over purely random fuzzing?

  3. How do sanitizers (ASAN, UBSAN) enhance fuzzing effectiveness?

  4. What types of vulnerabilities are fuzzing best suited to find? What types does it miss?

  5. How can seed corpus selection impact fuzzing effectiveness?

Day 2: Continue Fuzzing with AFL++

  • Goal: Understand and apply advanced fuzzing techniques.

  • Activities:

    • Reading: Continue with "Fuzzing for Software Security Testing and Quality Assurance" (From 3.3 to 3.9.8).

    • Real-World Examples:

    • Exercise:

      • Experiment with different AFL++ options (for example, dictionary-based fuzzing, persistent mode).

      • Run AFL++ against a real-world application, such as a file format parser, to mimic realistic scenarios.

      • Optionally, target an image or media parser so you can practice finding heap overflows and out-of-bounds reads similar to the libWebP and GStreamer bugs from Week 1.

Expected Outputs:

  • AFL++ status screen showing increasing coverage

  • Crashes appearing in fuzz/image/out/Master/crashes/ or fuzz/image/out/Slave1/crashes/

  • AddressSanitizer reports for memory corruption bugs

What to Look For:

  • Crashes with SIGSEGV or SIGABRT signals

  • AddressSanitizer reports showing heap buffer overflows, use-after-free, etc.

  • Unique crash signatures (different stack traces)

Troubleshooting:

  • If compilation fails: Check that all dependencies are installed

  • If no crashes found: Let the fuzzer run longer (hours/days for real targets)

  • If crashes are false positives: Review ASAN options and adjust

Real-World Campaign: Fuzzing Image Parsers

Case Study - CVE-2023-4863 (libWebP Heap Buffer Overflow):

From Week 1, you learned about this critical vulnerability. Let's understand how fuzzing could have (and did) discover similar bugs.

  • The Target: libWebP image decoder, used by Chrome, Firefox, and countless applications

  • Why It's Fuzzing-Friendly:

    • Pure input-to-output: takes file bytes, produces image

    • No network/filesystem dependencies

    • Deterministic execution

    • Complex parsing logic with many edge cases

Fuzzing Campaign Strategy:

What Fuzzing Discovered:

In the real CVE-2023-4863 case:

  1. Initial crash: Heap buffer overflow in BuildHuffmanTable()

  2. Root cause: Malformed Huffman coding data caused out-of-bounds write

  3. ASAN output: Immediate detection of corruption with exact location

  4. Exploitability: Function pointer hijack possible via heap corruption

Why This Bug Survived Testing:

  • Unit tests: Covered valid WebP files, not malformed Huffman tables

  • Static analysis: Complex pointer arithmetic hard to verify

  • Code review: Bounds check looked correct in isolation

  • Fuzzing advantage: Generated millions of mutated WebP files, including edge cases

Parallel Fuzzing for Speed:

Corpus Management and Seed Selection

Why Seed Quality Matters:

Building Effective Seed Corpus:

Key Takeaways

  1. Image parsers are prime fuzzing targets: Complex, widely-deployed, handle untrusted input

  2. OSS-Fuzz prevents 0-days: Continuous fuzzing finds bugs before attackers

  3. Parallel fuzzing scales nearly linearly: 8 cores gives roughly 8x throughput

  4. Corpus quality > corpus size: Minimized, diverse seeds outperform a large random corpus

  5. Dictionaries accelerate discovery: Format-aware tokens reach deeper code paths faster

Discussion Questions

  1. Why are image/media parsers particularly well-suited for fuzzing compared to other software?

  2. How does corpus minimization improve fuzzing efficiency without losing coverage?

  3. What trade-offs exist between fuzzing speed (lightweight instrumentation) and bug detection (heavy sanitizers)?

  4. Why did OSS-Fuzz find bugs in libwebp that years of production use didn't reveal?

  5. How can you determine if a fuzzing campaign has reached diminishing returns and should target a different component?

  6. How can you improve fuzzing speed?

Day 3: Introduction to Google FuzzTest

  • Goal: Understand in-process fuzzing with FuzzTest and how to turn unit tests into coverage-guided fuzzers that actually find memory corruption bugs.

  • Activities:

    • Reading: Continue with "Fuzzing for Software Security Testing and Quality Assurance" (From 4.2.1 to 4.4).

    • Online Resources:

    • Exercises:

      1. Set up FuzzTest in a small CMake project and run a trivial property-based test.

      2. Use FuzzTest + AddressSanitizer to rediscover a simple heap buffer overflow (Week 1 vulnerability class).

      3. Extend the fuzz target to cover a small parser-style function, similar to the image/format parsers from Days 1–2.

Why FuzzTest in a vulnerability-focused course?

FuzzTest is a unit-test-style, in-process fuzzing framework from Google that:

  • Integrates with GoogleTest: You write TEST and FUZZ_TEST side by side in the same file.

  • Uses coverage-guided fuzzing under the hood (libFuzzer-style) but hides boilerplate harness code.

  • Works great for libraries and core logic (parsers, decoders, crypto helpers) where you already have unit tests.

  • Is ideal for CI: The same binary can run fast deterministic tests or long-running fuzz campaigns depending on flags.

Where AFL++/Honggfuzz are great for whole programs and black-box binaries, FuzzTest shines when you have source code and want to fuzz individual C++ functions directly.

Lab 1: Set up FuzzTest and run a basic property

You should see FuzzTest/libFuzzer-style statistics (executions per second, coverage, etc.). For a correct property like integer commutativity, the fuzzer should not find crashes.

Lab 2: FuzzTest to find a heap buffer overflow

Now turn FuzzTest onto a deliberately vulnerable function that mimics a classic stack / heap buffer overflow from Week 1.

Expected result: After a short time, FuzzTest should report a crash with an AddressSanitizer message similar to:

At this point you can:

  • Open first_fuzz_test.cc and fix the bug by adding a length check (for example, only copying up to sizeof(header)).

  • Rebuild and re-run the fuzz target to confirm the crash is gone.

This is exactly the same pattern as our AFL++ labs: fuzzer + sanitizer → crash → root cause → fix, but now entirely inside a unit test binary.

Lab 3: Fuzzing a small parser-style function

To connect FuzzTest to the real-world parsers from Days 1–2, fuzz a tiny length-prefixed parser that can easily go wrong if you mishandle integer arithmetic.

What to look for:

  • If you intentionally break the length check inside ParseMessage (for example, remove the if (len > input.size() - 1) guard), FuzzTest + ASAN/UBSAN should quickly find crashes or undefined behavior.

  • Try modifying the parser to add more fields (flags, type bytes, nested length fields) and see how FuzzTest finds edge cases you did not think about.

Key Takeaways

  1. FuzzTest brings fuzzing into your unit tests: You can turn GoogleTest-style tests into coverage-guided fuzzers with FUZZ_TEST, using the same build system and test runner.

  2. Sanitizers are critical: Combining FuzzTest with ASAN/UBSAN turns memory bugs (overflows, UAFs, integer issues) into immediate, reproducible crashes.

  3. Great fit for parsers and core logic: Short, pure C++ functions (parsers, decoders, protocol handlers) are ideal FuzzTest targets, similar to the real-world parsers from Days 1–2.

  4. Properties > examples: Expressing invariants like “never crash” or “length field matches payload” lets the fuzzer explore inputs you would never hand-write.

  5. Same workflow as other fuzzers: Regardless of tool (AFL++, Honggfuzz, FuzzTest), the basic loop is still fuzz → crash → triage → exploitability → fix.

Discussion Questions

  1. In what situations would you prefer FuzzTest over a process-level fuzzer like AFL++ or Honggfuzz, and why?

  2. How would you go about converting an existing GoogleTest regression test into an effective FUZZ_TEST that can find new bugs, not just regressions?

  3. Which vulnerability classes from Week 1 (e.g., buffer overflows, integer overflows, UAF) are especially well-suited to FuzzTest, and which are harder to reach with this style of in-process fuzzing?

  4. How could you integrate short, time-bounded FuzzTest runs into a CI pipeline without making builds too slow, while still having longer campaigns on dedicated fuzzing machines?

  5. When writing properties like LengthFieldRespected, what kinds of mistakes in the property itself might cause you to miss real bugs or report lots of false positives?

Day 4: Introduction to Honggfuzz

Success Criteria:

  • Honggfuzz compiles and installs successfully

  • OpenSSL builds with fuzzing support

  • Fuzzers compile without errors

  • Honggfuzz starts and shows coverage statistics

What to Look For:

  • Coverage metrics increasing over time

  • Crashes in the working directory

  • Different crash types (heap overflow, use-after-free, etc.)

Note: Real OpenSSL fuzzing often runs for days/weeks. For this exercise, run for at least 30 minutes to see initial results.

Real-World Impact: Honggfuzz Finding TLS Vulnerabilities

Case Study - Heartbleed-Class Bugs in TLS Implementations:

While Heartbleed (CVE-2014-0160) predates modern fuzzing tools, similar vulnerabilities continue to be found through continuous fuzzing campaigns.

Why TLS is Hard to Fuzz:

  • Stateful protocol: Must complete handshake before reaching deep logic

  • Cryptographic operations: Random values, signatures, MACs

  • Multiple versions: TLS 1.0, 1.1, 1.2, 1.3 with different code paths

  • Extensions: ALPN, SNI, session tickets, early data, etc.

Honggfuzz Advantages for Network Protocols:

Real Bugs Found by Protocol Fuzzing:

From OpenSSL and other TLS implementations:

  • Buffer overflows in certificate parsing: X.509 extension handling

  • Use-after-free in session resumption: Ticket lifetime management

  • Integer overflows in record layer: Length calculations

  • State confusion bugs: Unexpected message ordering

Example: CVE-2022-0778 (OpenSSL Infinite Loop):

Fuzzing vs Real-World Exposure:

Metric               | Production Use (10 years) | OSS-Fuzz (1 year)
---------------------|---------------------------|------------------
Total connections    | Billions                  | 0 (pure fuzzing)
Unique inputs tested | ~1,000 (typical sites)    | Trillions
Edge cases covered   | <1%                       | >90%
Bugs found           | ~5 (via exploits)         | ~50

Key Insight: Fuzzing explores input space breadth that production traffic never reaches.

Key Takeaways

  1. Honggfuzz excels at complex targets: Multi-threaded, persistent mode, hardware-assisted coverage

  2. Protocol fuzzing requires stateful harnesses: Must reach deep code paths beyond initial parsing

  3. Continuous fuzzing prevents regressions: OSS-Fuzz runs 24/7, catches new bugs in code changes

  4. Cryptographic code is fragile: Parsers for ASN.1, X.509, PEM frequently have bugs

  5. Timeout detection finds DoS bugs: Infinite loops, algorithmic complexity issues

Discussion Questions

  1. Why does fuzzing find TLS bugs that years of production use don't reveal?

  2. What makes protocol fuzzing (TLS, HTTP/2, DNS) more challenging than file format fuzzing?

  3. How does hardware-assisted coverage (Intel PT) improve fuzzing effectiveness?

  4. What are the limitations of fuzzing for finding cryptographic vulnerabilities vs implementation bugs?

Day 5: Introduction to Syzkaller

Success Criteria:

  • Kernel compiles successfully with KASAN and KCOV enabled

  • VM image creates without errors

  • VM boots and is accessible via SSH

  • Syzkaller manager starts and shows web interface

  • Web interface displays fuzzing statistics

Expected Outputs:

  • Web interface showing: exec total, crashes, coverage, etc.

  • Crashes appearing in workdir/crashes/ directory

  • Kernel oops messages in VM logs

Troubleshooting:

  • If kernel doesn't boot: Check QEMU/KVM is enabled (lsmod | grep kvm)

  • If syzkaller can't connect: Verify SSH key permissions (chmod 600 trixie.id_rsa)

  • If no crashes: Let it run longer - kernel fuzzing takes time

  • Memory issues: Reduce VM count in config if system runs out of RAM

Real-World Impact: Syzkaller's Contribution to Kernel Security

Case Study - CVE-2022-32250 (Linux Netfilter Use-After-Free):

From Week 1, you learned about this vulnerability. Here's how syzkaller discovered it:

  • Target: net/netfilter/nf_tables_api.c - Linux firewall subsystem

  • Discovery Date: May 2022

  • Fuzzing Duration: ~72 hours from code introduction to crash

  • Root Cause: Reference counting error in stateful expression handling

The Discovery Process:

Why Syzkaller Found It:

  1. Syscall coverage: Tests all netfilter operations systematically

  2. Sequence exploration: Tries millions of syscall orderings

  3. State tracking: Maintains kernel state across operations

  4. KASAN integration: Immediate detection of memory corruption

  5. Reproducibility: Generates C reproducer for developers

The Reproducer (simplified):

Impact: Local privilege escalation from any user to root on systems with unprivileged user namespaces (default on Ubuntu, Debian). Public exploit available within weeks.

Case Study - CVE-2023-32629 (Linux Netfilter Race Condition):

  • Target: net/netfilter/nf_tables_api.c - Same subsystem, different bug

  • Bug Class: Race condition in batch transaction handling

  • Discovery: Syzkaller's multi-threaded syscall fuzzing

  • Impact: Container escape + privilege escalation

How Syzkaller Finds Race Conditions:

Syzkaller's Advantages for Kernel Fuzzing:

  1. Syscall descriptions: Domain-specific language for kernel APIs

  2. Coverage-guided: Tracks code coverage to explore new paths

  3. Multi-threaded: Finds race conditions naturally

  4. VM-based isolation: Kernel crashes don't affect fuzzer

  5. Reproducers: Automatic generation of minimal C reproducers

  6. Bisection: Automatically finds introducing commit

Analyzing a Syzkaller Bug:

Key Insight: Kernel attack surface is massive. Syzkaller's systematic approach finds bugs that would take years of manual testing.

Key Takeaways

  1. Syzkaller revolutionized kernel security: Found 4,500+ bugs that manual testing missed

  2. Syscall fuzzing requires domain knowledge: Must understand kernel APIs to fuzz effectively

  3. Race conditions need parallel execution: Multi-threaded fuzzing essential

  4. VM isolation is critical: Kernel crashes would kill the fuzzer otherwise

  5. Reproducers enable fixing: Minimal C programs allow developers to debug quickly

Discussion Questions

  1. Why has syzkaller found thousands of kernel bugs that years of production use didn't reveal?

  2. How does syzkaller's syscall description language enable effective kernel fuzzing?

  3. What makes race condition detection particularly valuable in kernel fuzzing?

  4. Why are networking subsystems (netfilter, inet) the most frequent sources of vulnerabilities?

  5. How do user namespaces make kernel vulnerabilities more dangerous by increasing exploitability?

Day 6: Crash Analysis and Exploitability Assessment

Understanding Crash Analysis Tools

Case Study 1: Analyzing Heap Buffer Overflow

Scenario: Fuzzing discovered a crash in an image parser. Let's perform complete analysis.

ASAN Output Analysis:

Interpreting the ASAN Report:

  1. Bug Type: heap-buffer-overflow

  2. Operation: WRITE of size 512

  3. Location: vuln_parser.c:16 in build_huffman_table()

  4. Allocation: 256-byte buffer at line 12

  5. Overflow: Writing 512 bytes into a 256-byte buffer = 256 bytes of overflow

Root Cause Analysis:

Exploitability Assessment:

Exploitability Classification: EXPLOITABLE

Reasoning:

  1. Attacker controls overflow size: table_size from input

  2. Attacker controls overflow data: codes array content

  3. Heap corruption possible: Can overwrite adjacent objects

  4. Exploitation path:

    • Overflow into adjacent heap object

    • Corrupt function pointer or vtable

    • Hijack control flow

    • Execute arbitrary code

Real-World Example: Similar to CVE-2023-4863 (libWebP Heap Buffer Overflow) from Week 1.

Case Study 2: Use-After-Free Analysis

ASAN Output:

Exploitability Assessment:

Exploitability Classification: EXPLOITABLE (Verified via manual analysis)

Note: Automated tools like CASR may label this NOT_EXPLOITABLE because ASan instruments the function pointer read before the call. Manual verification (as shown above) proves control flow hijack is possible.

Exploitation Strategy:

  1. Heap grooming: Allocate/free to position objects

  2. Reclaim freed memory: Allocate object of same size (requires bypassing ASan quarantine in lab)

  3. Control freed memory contents: Fill with attacker data

  4. Trigger UAF: Call call_handler()

  5. Function pointer hijack: handler->process points to attacker-controlled address

  6. Result: Arbitrary code execution

Real-World Example: Similar to CVE-2024-2883 (Chrome ANGLE UAF) from Week 1.

Case Study 3: Integer Overflow Leading to Heap Corruption

Analysis:

Root Cause:

  1. Integer Overflow: width * height overflows 32-bit integer range, wrapping to 0.

  2. Under-allocation: malloc(0) allocates a tiny chunk.

  3. Logic Mismatch: Loop uses proper 64-bit bounds (or nested loops), iterating 4 billion times.

  4. Heap Corruption: Loop writes far beyond the allocated chunk.

Exploitability Assessment:

Exploitability Classification: EXPLOITABLE

Exploitation Strategy:

  1. Heap Grooming: Allocate a sensitive object (e.g., a structure with a function pointer) immediately after the vulnerable 0-byte allocation.

  2. Trigger Overflow: Send input with dimensions 0x10000 * 0x10000 to cause integer overflow -> malloc(0).

  3. Overwrite: The loop writes attacker data (fake_data) into the adjacent sensitive object.

  4. Hijack: Trigger the use of the corrupted object (e.g., call the function pointer).

Building Proof-of-Concept Exploits

Heap Overflow Example

Use-After-Free Example

Expected console output:

To force an outright crash (showing instruction-pointer control), change the spray in attacker_groom_heap() to:

Running ./vuln_uaf_nosan now ends with a segfault at 0x4141414141414141, demonstrating control-flow hijack without AddressSanitizer interfering.

Key Takeaways

  1. Sanitized vs non-sanitized builds: Use ASan/UBSan/KASAN builds for triage and root-cause, then switch to no-sanitizer builds (with knobs like ASAN_OPTIONS or MALLOC_CHECK_) to study realistic heap behavior and exploitation.

  2. Automated ratings are heuristics: CASR’s Type/Severity fields are a starting point only; Case Study 2 showed a UAF rated NOT_EXPLOITABLE even though a function pointer hijack is clearly possible.

  3. Crash location vs root cause: Tools often stop at the first invalid access (e.g., a read from freed memory) while the real exploit primitive (e.g., control-flow hijack) may be one instruction later.

  4. Exploitability hinges on control: In all three case studies, exploitation becomes realistic when the attacker controls sizes (length, dimensions) and data that drive allocation and memory writes.

  5. Systematic PoC development: The path is always fuzz → crash → triage (ASan + CASR) → root cause → minimal reproducer → exploit PoC (heap metadata or function pointer overwrite).

Discussion Questions

  1. In your own workflow, when would you prefer to keep AddressSanitizer enabled, and when would you switch to a no-sanitizer build while evaluating exploitability?

  2. How could CASR’s NOT_EXPLOITABLE rating for the UAF case mislead a less experienced analyst, and what manual checks (in GDB) prevent that mistake?

  3. In the integer-overflow case, which variables and addresses would you inspect in GDB to confirm both under-allocation and the ensuing heap overwrite?

  4. How does the exact crash site (e.g., first invalid read vs later jump through a corrupted function pointer) change your assessment of exploitability and which tools notice it?

  5. How do modern mitigations (ASLR, DEP, hardened allocators, CFI) interact with the exploitation strategies you used in Case Studies 1–3, and what extra steps would be needed in a real target?

Further Reading

Blog Posts and Case Studies

Practice Targets

Day 7: Fuzzing Harness Development and Real-World Campaigns

What is a Fuzzing Harness?

A fuzzing harness is the code that:

  1. Receives fuzzer-generated input

  2. Prepares that input for the target API

  3. Calls the target functionality

  4. Handles errors/cleanup

Example - Bad Harness vs Good Harness:

Case Study: Writing Harness for JSON Parser

Target: json-c library (real-world JSON parser)

Harness Design Principles Applied

  1. In-process execution: LLVMFuzzerTestOneInput - no fork/exec overhead

  2. Direct API targeting: Calls json_tokener_parse_ex directly

  3. Coverage maximization: Exercises multiple code paths (objects, arrays, serialization)

  4. Proper cleanup: Frees allocated memory to avoid OOM

  5. Sanitizer-friendly: Works with ASAN/UBSAN for bug detection

Case Study: Fuzzing Archive Extractors

While CVE-2023-38831 was in closed-source WinRAR, let's fuzz open-source alternatives with similar architectures.

What This Campaign Targets:

  1. Format parsing bugs: TAR, ZIP, RAR, 7z, etc.

  2. Compression algorithms: gzip, bzip2, lzma, zstd

  3. Path traversal: Symlink/hardlink handling (like CVE-2023-38831)

  4. Metadata parsing: Timestamps, permissions, extended attributes

  5. Memory corruption: Buffer overflows in decompression routines

Expected Findings (based on real OSS-Fuzz results):

  • Integer overflows in size calculations

  • Path traversal via symlinks

  • Buffer overflows in compression codecs

  • Use-after-free in error handling paths

Key Takeaways

  1. Harness Design is Critical: Efficient harnesses (in-process, persistent) significantly outperform naive fork()/exec() wrappers.

  2. Target Logic, Not Just I/O: Good harnesses bypass CLI parsing to exercise core API logic directly (e.g., json_tokener_parse_ex).

  3. Seed Corpus Quality: A diverse, minimized corpus of valid inputs accelerates code coverage discovery.

  4. Sanitizers Enable Detection: Memory bugs (ASAN) and undefined behavior (UBSAN) are only found if the harness is compiled with them.

  5. Continuous Integration: Tools like OSS-Fuzz automate the "find → fix → verify" loop, preventing regressions in evolved code.

Discussion Questions

  1. Why is an in-process harness (like LLVMFuzzerTestOneInput) orders of magnitude faster than a file-based CLI wrapper?

  2. How does defining a proper seed corpus (e.g., valid JSON/ZIP files) help the fuzzer penetrate deeper into the target's logic?

  3. What are the risks of "over-mocking" in a harness (e.g., bypassing too much initialization) versus "under-mocking" (doing too much I/O)?

  4. How do you handle state cleanup in a persistent-mode harness to prevent false positives from memory leaks or global state pollution?

  5. Why is it important to fuzz different layers of an application (e.g., the compression layer vs. the archive parsing layer) separately?

Week 2 Capstone Project: The Fuzzing Campaign

  • Goal: Apply the week's techniques to discover and analyze a vulnerability in a real-world open source target or a complex challenge binary.

  • Activities:

    • Select a Target:

      • Choose a C/C++ library that parses complex data (e.g., JSON, XML, Images, Archives, Network Packets).

      • Suggestions: json-c, libarchive, libpng, tinyxml2, mbedtls, or a known vulnerable version of a project (e.g., libwebp 1.0.0).

    • Harness Development:

      • Write a LLVMFuzzerTestOneInput harness or an AFL++ persistent mode harness.

      • Ensure the harness compiles with ASAN and UBSAN.

    • Campaign Execution:

      • Gather a valid seed corpus (from the internet or by generating samples).

      • Minimize the corpus using afl-cmin or afl-tmin (or libFuzzer's merge mode).

      • Run the fuzzer for at least 4 hours (or until a crash is found).

    • Triage and Analysis:

      • Deduplicate crashes.

      • Use GDB and ASAN reports to identify the root cause (e.g., Heap Overflow, UAF).

      • Determine exploitability (Control of instruction pointer? Arbitrary write?).

    • Report:

      • Document the target, harness code, campaign commands, and crash analysis.

Deliverables

  • A fuzzing_report.md containing:

    • Target Details: Project name, version, and function targeted.

    • Harness Code: The C/C++ harness you wrote.

    • Campaign Stats: Fuzzer used, duration, executions/sec, and coverage achieved.

    • Crash Analysis: ASAN output, GDB investigation, and root cause explanation.

    • PoC: A minimal input file that triggers the crash.

Looking Ahead to Week 3

Next week, you'll learn patch diffing - analyzing security updates to understand what was fixed and discovering variant vulnerabilities. You'll see how fuzzing discoveries lead to patches, and how analyzing those patches can reveal additional bugs.

Last updated