Improve project profile: add more information and visual cleanup (#2162)

Open · wants to merge 1 commit into main
72 changes: 71 additions & 1 deletion tools/web-fuzzing-introspection/app/static/assets/db/oss_fuzz.py
@@ -180,6 +180,23 @@ def get_introspector_report_url_debug_info(project_name, datestr):
datestr) + "all_debug_info.json"


def get_introspector_report_url_fuzzer_log_file(project_name, datestr, fuzzer):
return get_introspector_report_url_base(
project_name, datestr) + f"fuzzerLogFile-{fuzzer}.data.yaml"


def get_introspector_report_url_fuzzer_program_data(project_name, datestr,
program_data_filename):
return get_introspector_report_url_base(project_name,
datestr) + program_data_filename


def get_introspector_report_url_fuzzer_coverage_urls(project_name, datestr,
coverage_files):
prefix = get_introspector_report_url_base(project_name, datestr)
return [prefix + ff for ff in coverage_files]


def extract_introspector_debug_info(project_name, date_str):
debug_data_url = get_introspector_report_url_debug_info(
project_name, date_str.replace("-", ""))
@@ -281,6 +298,59 @@ def get_fuzzer_code_coverage_summary(project_name, datestr, fuzzer):
return None


MAGNITUDES = {
"k": 10**(3 * 1),
"M": 10**(3 * 2),
"G": 10**(3 * 3),
"T": 10**(3 * 4),
"P": 10**(3 * 5),
"E": 10**(3 * 6),
"Z": 10**(3 * 7),
"Y": 10**(3 * 8),
}


def get_fuzzer_corpus_size(project_name, datestr, fuzzer, introspector_report):
    """Go through coverage reports to find the LLVMFuzzerTestOneInput function. The first hit count equals the number of inputs found."""

Contributor:
I wonder if it would be better to name it "LLVMFuzzerTestOneInput hit count" in the report, or something like that, instead of "corpus size/entries"? If there were a tooltip showing what it is, that would be nice too. It's just that I'd interpret "corpus size/entries" differently if I saw it.

Contributor (Author):
How do you interpret "corpus size/entries"? This count is the number of inputs that are used for coverage measurement after the fuzzer completes, so the final number of inputs the fuzzer ends up with, which to me is the same as the number of corpus entries.

I agree that "LLVMFuzzerTestOneInput hit count" is closer to what is measured, but I find it also more confusing. Maybe just "Hit Count" plus a description? While I still like "corpus size", I'm happy to change it if we can come to a decision.

Contributor:
> the final number of inputs the fuzzers ends up with, which to me is the same as the number of corpus entries

I think that happens when fuzz targets are fine. When they start to OOM/timeout, the number of inputs goes down while the corpus size that can be measured by downloading the corpus stays the same (at first, at least). I think "the number of inputs processed by the fuzz target" (or something like that) would be clearer. And when it goes down, it can be combined with the data from google/oss-fuzz#13103, for example, to make it easier to figure out where those drops come from.

Contributor (Author):

Ah, yes, now I see what you mean. However, I don't think the downloadable corpus count will usually match this number: it is de-duplicated, while this number is the number of inputs the fuzzer deemed interesting. So they are seed corpus vs. corpus.

I feel like "the number of inputs processed by the fuzz target" is also confusing, as it can be understood as the number of executions by the fuzzer during normal fuzzing rather than during coverage measurement.

What do you think of "Executed Coverage Inputs" or just "Coverage Inputs"?

Contributor:
> I feel like "the number of inputs processed by the fuzz target" is also confusing

Agreed.

> What do you think of "Executed Coverage Inputs" or just "Coverage Inputs"?

It sounds good to me.

FWIW "corpus_size" is shown by OSS-Fuzz elsewhere:
[Screenshot, 2025-05-29: OSS-Fuzz page showing a "corpus_size" field]
so "coverage inputs" would maybe make it easier to tell those things apart in different places.

Contributor (Author):

Updated the name and added a description.

if introspector_report["MergedProjectProfile"]["overview"][
"language"] != "c-cpp":
return None

metadata_files = introspector_report[fuzzer]["metadata-files"]

fuzzer_program_coverage_urls = get_introspector_report_url_fuzzer_coverage_urls(
project_name, datestr, metadata_files["coverage"])

for url in fuzzer_program_coverage_urls:
Contributor:

I feel like we should be able to avoid this loop, do we not have the data already available at this point?

phi-go (Contributor, Author) · Jun 5, 2025:

I'm probably missing something but are you thinking that this is to get the normal per-target coverage report (https://storage.googleapis.com/oss-fuzz-coverage/sudoers/reports-by-target/20250603/fuzz_iolog_legacy/linux/report.html)? Because this function is getting the meta coverage file(s) (https://storage.googleapis.com/oss-fuzz-introspector/sudoers/inspector-report/20250603/fuzz_iolog_json.covreport). While I can find mentions of "covreport", those seem to be during the analysis stage but I don't see it used or available as part of the html report generation.

I'm using the covreport file to easily get the execution count for LLVMFuzzerTestOneInput, which admittedly is a bit hacky but should be much faster than getting the correct file/line and parsing the per-target coverage report again, which would be needed as per-line counts are not available.

Contributor:

On another note, do you know what the result of this code is for non-C/C++ projects?

Contributor (Author):

Fair point, this file list seems to be empty for all other languages. I'll add a check to only do this for C/C++ projects in any case; I would expect the format to be different if they were ever created.

Contributor:

Yeah, let me just go through this and see if we can get the number from some of our existing data. Otherwise this LGTM.

Contributor (Author):

I added the check for the language. I think only checking for "c-cpp" should be fine at this point, but I'm not completely sure.

found = False
try:
cov_res = requests.get(url, timeout=20).text
for ll in cov_res.splitlines():
if found:
# Letters used is implemented here:
# https://github.com/llvm/llvm-project/blob/7569de527298a52618239ef68b9374a5c35c8b97/llvm/tools/llvm-cov/SourceCoverageView.cpp#L117
# Used from here:
# https://github.com/llvm/llvm-project/blob/35ed9a32d58bc8cbace31dc7c3bba79d0e3a9256/llvm/tools/llvm-cov/SourceCoverageView.h#L269
try:
count_str = ll.split("|")[1].strip()
magnitude_char = count_str[-1]
if magnitude_char.isalpha():
magnitude = MAGNITUDES[magnitude_char]
count = float(count_str[:-1])
else:
magnitude = 1
count = float(count_str)
return int(magnitude * count)
except:
# Something went wrong, maybe another file has correct data.
break
if ll == "LLVMFuzzerTestOneInput:":
found = True
except:
return None
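The suffix handling above can be sketched on its own. Below is a hypothetical helper (the name `parse_llvm_cov_count` is not part of the PR) that converts llvm-cov's abbreviated hit counts back to integers using the same MAGNITUDES table:

```python
# Hypothetical helper illustrating the suffix parsing used above;
# llvm-cov abbreviates large execution counts as e.g. "1.2k" or "3.5M".
MAGNITUDES = {
    "k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12,
    "P": 10**15, "E": 10**18, "Z": 10**21, "Y": 10**24,
}

def parse_llvm_cov_count(count_str: str) -> int:
    """Convert an abbreviated llvm-cov count ("1.2k", "87") to an int."""
    suffix = count_str[-1]
    if suffix.isalpha():
        # Abbreviated: numeric part times the magnitude of the suffix.
        return int(float(count_str[:-1]) * MAGNITUDES[suffix])
    return int(float(count_str))
```

For example, `parse_llvm_cov_count("1.2k")` yields 1200, matching the `int(magnitude * count)` computation in the function above.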


def extract_new_introspector_functions(project_name, date_str):
introspector_functions_url = get_introspector_report_url_all_functions(
project_name, date_str.replace("-", ""))
@@ -372,7 +442,7 @@ def extract_introspector_report(project_name, date_str):
introspector_report_url = get_introspector_report_url_report(
project_name, date_str.replace("-", ""))

-    # Read the introspector atifact
+    # Read the introspector artifact
try:
raw_introspector_json_request = requests.get(introspector_summary_url,
timeout=10)
@@ -325,7 +325,7 @@ def extract_code_coverage_data(code_coverage_summary):
return line_total_summary


-def prepare_code_coverage_dict(
+def prepare_code_coverage_data(
code_coverage_summary, project_name: str, date_str: str,
project_language: str) -> Optional[Dict[str, Any]]:
"""Gets coverage URL and line coverage total of a project"""
@@ -472,7 +472,7 @@ def extract_local_project_data(project_name, oss_fuzz_path,
project_name
}

-    code_coverage_data_dict = prepare_code_coverage_dict(
+    code_coverage_data_dict = prepare_code_coverage_data(
code_coverage_summary, project_name, '', project_language)

if cov_fuzz_stats is not None:
@@ -737,7 +737,7 @@ def extract_project_data(project_name, date_str, should_include_details,
'project_name': project_name
}

-    code_coverage_data_dict = prepare_code_coverage_dict(
+    code_coverage_data_dict = prepare_code_coverage_data(
code_coverage_summary, project_name, date_str, project_language)

per_fuzzer_cov = {}
@@ -748,10 +748,19 @@

amount_of_fuzzers = len(all_fuzzers)
for ff in all_fuzzers:
try:
fuzzer_corpus_size = oss_fuzz.get_fuzzer_corpus_size(
project_name, date_str.replace("-", ""), ff,
introspector_report)
except:
fuzzer_corpus_size = None

try:
fuzzer_cov = oss_fuzz.get_fuzzer_code_coverage_summary(
project_name, date_str.replace("-", ""), ff)
fuzzer_cov_data = extract_code_coverage_data(fuzzer_cov)
if fuzzer_cov_data is not None:
fuzzer_cov_data['corpus_size'] = fuzzer_corpus_size
per_fuzzer_cov[ff] = fuzzer_cov_data
except:
pass
@@ -919,8 +928,36 @@ def extend_db_timestamps(db_timestamp, output_directory):
json.dump(existing_timestamps, f)


def per_fuzzer_coverage_has_degraded(fuzzer_data: List[Dict[str, Any]],
project_name: str,
ff: str) -> List[Dict[str, str]]:
"""Go through the fuzzer data and find coverage drops."""

def get_url(date):
report_url = oss_fuzz.get_fuzzer_code_coverage_summary_url(
project_name, date.replace('-', ''), ff)
report_url = report_url[:-len('summary.json')] + 'index.html'
return report_url

res = []
for yesterday, today in zip(fuzzer_data[:-1], fuzzer_data[1:]):
if yesterday['percentage'] - today[
'percentage'] > FUZZER_COVERAGE_IS_DEGRADED:
res.append({
'before_date': yesterday['date'],
'before_url': get_url(yesterday['date']),
'before_perc': yesterday['percentage'],
'current_date': today['date'],
'current_url': get_url(today['date']),
'current_perc': today['percentage'],
})

return res
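The pairwise day-over-day comparison above can be illustrated in isolation. A minimal sketch, with a hypothetical helper name and made-up sample data; the threshold of 5 percentage points is an assumption mirroring `FUZZER_COVERAGE_IS_DEGRADED`:

```python
# Minimal sketch of the day-over-day degradation check; find_coverage_drops
# is a hypothetical name, and the 5-point threshold is an assumption.
FUZZER_COVERAGE_IS_DEGRADED = 5

def find_coverage_drops(fuzzer_data):
    """Return (before_date, current_date) pairs for consecutive days where
    coverage fell by more than the threshold."""
    drops = []
    # Pair each day with the next one, oldest first.
    for yesterday, today in zip(fuzzer_data[:-1], fuzzer_data[1:]):
        if yesterday['percentage'] - today['percentage'] > FUZZER_COVERAGE_IS_DEGRADED:
            drops.append((yesterday['date'], today['date']))
    return drops

data = [
    {'date': '2025-06-01', 'percentage': 48.0},
    {'date': '2025-06-02', 'percentage': 47.5},
    {'date': '2025-06-03', 'percentage': 31.0},  # coverage fell sharply here
    {'date': '2025-06-04', 'percentage': 31.2},
]
```

With this data, only the 2025-06-02 to 2025-06-03 transition exceeds the threshold, so a single drop is reported.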


def per_fuzzer_coverage_analysis(project_name: str,
coverages: Dict[str, List[Tuple[int, str]]],
per_fuzzer_data: Dict[str, List[Dict[str,
Any]]],
lost_fuzzers):
"""Go through the recent coverage results and combine them into a short summary.
Including an assessment if the fuzzer got worse over time.
@@ -932,34 +969,47 @@ def per_fuzzer_coverage_analysis(project_name: str,
    # at per fuzzer coverage, which should already be normalized to what
# can be reached.
    # TODO What would be a good percentage to mark as coverage degradation,
-    # taking 5% for now but should be observed, maybe per it should be
+    # taking 5% for now but should be observed, maybe it should be
    # configurable per project as well.
results = {}
-    for ff, data in coverages.items():
+    for ff, data in per_fuzzer_data.items():
if len(data) > 0:
-            values = [dd[0] for dd in data]
-            dates = [dd[1] for dd in data]
-            latest_date_with_value = next(dd[1] for dd in reversed(data)
-                                          if dd[0] is not None)
+            percentages = [dd['percentage'] for dd in data]
+            dates = [dd['date'] for dd in data]
+            totals = [dd['total'] for dd in data]
+            covered = [dd['covered'] for dd in data]
+            corpus_size = [dd['corpus_size'] for dd in data]
+            latest_date_with_value = next(dd['date'] for dd in reversed(data)
+                                          if dd['percentage'] is not None)
if latest_date_with_value is not None:
report_url = oss_fuzz.get_fuzzer_code_coverage_summary_url(
project_name, latest_date_with_value.replace('-', ''), ff)
report_url = report_url[:-len('summary.json')] + 'index.html'
else:
report_url = None
-            max_cov = max(values[:-1], default=0)
-            avg_cov = round(statistics.fmean(values), 2)
-            current = values[-1]
+            max_cov = max(percentages[:-1], default=0)
+            avg_cov = round(statistics.fmean(percentages), 2)
+            current = percentages[-1]
try:
days_degraded = per_fuzzer_coverage_has_degraded(
data, project_name, ff)
except:
days_degraded = []
results[ff] = {
'report_url': report_url,
'report_date': latest_date_with_value,
-                'coverages_values': values,
+                'hashed_name': str(hash(ff)),
+                'coverages_perc': percentages,
+                'coverages_totals': totals,
+                'coverages_covered': covered,
+                'coverages_corpus': corpus_size,
'coverages_dates': dates,
'max': max_cov,
'avg': avg_cov,
'current': current,
-                'has_degraded':
+                'max_has_degraded':
(max_cov - current) > FUZZER_COVERAGE_IS_DEGRADED,
'days_degraded': days_degraded,
'got_lost': ff in lost_fuzzers,
}
return results
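The summary fields assembled above (`max`, `avg`, `current`, `max_has_degraded`) reduce to a few lines of arithmetic. A worked sketch with made-up daily percentages; the 5-point threshold is an assumption mirroring `FUZZER_COVERAGE_IS_DEGRADED`:

```python
# Worked sketch of the per-fuzzer summary statistics computed above.
# The percentages are invented; the threshold value is an assumption.
import statistics

FUZZER_COVERAGE_IS_DEGRADED = 5

percentages = [40.0, 42.5, 43.0, 36.1]      # one value per day, oldest first
max_cov = max(percentages[:-1], default=0)  # best result before the latest day
avg_cov = round(statistics.fmean(percentages), 2)
current = percentages[-1]                   # latest day's coverage
max_has_degraded = (max_cov - current) > FUZZER_COVERAGE_IS_DEGRADED
```

Note that the maximum is taken over all days except the latest, so a fuzzer is compared against its own best historical result rather than against today.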
@@ -999,7 +1049,18 @@ def calculate_recent_results(projects_with_new_results, timestamps,
except:
perc = 0

-                per_fuzzer_coverages[ff].append((perc, do))
+                per_fuzzer_coverages[ff].append({
+                    'corpus_size': cov_data['corpus_size'],
+                    'covered': cov_data['covered'],
+                    'total': cov_data['count'],
+                    'percentage': perc,
+                    'date': do,
+                })
except:
continue

@@ -1411,6 +1472,7 @@ def setup_webapp_cache() -> None:
os.mkdir("extracted-db-archive")

db_archive.extractall("extracted-db-archive")

logger.info("Extracted it all")

# Copy over the files
3 changes: 2 additions & 1 deletion tools/web-fuzzing-introspection/app/webapp/models.py

@@ -46,7 +46,8 @@ def has_introspector(self) -> bool:
return self.introspector_data is not None

def has_recent_results(self) -> bool:
-        return self.recent_results is not None
+        return self.recent_results is not None and sum(
+            len(ff) for ff in self.recent_results) > 0


class DBTimestamp:
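The one-line change to `has_recent_results` guards against a present-but-empty results container. A standalone sketch of the predicate (the free-function form and the sample data are hypothetical; `recent_results` is assumed to be a collection of per-fuzzer result lists):

```python
# Hypothetical standalone version of the updated model method.
def has_recent_results(recent_results):
    # The container must exist AND hold at least one non-empty
    # per-fuzzer entry; `is not None` alone is not sufficient.
    return recent_results is not None and sum(
        len(ff) for ff in recent_results) > 0
```

With the old `is not None` check alone, a project whose recent-results lists were all empty would still have reported `True`.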