CVE Process Requirements for AllScenariosOS

Update: Our vulnerability process has been approved. You can access a FAQ.

This document contains the requirements for a CVE process to be defined for AllScenariosOS. Please add your comments, additional ideas, and any difficulties you can think of.

What is the CVE process for?

The CVE process describes the way security vulnerabilities are handled. It should clearly define who does what.

Standards

Our process should be aligned with industry best practices. Examples include the OSS vulnerability guide, the OpenChain standard (when available), and the OpenSSF recommendations.

The process should include all elements from the CII Best Practices.

Reporting a security issue to us

The project should have a way to report a security issue in the code owned by the project or in the upstream code.

There should be a security mailing list with a limited audience that anyone (developer, user, security researcher) can use to report or discuss security issues in the project. It would be used by upstream projects for notifications, and by security researchers, who are used to this way of reporting issues.

We can also have a way to report a security issue using a bug tracker, with a way to mark such an issue as security-related and confidential. Issues reported this way should be visible to a limited audience (the same as issues on the mailing list) until eventually reclassified.

There should be a way to report an issue in a confidential way (not using a public ticket).

Where do the issues come from?

Security issues may come from different sources:

  • Reported to the security list by anyone
  • Found by the project developers
  • Entered initially in the project bug reporting system and re-classified as a security issue
  • Received from upstream projects as updates
  • Received via newly published CVE lists
  • Notified from upstream projects (possibly under an embargo)

Security issue lifecycle

Typically the process for security bugs / CVEs is divided into four phases (see the sketch after the list):

  1. Monitoring: watching upstream projects, CVE databases, and security mailing lists. We should appoint a security owner and join those lists, since some CVEs are under embargo, so only participating organizations get to know of a CVE and fix it before it is disclosed to the public.
  2. Assessment: determining whether or not AllScenariosOS is impacted by a CVE.
  3. Remedy: fixing the CVE during development and, in particular, in LTS releases.
  4. Notification: a dashboard that lists all CVEs being monitored, assessed, fixed (or not required to be fixed), and delivered (i.e. with a commit ID or a given release tag).
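
To make the four phases concrete, here is a minimal sketch of the per-CVE record such a dashboard could track. All names (`Phase`, `CveRecord`, and the field names) are invented for illustration; this is not an agreed schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Phase(Enum):
    MONITORING = "monitoring"      # watching upstream/CVE feeds
    ASSESSMENT = "assessment"      # deciding whether ASO is impacted
    REMEDY = "remedy"              # fixing in dev and LTS branches
    NOTIFICATION = "notification"  # published on the dashboard

@dataclass
class CveRecord:
    cve_id: str                             # e.g. "CVE-2021-12345"
    phase: Phase = Phase.MONITORING
    affects_aso: Optional[bool] = None      # unknown until assessed
    under_embargo: bool = False
    fix_commit: Optional[str] = None        # commit ID once remedied
    fixed_in_release: Optional[str] = None  # release tag once delivered

    def assess(self, affected: bool) -> None:
        """Record the assessment result and advance the phase."""
        self.affects_aso = affected
        # Issues not affecting ASO go straight to notification
        # as "not required to be fixed".
        self.phase = Phase.REMEDY if affected else Phase.NOTIFICATION
```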

Embargoes

An embargo is a period of time during which a security issue is kept private before being made public. That time should be used to provide a patch and deploy it if possible. A patch made during the embargo should contain no information or suggestion about the security issue; it should just describe what the fix is.

We can fix an item under embargo at the source level. In this case the developer should mention only what is fixed and include no reference to the CVE. A fix might have a title like ‘fix a NULL pointer dereference in module X’. When the embargo is over, we publish the advisory and can (but do not have to) point to the actual commits. We can distribute patches if necessary.

Issues from upstream projects may come to ASO during the embargo period. In this case they should be handled according to the rules of the upstream project (e.g. duration, the type of people who can be notified, etc.).

The ASO project may decide on embargoes for issues in projects we're the upstream of, if the security impact is judged important enough. The embargo period should be long enough to allow fixing, but short enough to motivate doing it in time.

Security vulnerabilities are a special class of bugs

Security vulnerabilities are special bugs. The bug handling process applies to them (with some changes). For example, the SLA that applies to bugs should, in general, apply to security issues too. SLAs will then map to priority classes (which we'll need to define).

CVEs, when released, get a CVSS score representing the issue severity, with 10 being the maximum. We will define a way to map CVSS scores to bug priorities. For example, CVEs scored 9 and 10 on the CVSS scale could map to a P1 (critical) bug for All Scenarios OS; CVEs scored 7 and 8 could be mapped to high, etc.
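
As a rough illustration, the mapping could be as simple as the sketch below; the 9-10 and 7-8 boundaries come from the example above, while the medium/low boundaries are placeholders we would still have to agree on.

```python
def cvss_to_priority(score: float) -> str:
    """Map a CVSS score (0.0-10.0) to an ASO bug priority class.

    The P1/P2 boundaries follow the examples above; the lower
    boundaries are placeholders still to be defined.
    """
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS score out of range: {score}")
    if score >= 9.0:
        return "P1 (critical)"   # CVEs scored 9-10
    if score >= 7.0:
        return "P2 (high)"       # CVEs scored 7-8
    if score >= 4.0:
        return "P3 (medium)"     # placeholder boundary
    return "P4 (low)"            # placeholder boundary
```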

Reviewing bugs for security issues

General bugs should be reviewed for security issues and may be reclassified as security issues. If they are reclassified, they should follow the security process from that point.

Response time

All notifications of security issues should be reviewed and responded to within a limited time. We should define that time and the organization (the composition of the team) that makes such a review possible.

Handling upstream and downstream

The project must handle issues from both upstream and downstream. It should track released CVEs and include the fixes if necessary; a process similar to Debian's could be put in place.

Issue severity

Security issues might have different severity levels. The project should define its levels and the actions related to each level. For example, every issue classified as ‘high’ could get an announcement and be fixed within 7 days. ‘Low’ severity issues, on the other hand, may be fixed by periodic updates.
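
One way to encode such a policy, as a sketch: only the ‘high’ entry mirrors the example above (announcement plus a 7-day fix window); every other value is a placeholder to be defined.

```python
# Per-severity handling policy (sketch). Only the "high" entry comes
# from the example above; the other numbers are placeholders.
SEVERITY_POLICY = {
    "critical": {"announce": True,  "fix_within_days": 2},     # placeholder
    "high":     {"announce": True,  "fix_within_days": 7},     # from the example
    "medium":   {"announce": False, "fix_within_days": 30},    # placeholder
    "low":      {"announce": False, "fix_within_days": None},  # periodic updates
}
```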

Releasing security announcements

The project should release security announcements for issues that need to be fixed immediately (as defined above in the issue priorities). It should also provide a list of issues fixed with each release.

Releasing product status

We can think of publishing, via an RSS feed or a periodic bulletin, a report showing which products are affected by which issues. It might be done in the security announcements.

Some examples of public-facing dashboards bringing it all together: Wind River's dashboard, Debian security announcements, and the Debian security tracker.

Getting CVE numbers

We need a way to get CVE numbers. This will be done either directly, by becoming a CVE Numbering Authority (CNA) ourselves, or indirectly, by requesting numbers from another CNA.

The role of users

Users (the vendors of devices using AllScenariosOS) are an essential part of the process. They should have representatives in the security committee.

They should also have a (secure) way to receive information about urgent updates in advance, for example when an issue is under embargo.

References: https://forum.ostc-eu.org/t/cve-processes-of-open-source-projects/

Bug bounty
We might consider creating a bug bounty program.


This looks promising and in line with industry-standard security processes. I would make sure it aligns with the OpenSSF / OpenChain CVE handling specs, in case they go for the creation of an ISO / IEEE standard we can apply to.
I agree that CVEs and security bugs are special bugs, so SLAs would be mapped to bug priority classes (i.e. critical, high, medium, low); we might need to look into the best number of classes/priorities based on similar projects. What I am trying to say is that SLAs apply to bugs, security ones being a special class of bugs.
Coming to security bugs / CVEs: severity is defined by a CVSS score, if I recall correctly; it is a number on a scale from 0 to 10, with 10 being the most severe. Let us assume we have defined four All Scenarios OS bug priority classes with related SLAs; then we could map CVSS scores to those priority classes, e.g. mapping CVEs scored 9 and 10 on the CVSS scale to, say, a P1 (critical) bug for All Scenarios, while CVEs scored 7 and 8 could be mapped to high, etc.

Typically the process for security bugs / CVEs is divided into four phases: 1. monitoring (meaning us looking at upstream and CVE databases and mailing lists; we should look into joining those mailing lists by appointing a security owner, since some CVEs fall under embargo, so only participating companies get to know of a CVE and fix it before it is disclosed to the public); 2. assessment, i.e. whether or not All Scenarios OS is impacted by a CVE; 3. remedy, meaning fixing such a CVE during development and in particular in LTS; and 4. notification, meaning a dashboard that lists all CVEs being monitored, assessed, fixed (or not required to be fixed) and delivered (i.e. a commit ID or a given release tag). One can think of publishing the stats in a report via an RSS feed or a periodic bulletin. For reference, an example of a public-facing dashboard that brings it all together: https://support2.windriver.com/index.php?page=cve

We should consider meeting the requirements for the CII Best Practices badge

Can we rephrase that to say we can file confidential issues on the project repositories instead? This will save us from having to deploy and operate a mailing list and will have largely the same result.

How do you want this to work in a case where OSTC maintains an open-source, source-based distribution and the patch (and the bug) is under embargo? Is there an assumption that we will do binary releases to fix any issues? Will we distribute patches to (some) list of downstream consumers?

We need a mailing list for reporting/getting information about upstream issues. This way a project (for instance Zephyr) can add us to their distribution list. Same for the general oss-distro list.

I agree that we can offer the possibility for a user to open a confidential ticket in our bug tracking system for the projects we're the upstream of. I think, however, that a mailing list should still be a possibility, as security researchers are used to this method.

We can do a fix at the source level on an item under embargo. The only thing is not to mention that this patch is a fix for a CVE. The description should just include what is fixed, for example ‘fix a NULL pointer dereference in module X’. When the embargo is over, we publish the advisory and can (but do not have to) point to the actual commits at this point. We can distribute patches if necessary.


Thanks for explaining this. I think we need to get @maciej.sawicki involved in this.

@marta Are we considering (for now or later) joining some CVE program to become a CNA (e.g. here)?


Yes, we’ll need the DevOps view on what is possible/easy to set up. And how to set up permissions.

Yes, it is necessary that we have the possibility to reserve CVE numbers. This can be done either directly (if we become a CNA) or indirectly, if we reserve numbers from a third party.

I'm sorry that I'm joining the party a little late. I know that there was some discussion about this, but I would like to add my $0.02. GitLab has a confidential issues feature (Confidential issues | GitLab). I think we should use it. In my opinion this is the most convenient way to report and track security issues.
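
For illustration, a confidential issue can be filed through the GitLab API; here is a minimal sketch using the python-gitlab client, where the instance URL, token, and project path are all placeholders.

```python
import gitlab  # the python-gitlab client library

# Placeholders: instance URL, token, and project path are illustrative only.
gl = gitlab.Gitlab("https://gitlab.example.org", private_token="REPORTER_TOKEN")
project = gl.projects.get("aso/security-reports")

# "confidential": True restricts visibility to project members
# with Reporter access or higher.
issue = project.issues.create({
    "title": "Possible NULL pointer dereference in module X",
    "description": "Details of the suspected vulnerability (kept confidential).",
    "confidential": True,
})
print(issue.web_url)
```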

One may argue that this is a technical detail and we can agree on it later, but I noticed that during the discussion a security mailing list was mentioned. In my opinion we shouldn't add a “security mailing list” to the requirements, since:
a) it would rather be a specification for a “communication channel for reporting security issues” than a requirement itself;
b) the whole “mailing list in OSTC infra” topic is another discussion, and I guess it would be wise to postpone it for now.

[quote]We should consider meeting the requirements for the CII Best Practices badge
[/quote]

Agree that we should take care of that, but for me it's more related to software integrity than to CVEs.

And one bonus question: do we plan to have a bug bounty program?

Yes, this is a possible technical solution for reporting bugs to us.

A mailing list might not be a strong requirement for reporting bugs in projects we're the upstream of. However, I do not see it as feasible to convince ALL of our upstream projects (like clang, Zephyr, the Linux kernel…) to create tickets in our system when there's a bugfix we should grab. Those things are distributed by mailing lists, and some of those are confidential.

More of a general part of the quality/security policy, but the CII badge has a number of criteria on CVE handling.

To be seen. A bug bounty program is useful before a release when all known issues are fixed, so we still have some time to think about it.

I think that if we consider having it, we should take care of that sooner rather than later. I guess securing a budget for such a program may take some time.

Could you share some more information about a way of interacting with upstream projects, please?

When I was writing my previous comment I was thinking about a simple scenario:

  0. people find a bug in ASOS,
  1. they report it to us.

For upstream projects my understanding was that there are some mailing lists/trackers that we have to observe. If we create a mailing list, how do we plan to encourage upstream project maintainers to join it and notify us about bugs?

About the bug bounty: added to the first post.

They vary. Often upstream projects have a private list of downstream projects to notify; for example, in the case of Zephyr it is described in the documentation: Security Vulnerability Reporting — Zephyr Project Documentation

There is also a private list, oss-distros, and we should apply to be a member at some point: mailing-lists:distros [OSS-Security]. It allows synchronizing embargoes and fixes of issues affecting multiple distros.

Our goal will be to subscribe our internal security list to all those upstream lists and oss-distros.

CII checklist (from BadgeApp)

Change Control:

Release notes - release_notes_vulns

  • The release notes MUST identify every publicly known run-time vulnerability fixed in this release that already had a CVE assignment or similar when the release was created. This criterion may be marked as not applicable (N/A) if users typically cannot practically update the software themselves (e.g., as is often true for kernel updates). This criterion applies only to the project results, not to its dependencies. If there are no release notes or there have been no publicly known vulnerabilities, choose N/A. {N/A justification}

Reporting (whole category):

Bug-reporting process

  • The project MUST provide a process for users to submit bug reports (e.g., using an issue tracker or a mailing list). {Met URL} [report_process]
  • The project SHOULD use an issue tracker for tracking individual issues. [report_tracker]
  • The project MUST acknowledge a majority of bug reports submitted in the last 2-12 months (inclusive); the response need not include a fix. [report_responses]
  • The project SHOULD respond to a majority (>50%) of enhancement requests in the last 2-12 months (inclusive). [enhancement_responses]
  • The project MUST have a publicly available archive for reports and responses for later searching. {Met URL} [report_archive]

Vulnerability report process

  • The project MUST publish the process for reporting vulnerabilities on the project site. {Met URL} [vulnerability_report_process]
  • If private vulnerability reports are supported, the project MUST include how to send the information in a way that is kept private. {N/A allowed} {Met URL} [vulnerability_report_private]
  • The project’s initial response time for any vulnerability report received in the last 6 months MUST be less than or equal to 14 days. {N/A allowed} [vulnerability_report_response]

Security (partial; most items not related to CVEs):

Publicly known vulnerabilities fixed


Following a discussion about the practical infrastructure aspects:

  • We need to find out which bug system to use. Confidential issues in GitLab aren’t great as they are visible to everyone with Reporter status or higher

  • Find a way to handle private pipelines, including CI. This is for people working on the fixes. During development the repository might include the exploit, more verbose comments, etc.; these will be cleaned up in the final (public) patch. We also need a way to test the patch without sending the confidential branch around online.

  • Find a way to handle the key used for the security team. Every member should be able to decrypt and it would be ideal to avoid regenerating the key when someone leaves/joins the team.

  • Frequency of the maintenance releases.

  • We need to be able to generate a maintenance release at any moment.

We might consider using an interchange format for security issues, like A shared vulnerability format for open-source packages - Google Docs
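
For context, that proposal describes each vulnerability as a small JSON record. A sketch of what one of our advisories might look like in that style (field names follow the OSV schema as published, which may differ from the linked draft; all concrete values are invented):

```python
# A minimal vulnerability record in the style of the OSV schema.
# All identifiers, URLs, and hashes below are invented placeholders.
osv_record = {
    "id": "ASO-2021-0001",          # hypothetical ASO advisory id
    "aliases": ["CVE-2021-0000"],   # placeholder CVE number
    "summary": "NULL pointer dereference in module X",
    "affected": [{
        "package": {"ecosystem": "AllScenariosOS", "name": "module-x"},
        "ranges": [{
            "type": "GIT",
            "repo": "https://git.example.org/module-x",
            "events": [{"introduced": "0"}, {"fixed": "deadbeef"}],
        }],
    }],
    "references": [{"type": "ADVISORY", "url": "https://example.org/advisory"}],
}
```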

Or maybe just use a mail address (an alias? sec@some.org) which creates private tickets for each report automatically (based on sender domain/list-id/etc.). It should make triaging/handling of such issues easier and improve tracking/accountability. Pushing upstream members to create tickets in all downstream GitLabs is not realistic (IMO).

Some people dislike emails and would like to be able to file an issue without one 🙂 Also, tickets are good for tracking and state handling. As we need this workflow already, we can also open it up for external reports.

It would be good to automate ticket creation, and that is the mid-term plan. If you have an existing software/tool in mind, I'm very much interested.

[quote]Pushing upstream members to create tickets in all downstream GitLabs is not realistic (IMO).
[/quote]
100% agree. The ticket system will likely work fine for direct reporting to our distro. Upstream issues will come typically via some email.

Right, it's exactly what I meant. Upstream projects use email to send notifications, so they can continue doing so (except a few, like the Apache Foundation, which provides downstream projects with a Bugzilla account). Automation creates tickets in GitLab using emails/bugs/whatever as the source of information, while users have the possibility to report security issues directly (in GitLab) and/or by email.
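
A minimal sketch of that automation, assuming mail delivered to the alias can be piped to a script; `create_confidential_issue()` is a hypothetical helper that would wrap the GitLab API call shown earlier in the thread.

```python
import email
import email.policy

def create_confidential_issue(title: str, description: str) -> None:
    # Hypothetical helper: would wrap the python-gitlab call shown
    # earlier in the thread. Stubbed out here for illustration.
    print(f"would file confidential issue: {title!r}")

def handle_security_mail(raw_bytes: bytes) -> None:
    """Turn one mail received on the security alias into a confidential ticket."""
    msg = email.message_from_bytes(raw_bytes, policy=email.policy.default)
    list_id = msg.get("List-Id", "")  # set when it arrived via an upstream list

    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content() if body else "(no plain-text body)"

    create_confidential_issue(
        title=f"[security-mail] {msg['Subject']}",
        description=f"From: {msg['From']}\nList-Id: {list_id}\n\n{text}",
    )
```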


We're totally in line here. Great 🙂