Bug process: requirements

Update: Our vulnerability process and bug policy have been approved. You can access a FAQ.

Last update: 22 August 2021

This thread collects requirements for the bug process. It already includes elements from the discussions held so far.

Bug sources

  1. external upstream components
  2. own findings (QA)?
  3. partner findings
  4. CVE vulnerability (as upstream)
  5. CVE vulnerability (as downstream)
  6. community

Bug database
Whatever the source of the bug report, it goes into the centralized bug database.

Bugs should be reported against a specific module, but it should be possible to re-link them to another module later, for example if the initial reporter picked the wrong category or the actual source turns out to be different during bug fix development.

A bug report should cover one issue only. If it covers several, it should be split; this can be done at any stage of bug handling.
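
As a hedged illustration only, the requirements above could map to a bug record roughly like the sketch below; every field and name is an assumption made for this example, not a committed schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class Source(Enum):
    UPSTREAM = "external upstream component"
    QA = "own findings (QA)"
    PARTNER = "partner findings"
    CVE_UPSTREAM = "CVE (as upstream)"
    CVE_DOWNSTREAM = "CVE (as downstream)"
    COMMUNITY = "community"


@dataclass
class BugReport:
    """Hypothetical shape of an entry in the centralized bug database."""
    title: str
    source: Source
    module: str                                        # can be re-linked to another module later
    related_bugs: list = field(default_factory=list)   # e.g. reports split off from this one
```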

Flow

  1. Bug registration in a system (with source, type)

  2. Checking that formal requirements are met, categorization, impact analysis, decision to accept or reject => analysis done by an experienced team member

  3. Assigning and prioritizing the bug (may follow a per-project policy or depend on severity)

    • zero-bug policy, so bugs are handled before new features
    • the bug is included as part of the sprint backlog (based on available capacity)
  4. The bug is solved and backported to the LTS and/or other branches

  5. Bug status / documentation / release notes / security advisory is updated.

  6. Bug fixes are upstreamed
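
A minimal sketch of that flow as a status lifecycle, using status names invented for this example (nothing here reflects an actual tracker configuration):

```python
from enum import Enum, auto


class Status(Enum):
    REGISTERED = auto()   # step 1: recorded with source and type
    TRIAGED = auto()      # step 2: formal check, categorization, impact analysis
    REJECTED = auto()     # step 2: did not pass acceptance
    SCHEDULED = auto()    # step 3: prioritized and put into a sprint backlog
    FIXED = auto()        # step 4: solved and backported where needed
    DOCUMENTED = auto()   # step 5: release notes / advisory updated
    UPSTREAMED = auto()   # step 6: fix submitted upstream


# Allowed transitions, mirroring the numbered steps above.
TRANSITIONS = {
    Status.REGISTERED: {Status.TRIAGED},
    Status.TRIAGED: {Status.SCHEDULED, Status.REJECTED},
    Status.SCHEDULED: {Status.FIXED},
    Status.FIXED: {Status.DOCUMENTED},
    Status.DOCUMENTED: {Status.UPSTREAMED},
}
```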

Bug classification

It should be possible to change the category of a bug, for example to reclassify it as a security issue or a feature request.

Bug severity
Bugs are classified into one of the defined severity levels.

  • proposed 3 levels: minor, normal, critical

We need to define what qualifies as a minor, normal or critical bug.

Open question: how many bug severity levels do we need?

Open question: do we need an SLA for bug fixes (and for which packages?) For example:

  • 7 days for critical bugs
  • 1 month for normal bugs
  • 2 months for minor bugs

The SLA should correspond to the release cycle. For example, if there is a bugfix release every month, the time to fix a normal-priority bug might be one or two bugfix cycles.
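
As a sketch only, assuming the three proposed levels and the example SLA numbers from the open question above (none of these values are decided):

```python
from datetime import timedelta

# Hypothetical severity-to-deadline mapping; the numbers come straight
# from the example above and are not a commitment.
SLA = {
    "critical": timedelta(days=7),
    "normal": timedelta(days=30),   # roughly one monthly bugfix cycle
    "minor": timedelta(days=60),    # roughly two bugfix cycles
}
```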

Tracking
Each bug should track the versions it is present in and the versions where it was fixed.
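
For illustration, the hypothetical record sketched earlier could carry two version lists; the field names are assumptions:

```python
# Versions known to contain the bug, and the releases that ship the fix.
affected_versions = ["1.2.0", "1.2.1", "2.0.0"]
fixed_in = ["1.2.2", "2.0.1"]
```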

Bugfixes frequency
We should release bugfix versions at regular intervals. However, in the case of a high-impact bug (e.g. a security issue) we must be able to publish a new bugfix release at any moment.

Versioning scheme
Our versioning scheme should handle an unlimited number of bugfix releases.
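
A minimal sketch of one consequence, assuming a MAJOR.MINOR.PATCH scheme where the patch number simply keeps counting: versions have to be compared numerically, not lexically, once the bugfix counter passes 9.

```python
def version_key(version: str) -> tuple:
    """Turn '1.2.10' into (1, 2, 10) so comparisons are numeric."""
    return tuple(int(part) for part in version.split("."))

# Lexical comparison would wrongly sort "1.2.9" after "1.2.10".
assert version_key("1.2.10") > version_key("1.2.9")
```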


I agree with all of the above. One small thing that was a little confusing for me was “In the next step, two possibilities:”.

I was going to comment that these approaches are not mutually exclusive but rather depend on bug severity, but it’s mentioned in the next part of the original post.

There are some open questions like: “Do we need an SLA for bug fixes?” or “How many bug severity levels do we need?”

In my opinion there is no single good answer to them. An SLA is more of a business commitment, so this requirement should come from the product team. The engineering team should inform the product team whether the requirement is feasible, taking all limitations into account.

Nice writeup! Some comments:

We can’t fix every bug, unfortunately. :frowning_face:

We need a more detailed definition of minor, normal, critical.

E.g. remote execution vulnerabilities, or anything that allows attackers to exploit a bug without specialized access or user interaction, would be critical.

Similarly, we will probably need to identify a small subset of packages that will get an SLA. For the other packages that are part of the Yocto universe and outside this subset, we might only prioritize critical bugs and have a best-effort SLA for normal and minor bugs.

Is it possible to have objective criteria for bug triaging?

In your experience, do working exploits make for good test cases that can be added to our test suite?

I haven’t checked in a while (since launchpad.net, really), but is it possible to track a bug in an upstream project tracker locally simply by linking to that bug in our tracker?

We can try to define them as objectively as possible, but there will still be some grey area.

When it is a well-known issue, it might be a good test case after the embargo (during the embargo it might live as a test case in a private branch). To be decided case by case.
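
As a hedged sketch of what such a test case might look like once the issue is public, with an entirely hypothetical module and input:

```python
import pytest

from ourpackage import parse_config  # hypothetical module under test


def test_oversized_section_rejected():
    # Input reduced from the published proof of concept; before the fix
    # this made the parser crash instead of rejecting the data.
    malicious_input = b"[section]\n" + b"A" * 70000 + b"=1\n"
    with pytest.raises(ValueError):
        parse_config(malicious_input)
```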

GitLab has an external issue tracker feature: gitlabhq/external-issue-tracker.md at master · gitlabhq/gitlabhq · GitHub

External issue trackers are rather limited. They’re not much more than glorified links.

I’m afraid that in 80-90% of cases we won’t be able to assess impact unless we discover the root cause. In my experience, most of the (nasty) bugs show up where they are not expected and/or not sufficiently analyzed, so we may want to reconsider the wording here or rely only on categorization.

We have limited skills and resources (taking into account the depth of the whole software stack), so if any SLA is considered, IMO it should be limited to our areas of expertise and speciality. Otherwise we won’t be able to guarantee fixes within a given amount of time.

You can see the first draft of the policy here: contributing/bug_policy.rst: Add the bug policy (!183) · Merge requests · OSTC / OHOS / docs · GitLab