
Conversation

@sairon sairon (Member) commented Aug 20, 2025

Currently when we run a build with a limited set of boards that doesn't include OVA, the test job fails because the OVA artifact is missing. Add a checkbox for running tests and ensure that the OVA artifact is built if it's enabled.

Summary by CodeRabbit

  • Tests

    • Added a manual toggle to run post-build tests when triggering the OS build workflow.
    • Test job now runs on releases or when explicitly enabled via the new toggle.
    • Automatically includes the OVA board in integration testing when tests are enabled.
    • Improved logging around test activation and board inclusion.
  • Chores

    • Updated workflow configuration to support optional test orchestration and dynamic build matrix composition.

@sairon sairon requested a review from agners August 20, 2025 12:55
@sairon sairon added the build (Build and CI related issues) label Aug 20, 2025
@coderabbitai coderabbitai bot commented Aug 20, 2025

📝 Walkthrough

Adds a run_tests boolean input to the workflow, parses it into runTests, conditionally injects the "ova" board into the build matrix when tests are enabled, and gates the Test OS image job to run on release events or when run_tests is true. Includes a log message when OVA is injected.

Changes

Cohort / File(s): Workflow inputs and test orchestration (`.github/workflows/build.yaml`)

Summary of changes:

  • Added workflow_dispatch input: run_tests (boolean, default true).
  • The matrix step derives a boolean runTests from github.event.inputs.run_tests.
  • If runTests is true, the "ova" board is ensured to be present in the matrix, with a log message when it is injected.
  • The test job is now conditional: it runs on release events or when run_tests is true (see the sketch below).
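A rough sketch of how these pieces might look in `.github/workflows/build.yaml` (the run_tests input and the "ova" board come from this PR; the job layout, runner, and step contents here are assumptions, not the PR's actual code):

on:
  workflow_dispatch:
    inputs:
      run_tests:
        description: "Run tests after the build"
        type: boolean
        default: true

jobs:
  test:
    # Gate: run on release events, or when the manual toggle is enabled.
    if: github.event_name == 'release' || inputs.run_tests == true
    runs-on: ubuntu-latest
    steps:
      - run: echo "run OS image tests here"  # placeholder for the real test steps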

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant U as User/Release
  participant GH as GitHub Actions
  participant MX as Matrix Setup
  participant B as Build Job(s)
  participant T as Test Job

  U->>GH: Trigger workflow_dispatch (run_tests) or release
  GH->>MX: Generate matrix
  Note over MX: Parse inputs.run_tests -> runTests
  alt runTests == true
    MX->>MX: Ensure "ova" in boards
    Note right of MX: Log OVA injection
  else runTests == false
    MX->>MX: Use original boards
  end
  GH->>B: Run build for matrix boards
  alt event == release or run_tests == true
    GH->>T: Run Test OS image job
  else
    GH--xT: Skip tests
  end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
.github/workflows/build.yaml (1)

160-161: Prefer embedding a real boolean via toJSON (minor) and keep context usage consistent

In github-script, github.event.inputs.* values are strings. Your "=== 'true'" check works, but you can embed a proper boolean and simplify:

Apply this diff:

-            const runTests = "${{ github.event.inputs.run_tests }}" === "true"
+            const runTests = ${{ toJSON(inputs.run_tests) }}

Benefits:

  • Avoids string comparison.
  • Uses the same inputs context you already use elsewhere in the workflow for consistency.
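For illustration, this is roughly what each variant expands to in the generated script when the box is checked (the variable names here are invented for the comparison):

// current: the expression interpolates to a string that is compared against "true"
const runTestsFromString = "true" === "true"  // true
// suggested: toJSON(inputs.run_tests) interpolates to a boolean literal
const runTestsFromJson = true
console.log(runTestsFromString, runTestsFromJson)  // true true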


📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between effe4d3e7365dc9492ac3076f1f1a0260a05ee5e and 218bbcb3d63fd22830182ecf22a269fcf6d2ae93.

📒 Files selected for processing (1)

  • `.github/workflows/build.yaml` (3 hunks)

🔇 Additional comments (3)

.github/workflows/build.yaml (3)

19-23: Add run_tests input: sensible and minimal

Introducing a boolean workflow_dispatch input to gate tests is appropriate and aligns with the PR goals. Defaulting to true preserves current behavior.

---

169-174: OVA injection logic is correct

The conditional OVA injection when tests are enabled is straightforward and solves the missing artifact issue for the tests.

---

427: Test workflow artifact expectations verified

  • `.github/workflows/test.yaml` defines and uses the `version` input.
  • It downloads or retrieves `haos_ova-${{ inputs.version }}.qcow2.xz` via curl and `actions/download-artifact`.
  • No references to a raw `.ova` are needed; tests correctly consume the qcow2.xz artifact.

All downstream artifact requirements are satisfied; no further changes required.
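For reference, a download step along these lines would satisfy that expectation (a sketch only; the artifact name pattern follows the comment above, while the step placement and action version are assumptions):

- name: Download OVA image artifact
  uses: actions/download-artifact@v4
  with:
    name: haos_ova-${{ inputs.version }}.qcow2.xz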


Comment on lines 166 to 176
console.log("Run partial build")
const boardSet = new Set(boardFilter.split(","))
// if tests are enabled, we need to ensure the OVA board is included
if (runTests && !boardSet.has("ova")) {
  console.log("Adding OVA board for integration tests")
  boardSet.add("ova")
}
const buildBoards = boards.filter(b => boardSet.has(b.id))
return { "board": buildBoards }
@coderabbitai coderabbitai bot commented Aug 20, 2025

🛠️ Refactor suggestion

Trim user-provided board list to avoid mismatches and silent drops

If boards are provided with spaces (e.g., "generic-x86-64, ova"), the current code will not match board ids due to leading/trailing whitespace, silently dropping those boards from the build set. Normalize the tokens before adding them to the Set.

Apply this diff:

-              const boardSet = new Set(boardFilter.split(","))
+              const boardSet = new Set(
+                boardFilter
+                  .split(",")
+                  .map((s) => s.trim())
+                  .filter(Boolean)
+              )
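To illustrate the normalization, a standalone snippet (the boardFilter value here is hypothetical):

const boardFilter = "generic-x86-64, ova,"
const boardSet = new Set(
  boardFilter
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean) // drops empty tokens left by trailing commas
)
console.log([...boardSet]) // [ 'generic-x86-64', 'ova' ]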

Optional follow-up (outside the selected range): you may also warn on unknown boards to surface typos:

const buildBoards = boards.filter(b => boardSet.has(b.id))
const known = new Set(boards.map(b => b.id))
const unknown = [...boardSet].filter(id => !known.has(id))
if (unknown.length) {
  console.log(`Warning: unknown board(s) ignored: ${unknown.join(", ")}`)
}

@sairon sairon (Member, Author) replied

Unrelated to the scope of the PR.

@coderabbitai coderabbitai bot replied

@sairon You're absolutely right - that suggestion about trimming board names is outside the scope of this PR, which is specifically focused on adding the test checkbox functionality. I'll keep my feedback focused on the changes being made. Thanks for keeping the scope tight!

@agners agners (Member) left a comment

LGTM!

@sairon sairon merged commit 71d222a into dev Aug 20, 2025
3 checks passed
@sairon sairon deleted the add-run-tests-flag branch August 20, 2025 22:19
sairon added a commit that referenced this pull request Aug 26, 2025
Currently when we run a build with a limited set of boards that doesn't include OVA, the test job fails because the OVA artifact is missing. Add a checkbox for running tests and ensure that the OVA artifact is built if it's enabled.

(cherry picked from commit 71d222a)
@github-actions github-actions bot locked and limited conversation to collaborators Aug 28, 2025

Labels

build (Build and CI related issues), cla-signed

Projects

None yet


3 participants