Compare commits

...

57 Commits

Author SHA1 Message Date
7a0b6b50f3
add gitea refs
Signed-off-by: ahuston-0 <aliceghuston@gmail.com>
Reviewed-on: https://<censored>/ahuston-0/hydra/pulls/1
2025-04-09 11:16:53 -04:00
76e8ca1c37
add Gitea pulls docs entry
Signed-off-by: ahuston-0 <aliceghuston@gmail.com>
2025-04-09 11:16:53 -04:00
Faye Chun
81da89cc59
Add a plugin to poll Gitea pull requests
Based off the existing GithubPulls.pm and GitlabPulls.pm plugins.

Also adds an integration test for the new 'giteapulls' input type to
the existing 'gitea' test.
2025-04-09 11:16:44 -04:00
John Ericson
c8de5b99e3
Merge pull request #1471 from NixOS/queue-runner-machines-json
Queue-runner: Always produce a machines JSON object
2025-04-08 21:44:23 +00:00
John Ericson
a5b17d0686 Queue-runner: Always produce a machines JSON object
Even if there are no machines, there should at least be an empty object.
2025-04-08 17:38:19 -04:00
John Ericson
1c52c4c0ed
Merge pull request #1456 from NixOS/hydra.nixos.org-rebased
web: replace 'errormsg' with 'errormsg IS NULL' in most cases
2025-04-07 19:05:51 +00:00
Pierre Bourdon
b4322edd05 web: replace 'errormsg' with 'errormsg IS NULL' in most cases
This is implemented in an extremely hacky way due to poor DBIx feature
support. Ideally, what we'd need is a way to tell DBIx to ignore the
errormsg column unless explicitly requested, and to automatically add a
computed 'errormsg IS NULL' column in other cases. Since it does not
support that, this commit instead hacks in some support via method
overrides while taking care not to break anything obvious.
2025-04-07 14:48:07 -04:00
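For readers unfamiliar with the DBIx::Class attributes involved, a minimal sketch of the underlying idea (selecting a cheap computed flag instead of the large column); the attribute combination and helper are illustrative, not Hydra's actual override:

```perl
use strict;
use warnings;

# Sketch only: skip the potentially huge 'errormsg' text column and
# select a computed boolean instead. $schema is assumed to be a
# connected DBIx::Class schema; 'Jobsets' and 'errormsg' follow Hydra's
# schema, everything else is illustrative.
sub fetch_jobsets_lazily {
    my ($schema) = @_;
    return $schema->resultset('Jobsets')->search(undef, {
        remove_columns => ['errormsg'],   # don't fetch the large column
        '+select' => [ \'(errormsg IS NULL) AS errormsg_is_null' ],
        '+as'     => [ 'errormsg_is_null' ],
    });
}

# Usage: the flag tells us whether an error exists without transferring it.
# my $rs = fetch_jobsets_lazily($schema);
# while (my $jobset = $rs->next) {
#     my $ok = $jobset->get_column('errormsg_is_null');
#     printf "%s: %s\n", $jobset->name, $ok ? "no error" : "has error";
# }
```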
John Ericson
8350f964ee
Merge pull request #1469 from NixOS/release-reservations
queue-runner: release machine reservation while copying outputs
2025-04-07 18:19:16 +00:00
Pierre Bourdon
143a07bff0 queue-runner: release machine reservation while copying outputs
This allows for better builder usage when the queue runner is busy. To
avoid running into uncontrollable imbalances between builder/queue
runner, we only release the machine reservation after the local
throttler has found a slot to start copying the outputs for that build.

Instead of asserting uniqueness to understand resource utilization, we
simply switch to using `std::unique_ptr`.
2025-04-07 14:01:50 -04:00
John Ericson
cc4b206d85
Merge pull request #1466 from NixOS/bump-nixpkgs
Bump nixpkgs
2025-04-07 17:32:38 +00:00
John Ericson
e77444da98
Merge pull request #1468 from NixOS/steps-waiting-for-download-slot
Add metric for builds waiting for download slot
2025-04-07 17:28:00 +00:00
K900
8a6482bb1c Add metric for builds waiting for download slot
(cherry picked from commit f23ec71227911891807706b6b978836e4d80edde)
2025-04-07 13:16:49 -04:00
Jörg Thalheim
b3a433336e bump nixpkgs 2025-04-07 19:09:46 +02:00
Jörg Thalheim
68b2d6da0a
Merge pull request #1467 from NixOS/merge-queue
Make github actions ci merge-queue friendly
2025-04-07 17:02:43 +00:00
Jörg Thalheim
c94ba404fd don't build hydra twice in a pull request + enable merge queue 2025-04-07 18:57:32 +02:00
Jörg Thalheim
56170dd117
Merge pull request #1464 from NixOS/more-hydra.nixos.org-changes
More hydra.nixos.org changes
2025-04-07 16:50:36 +00:00
Jörg Thalheim
d4b55f8190
Merge pull request #1465 from NixOS/gitea
test/gitea: fix eval
2025-04-07 18:48:57 +02:00
Jörg Thalheim
78687e23cf test/gitea: fix eval 2025-04-07 18:43:12 +02:00
Jörg Thalheim
f02fc5e2ff
Merge pull request #1463 from NixOS/fix-nixos-tests
Fix evaluation of NixOS tests, avoid `with`
2025-04-07 18:37:50 +02:00
Pierre Bourdon
8e02589ac8 queue-runner: switch to pseudorandom ordering of builds processing
We don't rely on sequential / monotonic build IDs processing anymore, so
randomizing actually has the advantage of mixing builds for different
systems together, to avoid only one chunk of builds for a single system
getting processed while builders for other systems are starved.
2025-04-07 12:33:35 -04:00
Pierre Bourdon
52a0199a9b queue runner: introduce some parallelism for remote paths lookup
Each output for a given step being ingested is looked up in parallel,
which should basically multiply the speed of build ingestion by the
average number of outputs per derivation.
2025-04-07 12:33:35 -04:00
Pierre Bourdon
9265fc5002 queue-runner: reduce the time between queue monitor restarts
This will induce more DB queries (though these are fairly cheap), but
with the benefit of processing bumps within 1m instead of within 10m.
2025-04-07 12:33:35 -04:00
Pierre Bourdon
d8ffa6b56a queue-runner: remove id > X from new builds query
Running the query with/without it shows that it makes no difference to
postgres, since there's an index on finished=0 already. This allows a
few simplifications, but also paves the way towards running multiple
parallel monitor threads in the future.
2025-04-07 12:33:35 -04:00
Pierre Bourdon
efcf6815d9 queue-runner: add prom metrics to allow detecting internal bottlenecks
By looking at the ratio of running vs. waiting for the dispatcher and
the queue monitor, we should get better visibility into what Hydra is
currently bottlenecked on.

There are other side effects we can try to measure to get to the same
result, but having a simple way doesn't cost us much.
2025-04-07 12:33:35 -04:00
Pierre Bourdon
1e2d3211d9 queue-runner: limit parallelism of CPU intensive operations
My current theory is that running more parallel xz processes than there
are available CPU cores reduces our overall throughput by requiring more
scheduling overhead and causing more cache thrashing.
2025-04-07 12:33:35 -04:00
Pierre Bourdon
5a9985f96c web: Skip System on /machines
It is redundant
2025-04-07 12:33:35 -04:00
John Ericson
0d0c4f278b Fix evaluation of NixOS tests, avoid with 2025-04-07 12:32:28 -04:00
John Ericson
3fdb18a4bc
Merge pull request #1462 from NixOS/web-changes
A number of Perl-side changes from the hydra.nixos.org branch
2025-04-07 12:17:16 -04:00
Maximilian Bosch
6133693097 readIntoSocket: fix with store URIs containing an &
The third argument to `open()` in `-|` mode is passed to a shell if it's
a string. In my case the store URI contains
`?secret-key=${signingKey.directory}/secret&compression=zstd`

For the `nix store cat` case this means that

* everything up to the `&` is started as a background process. This
  fails immediately because no path to cat is specified.
* `compression=zstd` is treated as a variable assignment
* the `$path` argument to `store cat` is then executed as yet another
  command

Passing just the list solves the problem.

(cherry picked from commit 3ee51dbe589458cc54ff753317bbc6db530bddc0)
2025-04-07 11:59:49 -04:00
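For illustration, a minimal Perl sketch of the failure mode and the fix described above (the store URI and path are made up):

```perl
use strict;
use warnings;

# Illustrative values only:
my $storeUri = "local?secret-key=/etc/keys/secret&compression=zstd";
my $path     = "/nix/store/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-hello";

# String form: interpreted by a shell, which splits at '&', backgrounds
# the first half, and treats 'compression=zstd' as a variable assignment.
# open(my $fh, "-|", "nix store cat --store $storeUri $path");  # broken

# List form: no shell involved, every element is passed verbatim.
open(my $fh, "-|", "nix", "--experimental-features", "nix-command",
     "store", "cat", "--store", $storeUri, $path)
    or die "cannot run nix: $!";
print while <$fh>;
close($fh);
```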
git@71rd.net
abe35881e4 Stream files from store instead of buffering them
When an artifact is requested from Hydra, the output is first copied
from the Nix store into memory and then sent as the response, delaying
the download and taking up significant amounts of memory.

As reported in https://github.com/NixOS/hydra/issues/1357

Instead of calling a command and blocking while reading in the entire
output, this adds read_into_socket(). The function takes a command,
starts a subprocess with that command, and returns a file descriptor
attached to its stdout.
This file descriptor is then used by Catalyst's response builder to
stream the output directly.

(cherry picked from commit 459aa0a5983a0bd546399c08231468d6e9282f54)
2025-04-07 11:59:49 -04:00
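A sketch of the idea behind read_into_socket(), using the name from the commit message; the body and the Catalyst usage are illustrative assumptions, not Hydra's exact code:

```perl
use strict;
use warnings;

# Start the command and hand back a filehandle attached to its stdout,
# instead of slurping the whole output into memory first.
sub read_into_socket {
    my (%args) = @_;
    open(my $fh, "-|", @{ $args{cmd} })
        or die "cannot start command: $!";
    return $fh;
}

# Catalyst streams a response body that is a filehandle, so the artifact
# is sent to the client as it is read from the store (assumption based
# on Catalyst's documented filehandle-body support):
# $c->response->body(read_into_socket(cmd => [ "nix", "store", "cat",
#     "--store", $storeUri, $path ]));
```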
ajs124
99359c251a lazy-load evaluation errors
Closes #1362
2025-04-07 11:54:47 -04:00
Maximilian Bosch
9d8f30affe Only show stepname if it doesn't equal the name of the drv
When building e.g. nixpkgs, the "Running builds" view will mostly look
like this

    hello.x86_64-linux (Build of hello-X.Y)
    exa.x86_64-linux (Build of exa-X.Y)
    ...

This doesn't provide any useful information. Showing the step name only
makes sense if it's not a child of the job's derivation. With this
patch, that information will only be shown if the drv name (i.e. without
the `/nix/store/` prefix, `.drv` extension & hash) is not equal to the
drv name of the job itself (build.nixname).
2025-04-07 11:54:47 -04:00
Maximilian Bosch
33b982f408 Running builds view: show build step names
When using Hydra to build machine configurations, you'll often see
"nixosConfigurations.foo" five times, i.e. for each build step being
run. This isn't very helpful, I think, because in such a case a single
build step can also be compiling the Linux kernel.

This change also fetches the `drvpath` and `type` from the `buildsteps`
relation. We're already joining it, so this doesn't make much difference
(confirmed via query logging that this doesn't cause extra SQL queries).

Unfortunately build steps don't have a human-readable name, so I'm
deriving it from the drvpath by stripping away the hash (assuming that
it'll never contain a `-` and that `/nix/store/` is used as prefix). I
decided against using the Nix bindings for that to avoid too much
overhead due to store operations for each build step.
2025-04-07 11:54:47 -04:00
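A small Perl sketch of the name derivation described above, under the commit's stated assumptions (the hash never contains `-`, paths start with `/nix/store/`); the helper name is made up:

```perl
use strict;
use warnings;

# Strip the /nix/store/ prefix, the hash, and the .drv suffix to get a
# readable step name. The hash is matched as a run of characters that
# cannot contain '-', per the assumption in the commit message.
sub step_name_from_drvpath {
    my ($drvpath) = @_;
    my ($name) = $drvpath =~ m{^/nix/store/[a-z0-9]+-(.+)\.drv$};
    return $name;
}

# Illustrative path:
print step_name_from_drvpath(
    "/nix/store/0123456789abcdefghijklmnopqrstuv-hello-2.12.drv"
), "\n";   # prints "hello-2.12"
```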
Maximilian Bosch
a816e8e22c Make "timed out" and "log limit exceeded" builds aborted
In 73694087a088ed4481b4ab268a03351b1bcaac3c I gave builds that failed
because of a timeout or an exceeded log limit a stop sign, and I stand
by that reasoning: it makes it possible to distinguish between actual
build failures and rather transient things such as timeouts.

Back then I considered it a feature that these are shown in a different
tab, but I don't think that's a good idea anymore. When using a jobset to
e.g. track the regressions from a mass rebuild (like a compiler or gcc
update), "Newly failed builds" should exclusively display regressions (and
flaky builds of course, not much I can do about that).

Also, when a bunch of builds fail in such a jobset because of e.g. a
broken connection to a builder that results in a timeout, I want to be
able to restart them all w/o rebuilding actual regressions.

To make it clear that the tab doesn't contain only "Aborted" builds, I
renamed the label to "Aborted / Timed out".
2025-04-07 11:54:47 -04:00
Pierre Bourdon
0159135fc7 web: include current step status on /machines 2025-04-07 11:54:47 -04:00
John Ericson
1d2d3ae6b7
Merge pull request #1461 from NixOS/nix-2.28
Nix 2.28
2025-04-07 11:52:45 -04:00
John Ericson
257b211832
Merge pull request #1460 from NixOS/nix-2.27
Nix 2.27
2025-04-07 11:37:43 -04:00
John Ericson
d6a5df25bf Fix the build 2025-04-07 11:36:59 -04:00
John Ericson
6534a54ee5 Fix Nix code
Can now at least enter the dev shell, but the build is still broken.
2025-04-07 11:28:34 -04:00
John Ericson
1595064bee flake.lock: Update to nix and nix-eval-jobs 2.28
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/d0f98c76f962147610489e84c10033ca92e9c532?narHash=sha256-u6RhBWQ1XohTZ4Ub5ml1PTcaxQgtqFNng6Sohy1rojw%3D' (2025-04-07)
  → 'github:NixOS/nix/a4962f73b5fc874d4b16baef47921daf349addfc?narHash=sha256-r%2BpsCOW77vTSTNbxTVrYHeh6OgB0QukbnyUVDwg8s4I%3D' (2025-04-07)
• Updated input 'nix-eval-jobs':
    'github:nix-community/nix-eval-jobs/62f9c9e8d00d2ff6ab27a6197ab459a8e0808e59?narHash=sha256-PypQspB7h7EENe4RQQUQj2Ay8J1%2BO49AKNO9JbAU4Ek%3D' (2025-04-07)
  → 'github:nix-community/nix-eval-jobs/cba718bafe5dc1607c2b6761ecf53c641a6f3b21?narHash=sha256-v5n6t49X7MOpqS9j0FtI6TWOXvxuZMmGsp2OfUK5QfA%3D' (2025-04-07)
2025-04-07 11:16:09 -04:00
John Ericson
1cb1e139c4 Fix build (due to C++ API changes) 2025-04-07 11:12:12 -04:00
John Ericson
6b97e3ab7b flake.lock: Update to nix and nix-eval-jobs 2.27
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/e310c19a1aeb1ce1ed4d41d5ab2d02db596e0918?narHash=sha256-q/RgA4bB7zWai4oPySq9mch7qH14IEeom2P64SXdqHs%3D' (2025-02-18)
  → 'github:NixOS/nix/d0f98c76f962147610489e84c10033ca92e9c532?narHash=sha256-u6RhBWQ1XohTZ4Ub5ml1PTcaxQgtqFNng6Sohy1rojw%3D' (2025-04-07)
• Updated input 'nix-eval-jobs':
    'github:nix-community/nix-eval-jobs/f7418fc1fa45b96d37baa95ff3c016dd5be3876b?narHash=sha256-Lo4KFBNcY8tmBuCmEr2XV0IUZtxXHmbXPNLkov/QSU0%3D' (2025-03-26)
  → 'github:nix-community/nix-eval-jobs/62f9c9e8d00d2ff6ab27a6197ab459a8e0808e59?narHash=sha256-PypQspB7h7EENe4RQQUQj2Ay8J1%2BO49AKNO9JbAU4Ek%3D' (2025-04-07)
2025-04-07 11:02:52 -04:00
Jörg Thalheim
cad08f87d2
Merge pull request #1458 from NixOS/meson
docs: fix contribution guide for new meson-based build
2025-03-29 15:37:06 +01:00
Jörg Thalheim
3fef32b364 gitignore hydra-data as created by foreman 2025-03-29 14:31:18 +00:00
Jörg Thalheim
ae18a7b3ae fix development workflow after switching to meson-based build 2025-03-29 14:31:18 +00:00
Jörg Thalheim
b657bcdfb7
Merge pull request #1457 from dermetfan/fix-1429
hydra-eval-jobset: do not wait on n-e-j inside transaction
2025-03-29 11:36:13 +01:00
Jörg Thalheim
3b4c4972c2
Merge pull request #1449 from knedlsepp/fix-metrics-rendering-with-special-characters
Fix rendering of metrics with special characters
2025-03-29 08:52:07 +01:00
John Ericson
b2fe3f5218
Merge pull request #1455 from qowoz/226-constituent
nix-eval-jobs + constituent globs
2025-03-27 23:18:39 -04:00
Maximilian Bosch
9911f0107f Reimplement (named) constituent jobs (+globbing) based on nix-eval-jobs
Depends on https://github.com/nix-community/nix-eval-jobs/pull/349 & #1421.

Almost equivalent to #1425, but with a small change: when e.g. an
aggregate job has a glob that matches nothing, the jobset evaluation now
fails. This was the intended behavior before (hydra-eval-jobset fails
hard if an aggregate is broken); the code path was simply never reached,
however, since the aggregate was never marked as broken in this case
before.
2025-03-28 11:12:54 +10:00
zowoq
feebb61897 flake.lock: Update
Flake lock file updates:

• Updated input 'nix-eval-jobs':
    'github:nix-community/nix-eval-jobs/4b392b284877d203ae262e16af269f702df036bc?narHash=sha256-3wIReAqdTALv39gkWXLMZQvHyBOc3yPkWT2ZsItxedY%3D' (2025-02-14)
  → 'github:nix-community/nix-eval-jobs/f7418fc1fa45b96d37baa95ff3c016dd5be3876b?narHash=sha256-Lo4KFBNcY8tmBuCmEr2XV0IUZtxXHmbXPNLkov/QSU0%3D' (2025-03-26)
2025-03-28 11:12:54 +10:00
zowoq
4bcbed2f1b hydraTest: remove outdated postgresql version
error: postgresql_12 has been removed since it reached its EOL upstream
2025-03-28 11:12:48 +10:00
Robin Stumm
987dad3371 hydra-eval-jobset: do not wait on n-e-j inside transaction
fixes #1429
2025-03-26 20:23:26 +01:00
John Ericson
d2db3c7446
Merge pull request #1450 from NixOS/hydra-compress-race
Fix race condition in hydra-compress-logs
2025-03-16 14:37:39 -04:00
John Ericson
97dcdae068
Merge pull request #1451 from NixOS/revert-to-fix-hangs
Revert "Use `LegacySSHStore`"
2025-03-03 10:18:28 -05:00
John Ericson
9a5bd39d4c Revert "Use LegacySSHStore"
There were some hangs caused by this. Need to fix them, ideally
reproducing the issue in a test, before trying this again.

This reverts commit 4a4a0f901c70676ee47f830d2ff6a72789ba1baf.
2025-03-03 10:12:38 -05:00
Martin Weinelt
f1deb22c02
Fix race condition in hydra-compress-logs 2025-03-02 03:08:26 +01:00
Josef Kemetmüller
d22d030503 Fix rendering of metrics with special characters
My main motivation here is to get metrics with brackets to work in order
to support "pytest" test names:

- test_foo.py::test_bar[1]
- test_foo.py::test_bar[2]

I couldn't find an "HTML escape"-style function that would generate
valid HTML `id` attribute names from arbitrary strings, so I went with a
hash digest instead.
2025-02-27 09:25:42 +01:00
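A minimal Perl sketch of the digest approach (the `metric-` prefix and helper name are illustrative):

```perl
use strict;
use warnings;
use Digest::SHA qw(sha1_hex);

# Metric names like "test_foo.py::test_bar[1]" are not valid HTML id
# attributes, so derive a stable id from a digest of the name instead.
sub metric_element_id {
    my ($metric_name) = @_;
    return "metric-" . sha1_hex($metric_name);
}

print metric_element_id("test_foo.py::test_bar[1]"), "\n";
```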
66 changed files with 1313 additions and 373 deletions

View File

@ -1,7 +1,10 @@
name: "Test"
on:
pull_request:
merge_group:
push:
branches:
- master
jobs:
tests:
runs-on: ubuntu-latest

.gitignore vendored
View File

@ -3,6 +3,7 @@
/src/sql/hydra-postgresql.sql
/src/sql/hydra-sqlite.sql
/src/sql/tmp.sqlite
.hydra-data
result
result-*
outputs

View File

@ -72,17 +72,16 @@ Make sure **State** at the top of the page is set to "_Enabled_" and click on "_
You can build Hydra via `nix-build` using the provided [default.nix](./default.nix):
```
$ nix-build
$ nix build
```
### Development Environment
You can use the provided shell.nix to get a working development environment:
```
$ nix-shell
$ autoreconfPhase
$ configurePhase # NOTE: not ./configure
$ make
$ nix develop
$ mesonConfigurePhase
$ ninja
```
### Executing Hydra During Development
@ -91,9 +90,9 @@ When working on new features or bug fixes you need to be able to run Hydra from
can be done using [foreman](https://github.com/ddollar/foreman):
```
$ nix-shell
$ nix develop
$ # hack hack
$ make
$ ninja -C build
$ foreman start
```
@ -115,22 +114,24 @@ Start by following the steps in [Development Environment](#development-environme
Then, you can run the tests and the perlcritic linter together with:
```console
$ nix-shell
$ make check
$ nix develop
$ ninja -C build test
```
You can run a single test with:
```
$ nix-shell
$ yath test ./t/foo/bar.t
$ nix develop
$ cd build
$ meson test --test-args=../t/Hydra/Event.t testsuite
```
And you can run just perlcritic with:
```
$ nix-shell
$ make perlcritic
$ nix develop
$ cd build
$ meson test perlcritic
```
### JSON API

View File

@ -11,12 +11,6 @@ $ cd hydra
To enter a shell in which all environment variables (such as `PERL5LIB`)
and dependencies can be found:
```console
$ nix-shell
```
or, when flakes are enabled:
```console
$ nix develop
```
@ -24,15 +18,15 @@ $ nix develop
To build Hydra, you should then do:
```console
[nix-shell]$ autoreconfPhase
[nix-shell]$ configurePhase
[nix-shell]$ make -j$(nproc)
$ mesonConfigurePhase
$ ninja
```
You start a local database, the webserver, and other components with
foreman:
```console
$ ninja -C build
$ foreman start
```
@ -47,18 +41,11 @@ $ ./src/script/hydra-server
You can run Hydra's test suite with the following:
```console
[nix-shell]$ make check
[nix-shell]$ # to run as many tests as you have cores:
[nix-shell]$ make check YATH_JOB_COUNT=$NIX_BUILD_CORES
[nix-shell]$ # or run yath directly:
[nix-shell]$ yath test
[nix-shell]$ # to run as many tests as you have cores:
[nix-shell]$ yath test -j $NIX_BUILD_CORES
$ meson test
# to run as many tests as you have cores:
$ YATH_JOB_COUNT=$NIX_BUILD_CORES meson test
```
When using `yath` instead of `make check`, ensure you have run `make`
in the root of the repository at least once.
**Warning**: Currently, the tests can fail
if run with high parallelism [due to an issue in
`Test::PostgreSQL`](https://github.com/TJC/Test-postgresql/issues/40)
@ -75,7 +62,7 @@ will reload the page every time you save.
To build Hydra and its dependencies:
```console
$ nix-build release.nix -A build.x86_64-linux
$ nix build .#packages.x86_64-linux.default
```
## Development Tasks

View File

@ -92,6 +92,23 @@ Sets Gitea CI status
- `gitea_authorization.<repo-owner>`
## Gitea pulls
Create jobs based on open Gitea pull requests
### Configuration options
- `gitea_authorization.<repo-owner>`
## Gitea refs
Hydra plugin for retrieving the list of references (branches or tags) from
Gitea following a certain naming scheme.
### Configuration options
- `gitea_authorization.<repo-owner>`
## GitHub pulls
Create jobs based on open GitHub pull requests

flake.lock generated
View File

@ -12,16 +12,16 @@
"nixpkgs-regression": []
},
"locked": {
"lastModified": 1739899400,
"narHash": "sha256-q/RgA4bB7zWai4oPySq9mch7qH14IEeom2P64SXdqHs=",
"lastModified": 1744030329,
"narHash": "sha256-r+psCOW77vTSTNbxTVrYHeh6OgB0QukbnyUVDwg8s4I=",
"owner": "NixOS",
"repo": "nix",
"rev": "e310c19a1aeb1ce1ed4d41d5ab2d02db596e0918",
"rev": "a4962f73b5fc874d4b16baef47921daf349addfc",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "2.26-maintenance",
"ref": "2.28-maintenance",
"repo": "nix",
"type": "github"
}
@ -29,11 +29,11 @@
"nix-eval-jobs": {
"flake": false,
"locked": {
"lastModified": 1739500569,
"narHash": "sha256-3wIReAqdTALv39gkWXLMZQvHyBOc3yPkWT2ZsItxedY=",
"lastModified": 1744018595,
"narHash": "sha256-v5n6t49X7MOpqS9j0FtI6TWOXvxuZMmGsp2OfUK5QfA=",
"owner": "nix-community",
"repo": "nix-eval-jobs",
"rev": "4b392b284877d203ae262e16af269f702df036bc",
"rev": "cba718bafe5dc1607c2b6761ecf53c641a6f3b21",
"type": "github"
},
"original": {
@ -44,11 +44,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1739461644,
"narHash": "sha256-1o1qR0KYozYGRrnqytSpAhVBYLNBHX+Lv6I39zGRzKM=",
"lastModified": 1743987495,
"narHash": "sha256-46T2vMZ4/AfCK0Y2OjlFzJPxmdpP8GtsuEqSSJv3oe4=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "97a719c9f0a07923c957cf51b20b329f9fb9d43f",
"rev": "db8f4fe18ce772a9c8f3adf321416981c8fe9371",
"type": "github"
},
"original": {

View File

@ -4,7 +4,7 @@
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11-small";
inputs.nix = {
url = "github:NixOS/nix/2.26-maintenance";
url = "github:NixOS/nix/2.28-maintenance";
inputs.nixpkgs.follows = "nixpkgs";
# hide nix dev tooling from our lock file
@ -34,7 +34,6 @@
hydra = final.callPackage ./package.nix {
inherit (nixpkgs.lib) fileset;
rawSrc = self;
nix-perl-bindings = final.nixComponents.nix-perl-bindings;
};
};
@ -73,21 +72,29 @@
validate-openapi = hydraJobs.tests.validate-openapi.${system};
});
packages = forEachSystem (system: {
nix-eval-jobs = nixpkgs.legacyPackages.${system}.callPackage nix-eval-jobs {
nix = nix.packages.${system}.nix;
};
hydra = nixpkgs.legacyPackages.${system}.callPackage ./package.nix {
inherit (nixpkgs.lib) fileset;
inherit (self.packages.${system}) nix-eval-jobs;
rawSrc = self;
packages = forEachSystem (system: let
nixComponents = {
inherit (nix.packages.${system})
nix-util
nix-store
nix-expr
nix-fetchers
nix-flake
nix-main
nix-cmd
nix-cli
nix-perl-bindings
;
nix-perl-bindings = nix.hydraJobs.perlBindings.${system};
};
in {
nix-eval-jobs = nixpkgs.legacyPackages.${system}.callPackage nix-eval-jobs {
inherit nixComponents;
};
hydra = nixpkgs.legacyPackages.${system}.callPackage ./package.nix {
inherit (nixpkgs.lib) fileset;
inherit nixComponents;
inherit (self.packages.${system}) nix-eval-jobs;
rawSrc = self;
};
default = self.packages.${system}.hydra;
});

View File

@ -12,20 +12,6 @@ nix_util_dep = dependency('nix-util', required: true)
nix_store_dep = dependency('nix-store', required: true)
nix_main_dep = dependency('nix-main', required: true)
# Nix need extra flags not provided in its pkg-config files.
nix_dep = declare_dependency(
dependencies: [
nix_util_dep,
nix_store_dep,
nix_main_dep,
],
compile_args: [
'-include', 'nix/config-util.hh',
'-include', 'nix/config-store.hh',
'-include', 'nix/config-main.hh',
],
)
pqxx_dep = dependency('libpqxx', required: true)
prom_cpp_core_dep = dependency('prometheus-cpp-core', required: true)

View File

@ -15,7 +15,6 @@
systemd.services.hydra-send-stats.enable = false;
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql_12;
# The following is to work around the following error from hydra-server:
# [error] Caught exception in engine "Cannot determine local time zone"

View File

@ -468,7 +468,7 @@ in
elif [[ $compression == zstd ]]; then
compression="zstd --rm"
fi
find ${baseDir}/build-logs -type f -name "*.drv" -mtime +3 -size +0c | xargs -r "$compression" --force --quiet
find ${baseDir}/build-logs -ignore_readdir_race -type f -name "*.drv" -mtime +3 -size +0c | xargs -r "$compression" --force --quiet
'';
startAt = "Sun 01:45";
};

View File

@ -27,8 +27,7 @@ in
{
install = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
(import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; }).simpleTest {
name = "hydra-install";
nodes.machine = hydraServer;
testScript =
@ -43,8 +42,7 @@ in
});
notifications = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
(import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; }).simpleTest {
name = "hydra-notifications";
nodes.machine = {
imports = [ hydraServer ];
@ -56,7 +54,7 @@ in
'';
services.influxdb.enable = true;
};
testScript = ''
testScript = { nodes, ... }: ''
machine.wait_for_job("hydra-init")
# Create an admin account and some other state.
@ -87,7 +85,7 @@ in
# Setup the project and jobset
machine.succeed(
"su - hydra -c 'perl -I ${config.services.hydra-dev.package.perlDeps}/lib/perl5/site_perl ${./t/setup-notifications-jobset.pl}' >&2"
"su - hydra -c 'perl -I ${nodes.machine.services.hydra-dev.package.perlDeps}/lib/perl5/site_perl ${./t/setup-notifications-jobset.pl}' >&2"
)
# Wait until hydra has build the job and
@ -101,9 +99,10 @@ in
});
gitea = forEachSystem (system:
let pkgs = nixpkgs.legacyPackages.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
makeTest {
let
pkgs = nixpkgs.legacyPackages.${system};
in
(import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; }).makeTest {
name = "hydra-gitea";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
@ -145,11 +144,25 @@ in
git -C /tmp/repo add .
git config --global user.email test@localhost
git config --global user.name test
# Create initial commit
git -C /tmp/repo commit -m 'Initial import'
git -C /tmp/repo remote add origin gitea@machine:root/repo
GIT_SSH_COMMAND='ssh -i $HOME/.ssh/privk -o StrictHostKeyChecking=no' \
export GIT_SSH_COMMAND='ssh -i $HOME/.ssh/privk -o StrictHostKeyChecking=no'
git -C /tmp/repo push origin master
git -C /tmp/repo log >&2
# Create PR branch
git -C /tmp/repo checkout -b pr
git -C /tmp/repo commit --allow-empty -m 'Additional change'
git -C /tmp/repo push origin pr
git -C /tmp/repo log >&2
# Create release branch
git -C /tmp/repo checkout -b release/release-1.0
git -C /tmp/repo commit --allow-empty -m 'Additional change'
git -C /tmp/repo push origin release/release-1.0
git -C /tmp/repo log >&2
'';
scripts.hydra-setup = pkgs.writeShellScript "hydra.sh" ''
@ -185,7 +198,7 @@ in
cat >data.json <<EOF
{
"description": "Trivial",
"checkinterval": "60",
"checkinterval": "20",
"enabled": "1",
"visible": "1",
"keepnr": "1",
@ -199,7 +212,17 @@ in
"gitea_repo_name": {"value": "repo", "type": "string"},
"gitea_repo_owner": {"value": "root", "type": "string"},
"gitea_status_repo": {"value": "git", "type": "string"},
"gitea_http_url": {"value": "http://localhost:3001", "type": "string"}
"gitea_http_url": {"value": "http://localhost:3001", "type": "string"},
"pulls": {
"type": "giteapulls",
"value": "localhost:3001 root repo http",
"emailresponsible": false
},
"releases": {
"type": "gitea_refs",
"value": "localhost:3001 root repo heads http - release",
"emailresponseible": false
}
}
}
EOF
@ -227,15 +250,41 @@ in
};
smallDrv = pkgs.writeText "jobset.nix" ''
{ trivial = builtins.derivation {
name = "trivial";
{ pulls, releases, ... }:
let
genDrv = name: builtins.derivation {
inherit name;
system = "${system}";
builder = "/bin/sh";
allowSubstitutes = false;
preferLocalBuild = true;
args = ["-c" "echo success > $out; exit 0"];
};
prs = builtins.fromJSON (builtins.readFile pulls);
prJobNames = map (n: "pr-''${n}") (builtins.attrNames prs);
prJobset = builtins.listToAttrs (
map (
name: {
inherit name;
value = genDrv name;
}
) prJobNames
);
rels = builtins.fromJSON (builtins.readFile releases);
relJobNames = builtins.attrNames rels;
relJobset = builtins.listToAttrs (
map (
name: {
inherit name;
value = genDrv name;
}
) relJobNames
);
in {
trivial = genDrv "trivial";
} // prJobset // relJobset
'';
in
''
@ -279,18 +328,34 @@ in
+ '| jq .buildstatus | xargs test 0 -eq'
)
machine.sleep(3)
data = machine.succeed(
'curl -Lf -s "http://localhost:3001/api/v1/repos/root/repo/statuses/$(cd /tmp/repo && git show | head -n1 | awk "{print \\$2}")" '
'curl -Lf -s "http://localhost:3001/api/v1/repos/root/repo/statuses/$(cd /tmp/repo && git show master | head -n1 | awk "{print \\$2}")?sort=leastindex" '
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
)
response = json.loads(data)
assert len(response) == 2, "Expected exactly three status updates for latest commit (queued, finished)!"
assert len(response) == 2, "Expected exactly two status updates for latest commit (queued, finished)!"
assert response[0]['status'] == "success", "Expected finished status to be success!"
assert response[1]['status'] == "pending", "Expected queued status to be pending!"
# giteapulls test
machine.succeed(
"curl --fail -X POST http://localhost:3001/api/v1/repos/root/repo/pulls "
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
+ ' -d \'{"title":"Test PR", "base":"master", "head": "pr"}\'''
)
machine.wait_until_succeeds(
'curl -Lf -s http://localhost:3000/build/2 -H "Accept: application/json" '
+ '| jq .buildstatus | xargs test 0 -eq'
)
machine.shutdown()
'';
});

View File

@ -8,11 +8,7 @@
, perlPackages
, nix-util
, nix-store
, nix-main
, nix-cli
, nix-perl-bindings
, nixComponents
, git
, makeWrapper
@ -65,7 +61,7 @@ let
name = "hydra-perl-deps";
paths = lib.closePropagation
([
nix-perl-bindings
nixComponents.nix-perl-bindings
git
] ++ (with perlPackages; [
AuthenSASL
@ -93,6 +89,7 @@ let
DateTime
DBDPg
DBDSQLite
DBIxClassHelpers
DigestSHA1
EmailMIME
EmailSender
@ -165,7 +162,7 @@ stdenv.mkDerivation (finalAttrs: {
nukeReferences
pkg-config
mdbook
nix-cli
nixComponents.nix-cli
perlDeps
perl
unzip
@ -175,9 +172,9 @@ stdenv.mkDerivation (finalAttrs: {
libpqxx
openssl
libxslt
nix-util
nix-store
nix-main
nixComponents.nix-util
nixComponents.nix-store
nixComponents.nix-main
perlDeps
perl
boost
@ -204,14 +201,14 @@ stdenv.mkDerivation (finalAttrs: {
glibcLocales
libressl.nc
python3
nix-cli
nixComponents.nix-cli
];
hydraPath = lib.makeBinPath (
[
subversion
openssh
nix-cli
nixComponents.nix-cli
coreutils
findutils
pixz
@ -241,7 +238,7 @@ stdenv.mkDerivation (finalAttrs: {
shellHook = ''
pushd $(git rev-parse --show-toplevel) >/dev/null
PATH=$(pwd)/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/src/hydra-queue-runner:$PATH
PATH=$(pwd)/build/src/hydra-evaluator:$(pwd)/build/src/script:$(pwd)/build/src/hydra-queue-runner:$PATH
PERL5LIB=$(pwd)/src/lib:$PERL5LIB
export HYDRA_HOME="$(pwd)/src/"
mkdir -p .hydra-data
@ -272,7 +269,7 @@ stdenv.mkDerivation (finalAttrs: {
--prefix PATH ':' $out/bin:$hydraPath \
--set HYDRA_RELEASE ${version} \
--set HYDRA_HOME $out/libexec/hydra \
--set NIX_RELEASE ${nix-cli.name or "unknown"} \
--set NIX_RELEASE ${nixComponents.nix-cli.name or "unknown"} \
--set NIX_EVAL_JOBS_RELEASE ${nix-eval-jobs.name or "unknown"}
done
'';

View File

@ -1,8 +1,8 @@
#include "db.hh"
#include "hydra-config.hh"
#include "pool.hh"
#include "shared.hh"
#include "signals.hh"
#include <nix/util/pool.hh>
#include <nix/main/shared.hh>
#include <nix/util/signals.hh>
#include <algorithm>
#include <thread>

View File

@ -2,7 +2,8 @@ hydra_evaluator = executable('hydra-evaluator',
'hydra-evaluator.cc',
dependencies: [
libhydra_dep,
nix_dep,
nix_util_dep,
nix_main_dep,
pqxx_dep,
],
install: true,

View File

@ -5,17 +5,20 @@
#include <sys/stat.h>
#include <fcntl.h>
#include "build-result.hh"
#include "path.hh"
#include "legacy-ssh-store.hh"
#include "serve-protocol.hh"
#include <nix/store/build-result.hh>
#include <nix/store/path.hh>
#include <nix/store/legacy-ssh-store.hh>
#include <nix/store/serve-protocol.hh>
#include <nix/store/serve-protocol-impl.hh>
#include "state.hh"
#include "current-process.hh"
#include "processes.hh"
#include "util.hh"
#include "ssh.hh"
#include "finally.hh"
#include "url.hh"
#include <nix/util/current-process.hh>
#include <nix/util/processes.hh>
#include <nix/util/util.hh>
#include <nix/store/serve-protocol.hh>
#include <nix/store/serve-protocol-impl.hh>
#include <nix/store/ssh.hh>
#include <nix/util/finally.hh>
#include <nix/util/url.hh>
using namespace nix;
@ -36,6 +39,38 @@ bool ::Machine::isLocalhost() const
namespace nix::build_remote {
static std::unique_ptr<SSHMaster::Connection> openConnection(
::Machine::ptr machine, SSHMaster & master)
{
Strings command = {"nix-store", "--serve", "--write"};
if (machine->isLocalhost()) {
command.push_back("--builders");
command.push_back("");
} else {
auto remoteStore = machine->storeUri.params.find("remote-store");
if (remoteStore != machine->storeUri.params.end()) {
command.push_back("--store");
command.push_back(shellEscape(remoteStore->second));
}
}
auto ret = master.startCommand(std::move(command), {
"-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
});
// XXX: determine the actual max value we can use from /proc.
// FIXME: Should this be upstreamed into `startCommand` in Nix?
int pipesize = 1024 * 1024;
fcntl(ret->in.get(), F_SETPIPE_SZ, &pipesize);
fcntl(ret->out.get(), F_SETPIPE_SZ, &pipesize);
return ret;
}
static void copyClosureTo(
::Machine::Connection & conn,
Store & destStore,
@ -52,8 +87,8 @@ static void copyClosureTo(
// FIXME: substitute output pollutes our build log
/* Get back the set of paths that are already valid on the remote
host. */
auto present = conn.store->queryValidPaths(
closure, true, useSubstitutes);
auto present = conn.queryValidPaths(
destStore, true, closure, useSubstitutes);
if (present.size() == closure.size()) return;
@ -68,7 +103,12 @@ static void copyClosureTo(
std::unique_lock<std::timed_mutex> sendLock(conn.machine->state->sendLock,
std::chrono::seconds(600));
conn.store->addMultipleToStoreLegacy(destStore, missing);
conn.to << ServeProto::Command::ImportPaths;
destStore.exportPaths(missing, conn.to);
conn.to.flush();
if (readInt(conn.from) != 1)
throw Error("remote machine failed to import closure");
}
@ -188,7 +228,7 @@ static BuildResult performBuild(
counter & nrStepsBuilding
)
{
auto kont = conn.store->buildDerivationAsync(drvPath, drv, options);
conn.putBuildDerivationRequest(localStore, drvPath, drv, options);
BuildResult result;
@ -197,10 +237,7 @@ static BuildResult performBuild(
startTime = time(0);
{
MaintainCount<counter> mc(nrStepsBuilding);
result = kont();
// Without proper call-once functions, we need to manually
// delete after calling.
kont = {};
result = ServeProto::Serialise<BuildResult>::read(localStore, conn);
}
stopTime = time(0);
@ -216,7 +253,7 @@ static BuildResult performBuild(
// If the protocol was too old to give us `builtOutputs`, initialize
// it manually by introspecting the derivation.
if (GET_PROTOCOL_MINOR(conn.store->getProtocol()) < 6)
if (GET_PROTOCOL_MINOR(conn.remoteVersion) < 6)
{
// If the remote is too old to handle CA derivations, we cant get this
// far anyways
@ -261,10 +298,11 @@ static void copyPathFromRemote(
lambda function only gets executed if someone tries to read
from source2, we will send the command from here rather
than outside the lambda. */
conn.store->narFromPath(info.path, [&](Source & source) {
TeeSource tee{source, sink};
extractNarData(tee, conn.store->printStorePath(info.path), narMembers);
});
conn.to << ServeProto::Command::DumpStorePath << localStore.printStorePath(info.path);
conn.to.flush();
TeeSource tee(conn.from, sink);
extractNarData(tee, localStore.printStorePath(info.path), narMembers);
});
destStore.addToStore(info, *source2, NoRepair, NoCheckSigs);
@ -348,8 +386,19 @@ void RemoteResult::updateWithBuildResult(const nix::BuildResult & buildResult)
}
/* Utility guard object to auto-release a semaphore on destruction. */
template <typename T>
class SemaphoreReleaser {
public:
SemaphoreReleaser(T* s) : sem(s) {}
~SemaphoreReleaser() { sem->release(); }
private:
T* sem;
};
void State::buildRemote(ref<Store> destStore,
std::unique_ptr<MachineReservation> reservation,
::Machine::ptr machine, Step::ptr step,
const ServeProto::BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,
@ -366,39 +415,30 @@ void State::buildRemote(ref<Store> destStore,
updateStep(ssConnecting);
// FIXME: rewrite to use Store.
::Machine::Connection conn {
.machine = machine,
.store = [&]{
auto * pSpecified = std::get_if<StoreReference::Specified>(&machine->storeUri.variant);
auto storeRef = machine->completeStoreReference();
auto * pSpecified = std::get_if<StoreReference::Specified>(&storeRef.variant);
if (!pSpecified || pSpecified->scheme != "ssh") {
throw Error("Currently, only (legacy-)ssh stores are supported!");
}
auto remoteStore = machine->openStore().dynamic_pointer_cast<LegacySSHStore>();
assert(remoteStore);
remoteStore->connPipeSize = 1024 * 1024;
if (machine->isLocalhost()) {
auto rp_new = remoteStore->remoteProgram.get();
rp_new.push_back("--builders");
rp_new.push_back("");
const_cast<nix::Setting<Strings> &>(remoteStore->remoteProgram).assign(rp_new);
}
remoteStore->extraSshArgs = {
"-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
LegacySSHStoreConfig storeConfig {
pSpecified->scheme,
pSpecified->authority,
storeRef.params
};
const_cast<nix::Setting<int> &>(remoteStore->logFD).assign(logFD.get());
return nix::ref{remoteStore};
}(),
};
auto master = storeConfig.createSSHMaster(
false, // no SSH master yet
logFD.get());
// FIXME: rewrite to use Store.
auto child = build_remote::openConnection(machine, master);
{
auto activeStepState(activeStep->state_.lock());
if (activeStepState->cancelled) throw Error("step cancelled");
activeStepState->pid = conn.store->getConnectionPid();
activeStepState->pid = child->sshPid;
}
Finally clearPid([&]() {
@ -413,12 +453,35 @@ void State::buildRemote(ref<Store> destStore,
process. Meh. */
});
::Machine::Connection conn {
{
.to = child->in.get(),
.from = child->out.get(),
/* Handshake. */
.remoteVersion = 0xdadbeef, // FIXME avoid dummy initialize
},
/*.machine =*/ machine,
};
Finally updateStats([&]() {
auto stats = conn.store->getConnectionStats();
bytesReceived += stats.bytesReceived;
bytesSent += stats.bytesSent;
bytesReceived += conn.from.read;
bytesSent += conn.to.written;
});
constexpr ServeProto::Version our_version = 0x206;
try {
conn.remoteVersion = decltype(conn)::handshake(
conn.to,
conn.from,
our_version,
machine->storeUri.render());
} catch (EndOfFile & e) {
child->sshPid.wait();
std::string s = chomp(readFile(result.logFile));
throw Error("cannot connect to %1%: %2%", machine->storeUri.render(), s);
}
{
auto info(machine->state->connectInfo.lock());
info->consecutiveFailures = 0;
@ -475,6 +538,23 @@ void State::buildRemote(ref<Store> destStore,
result.logFile = "";
}
/* Throttle CPU-bound work. Opportunistically skip updating the current
* step, since this requires a DB roundtrip. */
if (!localWorkThrottler.try_acquire()) {
MaintainCount<counter> mc(nrStepsWaitingForDownloadSlot);
updateStep(ssWaitingForLocalSlot);
localWorkThrottler.acquire();
}
SemaphoreReleaser releaser(&localWorkThrottler);
/* Once we've started copying outputs, release the machine reservation
* so further builds can happen. We do not release the machine earlier
* to avoid situations where the queue runner is bottlenecked on
* copying outputs and we end up building too many things that we
* haven't been able to allow copy slots for. */
reservation.reset();
wakeDispatcher();
StorePathSet outputs;
for (auto & [_, realisation] : buildResult.builtOutputs)
outputs.insert(realisation.outPath);
@ -487,7 +567,7 @@ void State::buildRemote(ref<Store> destStore,
auto now1 = std::chrono::steady_clock::now();
auto infos = conn.store->queryPathInfosUncached(outputs);
auto infos = conn.queryPathInfos(*localStore, outputs);
size_t totalNarSize = 0;
for (auto & [_, info] : infos) totalNarSize += info.narSize;
@ -522,11 +602,9 @@ void State::buildRemote(ref<Store> destStore,
}
}
/* Shut down the connection done by RAII.
Only difference is kill() instead of wait() (i.e. send signal
then wait())
*/
/* Shut down the connection. */
child->in = -1;
child->sshPid.wait();
} catch (Error & e) {
/* Disable this machine until a certain period of time has

View File

@ -1,7 +1,7 @@
#include "hydra-build-result.hh"
#include "store-api.hh"
#include "util.hh"
#include "source-accessor.hh"
#include <nix/store/store-api.hh>
#include <nix/util/util.hh>
#include <nix/util/source-accessor.hh>
#include <regex>

View File

@ -2,8 +2,8 @@
#include "state.hh"
#include "hydra-build-result.hh"
#include "finally.hh"
#include "binary-cache-store.hh"
#include <nix/util/finally.hh>
#include <nix/store/binary-cache-store.hh>
using namespace nix;
@ -16,7 +16,7 @@ void setThreadName(const std::string & name)
}
void State::builder(MachineReservation::ptr reservation)
void State::builder(std::unique_ptr<MachineReservation> reservation)
{
setThreadName("bld~" + std::string(reservation->step->drvPath.to_string()));
@ -35,22 +35,20 @@ void State::builder(MachineReservation::ptr reservation)
activeSteps_.lock()->erase(activeStep);
});
std::string machine = reservation->machine->storeUri.render();
try {
auto destStore = getDestStore();
res = doBuildStep(destStore, reservation, activeStep);
// Might release the reservation.
res = doBuildStep(destStore, std::move(reservation), activeStep);
} catch (std::exception & e) {
printMsg(lvlError, "uncaught exception building %s on %s: %s",
localStore->printStorePath(reservation->step->drvPath),
reservation->machine->storeUri.render(),
localStore->printStorePath(activeStep->step->drvPath),
machine,
e.what());
}
}
/* Release the machine and wake up the dispatcher. */
assert(reservation.unique());
reservation = 0;
wakeDispatcher();
/* If there was a temporary failure, retry the step after an
exponentially increasing interval. */
Step::ptr step = wstep.lock();
@ -72,11 +70,11 @@ void State::builder(MachineReservation::ptr reservation)
State::StepResult State::doBuildStep(nix::ref<Store> destStore,
MachineReservation::ptr reservation,
std::unique_ptr<MachineReservation> reservation,
std::shared_ptr<ActiveStep> activeStep)
{
auto & step(reservation->step);
auto & machine(reservation->machine);
auto step(reservation->step);
auto machine(reservation->machine);
{
auto step_(step->state.lock());
@ -211,7 +209,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
try {
/* FIXME: referring builds may have conflicting timeouts. */
buildRemote(destStore, machine, step, buildOptions, result, activeStep, updateStep, narMembers);
buildRemote(destStore, std::move(reservation), machine, step, buildOptions, result, activeStep, updateStep, narMembers);
} catch (Error & e) {
if (activeStep->state_.lock()->cancelled) {
printInfo("marking step %d of build %d as cancelled", stepNr, buildId);

View File

@ -40,13 +40,15 @@ void State::dispatcher()
printMsg(lvlDebug, "dispatcher woken up");
nrDispatcherWakeups++;
auto now1 = std::chrono::steady_clock::now();
auto t_before_work = std::chrono::steady_clock::now();
auto sleepUntil = doDispatch();
auto now2 = std::chrono::steady_clock::now();
auto t_after_work = std::chrono::steady_clock::now();
dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(now2 - now1).count();
prom.dispatcher_time_spent_running.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(t_after_work - t_before_work).count();
/* Sleep until we're woken up (either because a runnable build
is added, or because a build finishes). */
@ -60,6 +62,10 @@ void State::dispatcher()
*dispatcherWakeup_ = false;
}
auto t_after_sleep = std::chrono::steady_clock::now();
prom.dispatcher_time_spent_waiting.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
} catch (std::exception & e) {
printError("dispatcher: %s", e.what());
sleep(1);
@ -282,7 +288,7 @@ system_time State::doDispatch()
/* Make a slot reservation and start a thread to
do the build. */
auto builderThread = std::thread(&State::builder, this,
std::make_shared<MachineReservation>(*this, step, mi.machine));
std::make_unique<MachineReservation>(*this, step, mi.machine));
builderThread.detach(); // FIXME?
keepGoing = true;

View File

@ -2,9 +2,9 @@
#include <memory>
#include "hash.hh"
#include "derivations.hh"
#include "store-api.hh"
#include <nix/util/hash.hh>
#include <nix/store/derivations.hh>
#include <nix/store/store-api.hh>
#include "nar-extractor.hh"
struct BuildProduct

View File

@ -11,16 +11,16 @@
#include <nlohmann/json.hpp>
#include "signals.hh"
#include <nix/util/signals.hh>
#include "state.hh"
#include "hydra-build-result.hh"
#include "store-api.hh"
#include "remote-store.hh"
#include <nix/store/store-api.hh>
#include <nix/store/remote-store.hh>
#include "globals.hh"
#include <nix/store/globals.hh>
#include "hydra-config.hh"
#include "s3-binary-cache-store.hh"
#include "shared.hh"
#include <nix/store/s3-binary-cache-store.hh>
#include <nix/main/shared.hh>
using namespace nix;
using nlohmann::json;
@ -70,10 +70,31 @@ State::PromMetrics::PromMetrics()
.Register(*registry)
.Add({})
)
, queue_max_id(
prometheus::BuildGauge()
.Name("hydraqueuerunner_queue_max_build_id_info")
.Help("Maximum build record ID in the queue")
, dispatcher_time_spent_running(
prometheus::BuildCounter()
.Name("hydraqueuerunner_dispatcher_time_spent_running")
.Help("Time (in micros) spent running the dispatcher")
.Register(*registry)
.Add({})
)
, dispatcher_time_spent_waiting(
prometheus::BuildCounter()
.Name("hydraqueuerunner_dispatcher_time_spent_waiting")
.Help("Time (in micros) spent waiting for the dispatcher to obtain work")
.Register(*registry)
.Add({})
)
, queue_monitor_time_spent_running(
prometheus::BuildCounter()
.Name("hydraqueuerunner_queue_monitor_time_spent_running")
.Help("Time (in micros) spent running the queue monitor")
.Register(*registry)
.Add({})
)
, queue_monitor_time_spent_waiting(
prometheus::BuildCounter()
.Name("hydraqueuerunner_queue_monitor_time_spent_waiting")
.Help("Time (in micros) spent waiting for the queue monitor to obtain work")
.Register(*registry)
.Add({})
)
@ -85,6 +106,7 @@ State::State(std::optional<std::string> metricsAddrOpt)
: config(std::make_unique<HydraConfig>())
, maxUnsupportedTime(config->getIntOption("max_unsupported_time", 0))
, dbPool(config->getIntOption("max_db_connections", 128))
, localWorkThrottler(config->getIntOption("max_local_worker_threads", std::min(maxSupportedLocalWorkers, std::max(4u, std::thread::hardware_concurrency()) - 2)))
, maxOutputSize(config->getIntOption("max_output_size", 2ULL << 30))
, maxLogSize(config->getIntOption("max_log_size", 64ULL << 20))
, uploadLogsToBinaryCache(config->getBoolOption("upload_logs_to_binary_cache", false))
@ -551,6 +573,7 @@ void State::dumpStatus(Connection & conn)
{"nrActiveSteps", activeSteps_.lock()->size()},
{"nrStepsBuilding", nrStepsBuilding.load()},
{"nrStepsCopyingTo", nrStepsCopyingTo.load()},
{"nrStepsWaitingForDownloadSlot", nrStepsWaitingForDownloadSlot.load()},
{"nrStepsCopyingFrom", nrStepsCopyingFrom.load()},
{"nrStepsWaiting", nrStepsWaiting.load()},
{"nrUnsupportedSteps", nrUnsupportedSteps.load()},
@ -592,6 +615,7 @@ void State::dumpStatus(Connection & conn)
}
{
auto machines_json = json::object();
auto machines_(machines.lock());
for (auto & i : *machines_) {
auto & m(i.second);
@ -618,8 +642,9 @@ void State::dumpStatus(Connection & conn)
machine["avgStepTime"] = (float) s->totalStepTime / s->nrStepsDone;
machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / s->nrStepsDone;
}
statusJson["machines"][m->storeUri.render()] = machine;
machines_json[m->storeUri.render()] = machine;
}
statusJson["machines"] = machines_json;
}
{

View File

@ -13,7 +13,9 @@ hydra_queue_runner = executable('hydra-queue-runner',
srcs,
dependencies: [
libhydra_dep,
nix_dep,
nix_util_dep,
nix_store_dep,
nix_main_dep,
pqxx_dep,
prom_cpp_core_dep,
prom_cpp_pull_dep,

View File

@ -1,6 +1,6 @@
#include "nar-extractor.hh"
#include "archive.hh"
#include <nix/util/archive.hh>
#include <unordered_set>

View File

@ -1,9 +1,9 @@
#pragma once
#include "source-accessor.hh"
#include "types.hh"
#include "serialise.hh"
#include "hash.hh"
#include <nix/util/source-accessor.hh>
#include <nix/util/types.hh>
#include <nix/util/serialise.hh>
#include <nix/util/hash.hh>
struct NarMemberData
{

View File

@ -1,6 +1,8 @@
#include "state.hh"
#include "hydra-build-result.hh"
#include "globals.hh"
#include <nix/store/globals.hh>
#include <nix/store/parsed-derivations.hh>
#include <nix/util/thread-pool.hh>
#include <cstring>
@ -37,16 +39,21 @@ void State::queueMonitorLoop(Connection & conn)
auto destStore = getDestStore();
unsigned int lastBuildId = 0;
bool quit = false;
while (!quit) {
auto t_before_work = std::chrono::steady_clock::now();
localStore->clearPathInfoCache();
bool done = getQueuedBuilds(conn, destStore, lastBuildId);
bool done = getQueuedBuilds(conn, destStore);
if (buildOne && buildOneDone) quit = true;
auto t_after_work = std::chrono::steady_clock::now();
prom.queue_monitor_time_spent_running.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
/* Sleep until we get notification from the database about an
event. */
if (done && !quit) {
@ -56,12 +63,10 @@ void State::queueMonitorLoop(Connection & conn)
conn.get_notifs();
if (auto lowestId = buildsAdded.get()) {
lastBuildId = std::min(lastBuildId, static_cast<unsigned>(std::stoul(*lowestId) - 1));
printMsg(lvlTalkative, "got notification: new builds added to the queue");
}
if (buildsRestarted.get()) {
printMsg(lvlTalkative, "got notification: builds restarted");
lastBuildId = 0; // check all builds
}
if (buildsCancelled.get() || buildsDeleted.get() || buildsBumped.get()) {
printMsg(lvlTalkative, "got notification: builds cancelled or bumped");
@ -71,6 +76,10 @@ void State::queueMonitorLoop(Connection & conn)
printMsg(lvlTalkative, "got notification: jobset shares changed");
processJobsetSharesChange(conn);
}
auto t_after_sleep = std::chrono::steady_clock::now();
prom.queue_monitor_time_spent_waiting.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
}
exit(0);
@ -84,20 +93,18 @@ struct PreviousFailure : public std::exception {
bool State::getQueuedBuilds(Connection & conn,
ref<Store> destStore, unsigned int & lastBuildId)
ref<Store> destStore)
{
prom.queue_checks_started.Increment();
printInfo("checking the queue for builds > %d...", lastBuildId);
printInfo("checking the queue for builds...");
/* Grab the queued builds from the database, but don't process
them yet (since we don't want a long-running transaction). */
std::vector<BuildID> newIDs;
std::map<BuildID, Build::ptr> newBuildsByID;
std::unordered_map<BuildID, Build::ptr> newBuildsByID;
std::multimap<StorePath, BuildID> newBuildsByPath;
unsigned int newLastBuildId = lastBuildId;
{
pqxx::work txn(conn);
@ -106,17 +113,12 @@ bool State::getQueuedBuilds(Connection & conn,
"jobsets.name as jobset, job, drvPath, maxsilent, timeout, timestamp, "
"globalPriority, priority from Builds "
"inner join jobsets on builds.jobset_id = jobsets.id "
"where builds.id > $1 and finished = 0 order by globalPriority desc, builds.id",
lastBuildId);
"where finished = 0 order by globalPriority desc, random()");
for (auto const & row : res) {
auto builds_(builds.lock());
BuildID id = row["id"].as<BuildID>();
if (buildOne && id != buildOne) continue;
if (id > newLastBuildId) {
newLastBuildId = id;
prom.queue_max_id.Set(id);
}
if (builds_->count(id)) continue;
auto build = std::make_shared<Build>(
@ -318,15 +320,13 @@ bool State::getQueuedBuilds(Connection & conn,
/* Stop after a certain time to allow priority bumps to be
processed. */
if (std::chrono::system_clock::now() > start + std::chrono::seconds(600)) {
if (std::chrono::system_clock::now() > start + std::chrono::seconds(60)) {
prom.queue_checks_early_exits.Increment();
break;
}
}
prom.queue_checks_finished.Increment();
lastBuildId = newBuildsByID.empty() ? newLastBuildId : newBuildsByID.begin()->first - 1;
return newBuildsByID.empty();
}
@ -405,6 +405,34 @@ void State::processQueueChange(Connection & conn)
}
std::map<DrvOutput, std::optional<StorePath>> State::getMissingRemotePaths(
ref<Store> destStore,
const std::map<DrvOutput, std::optional<StorePath>> & paths)
{
Sync<std::map<DrvOutput, std::optional<StorePath>>> missing_;
ThreadPool tp;
for (auto & [output, maybeOutputPath] : paths) {
if (!maybeOutputPath) {
auto missing(missing_.lock());
missing->insert({output, maybeOutputPath});
} else {
tp.enqueue([&] {
if (!destStore->isValidPath(*maybeOutputPath)) {
auto missing(missing_.lock());
missing->insert({output, maybeOutputPath});
}
});
}
}
tp.process();
auto missing(missing_.lock());
return *missing;
}
Step::ptr State::createStep(ref<Store> destStore,
Connection & conn, Build::ptr build, const StorePath & drvPath,
Build::ptr referringBuild, Step::ptr referringStep, std::set<StorePath> & finishedDrvs,
@ -463,14 +491,17 @@ Step::ptr State::createStep(ref<Store> destStore,
it's not runnable yet, and other threads won't make it
runnable while step->created == false. */
step->drv = std::make_unique<Derivation>(localStore->readDerivation(drvPath));
step->parsedDrv = std::make_unique<ParsedDerivation>(drvPath, *step->drv);
{
auto parsedDrv = ParsedDerivation{drvPath, *step->drv};
step->drvOptions = std::make_unique<DerivationOptions>(DerivationOptions::fromParsedDerivation(parsedDrv));
}
step->preferLocalBuild = step->parsedDrv->willBuildLocally(*localStore);
step->preferLocalBuild = step->drvOptions->willBuildLocally(*localStore, *step->drv);
step->isDeterministic = getOr(step->drv->env, "isDetermistic", "0") == "1";
step->systemType = step->drv->platform;
{
StringSet features = step->requiredSystemFeatures = step->parsedDrv->getRequiredSystemFeatures();
StringSet features = step->requiredSystemFeatures = step->drvOptions->getRequiredSystemFeatures(*step->drv);
if (step->preferLocalBuild)
features.insert("local");
if (!features.empty()) {
@ -485,16 +516,15 @@ Step::ptr State::createStep(ref<Store> destStore,
/* Are all outputs valid? */
auto outputHashes = staticOutputHashes(*localStore, *(step->drv));
bool valid = true;
std::map<DrvOutput, std::optional<StorePath>> missing;
std::map<DrvOutput, std::optional<StorePath>> paths;
for (auto & [outputName, maybeOutputPath] : destStore->queryPartialDerivationOutputMap(drvPath, &*localStore)) {
auto outputHash = outputHashes.at(outputName);
if (maybeOutputPath && destStore->isValidPath(*maybeOutputPath))
continue;
valid = false;
missing.insert({{outputHash, outputName}, maybeOutputPath});
paths.insert({{outputHash, outputName}, maybeOutputPath});
}
auto missing = getMissingRemotePaths(destStore, paths);
bool valid = missing.empty();
/* Try to copy the missing paths from the local store or from
substitutes. */
if (!missing.empty()) {

View File

@ -6,6 +6,8 @@
#include <map>
#include <memory>
#include <queue>
#include <regex>
#include <semaphore>
#include <prometheus/counter.h>
#include <prometheus/gauge.h>
@ -13,15 +15,18 @@
#include "db.hh"
#include "parsed-derivations.hh"
#include "pathlocks.hh"
#include "pool.hh"
#include "build-result.hh"
#include "store-api.hh"
#include "sync.hh"
#include <nix/store/derivations.hh>
#include <nix/store/derivation-options.hh>
#include <nix/store/pathlocks.hh>
#include <nix/util/pool.hh>
#include <nix/store/build-result.hh>
#include <nix/store/store-api.hh>
#include <nix/util/sync.hh>
#include "nar-extractor.hh"
#include "legacy-ssh-store.hh"
#include "machines.hh"
#include <nix/store/serve-protocol.hh>
#include <nix/store/serve-protocol-impl.hh>
#include <nix/store/serve-protocol-connection.hh>
#include <nix/store/machines.hh>
typedef unsigned int BuildID;
@ -55,6 +60,7 @@ typedef enum {
ssConnecting = 10,
ssSendingInputs = 20,
ssBuilding = 30,
ssWaitingForLocalSlot = 35,
ssReceivingOutputs = 40,
ssPostProcessing = 50,
} StepState;
@ -165,7 +171,7 @@ struct Step
nix::StorePath drvPath;
std::unique_ptr<nix::Derivation> drv;
std::unique_ptr<nix::ParsedDerivation> parsedDrv;
std::unique_ptr<nix::DerivationOptions> drvOptions;
std::set<std::string> requiredSystemFeatures;
bool preferLocalBuild;
bool isDeterministic;
@ -290,11 +296,9 @@ struct Machine : nix::Machine
bool isLocalhost() const;
// A connection to a machine
struct Connection {
struct Connection : nix::ServeProto::BasicClientConnection {
// Backpointer to the machine
ptr machine;
// Opened store
nix::ref<nix::LegacySSHStore> store;
};
};
@ -352,6 +356,10 @@ private:
typedef std::map<nix::StoreReference::Variant, Machine::ptr> Machines;
nix::Sync<Machines> machines; // FIXME: use atomic_shared_ptr
/* Throttler for CPU-bound local work. */
static constexpr unsigned int maxSupportedLocalWorkers = 1024;
std::counting_semaphore<maxSupportedLocalWorkers> localWorkThrottler;
/* Various stats. */
time_t startedAt;
counter nrBuildsRead{0};
@ -361,6 +369,7 @@ private:
counter nrStepsDone{0};
counter nrStepsBuilding{0};
counter nrStepsCopyingTo{0};
counter nrStepsWaitingForDownloadSlot{0};
counter nrStepsCopyingFrom{0};
counter nrStepsWaiting{0};
counter nrUnsupportedSteps{0};
@ -391,7 +400,6 @@ private:
struct MachineReservation
{
typedef std::shared_ptr<MachineReservation> ptr;
State & state;
Step::ptr step;
Machine::ptr machine;
@ -449,7 +457,12 @@ private:
prometheus::Counter& queue_steps_created;
prometheus::Counter& queue_checks_early_exits;
prometheus::Counter& queue_checks_finished;
prometheus::Gauge& queue_max_id;
prometheus::Counter& dispatcher_time_spent_running;
prometheus::Counter& dispatcher_time_spent_waiting;
prometheus::Counter& queue_monitor_time_spent_running;
prometheus::Counter& queue_monitor_time_spent_waiting;
PromMetrics();
};
@ -493,8 +506,7 @@ private:
void queueMonitorLoop(Connection & conn);
/* Check the queue for new builds. */
bool getQueuedBuilds(Connection & conn,
nix::ref<nix::Store> destStore, unsigned int & lastBuildId);
bool getQueuedBuilds(Connection & conn, nix::ref<nix::Store> destStore);
/* Handle cancellation, deletion and priority bumps. */
void processQueueChange(Connection & conn);
@ -502,6 +514,12 @@ private:
BuildOutput getBuildOutputCached(Connection & conn, nix::ref<nix::Store> destStore,
const nix::StorePath & drvPath);
/* Returns paths missing from the remote store. Paths are processed in
* parallel to work around the possible latency of remote stores. */
std::map<nix::DrvOutput, std::optional<nix::StorePath>> getMissingRemotePaths(
nix::ref<nix::Store> destStore,
const std::map<nix::DrvOutput, std::optional<nix::StorePath>> & paths);
Step::ptr createStep(nix::ref<nix::Store> store,
Connection & conn, Build::ptr build, const nix::StorePath & drvPath,
Build::ptr referringBuild, Step::ptr referringStep, std::set<nix::StorePath> & finishedDrvs,
@ -531,16 +549,17 @@ private:
void abortUnsupported();
void builder(MachineReservation::ptr reservation);
void builder(std::unique_ptr<MachineReservation> reservation);
/* Perform the given build step. Return true if the step is to be
retried. */
enum StepResult { sDone, sRetry, sMaybeCancelled };
StepResult doBuildStep(nix::ref<nix::Store> destStore,
MachineReservation::ptr reservation,
std::unique_ptr<MachineReservation> reservation,
std::shared_ptr<ActiveStep> activeStep);
void buildRemote(nix::ref<nix::Store> destStore,
std::unique_ptr<MachineReservation> reservation,
Machine::ptr machine, Step::ptr step,
const nix::ServeProto::BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,

View File

@ -238,7 +238,7 @@ sub serveFile {
# XSS hole.
$c->response->header('Content-Security-Policy' => 'sandbox allow-scripts');
$c->stash->{'plain'} = { data => grab(cmd => ["nix", "--experimental-features", "nix-command",
$c->stash->{'plain'} = { data => readIntoSocket(cmd => ["nix", "--experimental-features", "nix-command",
"store", "cat", "--store", getStoreUri(), "$path"]) };
# Detect MIME type.

View File

@ -364,6 +364,21 @@ sub evals_GET {
);
}
sub errors :Chained('jobsetChain') :PathPart('errors') :Args(0) :ActionClass('REST') { }
sub errors_GET {
my ($self, $c) = @_;
$c->stash->{template} = 'eval-error.tt';
my $jobsetName = $c->stash->{params}->{name};
$c->stash->{jobset} = $c->stash->{project}->jobsets->find(
{ name => $jobsetName },
{ '+columns' => { 'errormsg' => 'errormsg' } }
);
$self->status_ok($c, entity => $c->stash->{jobset});
}
# Redirect to the latest finished evaluation of this jobset.
sub latest_eval : Chained('jobsetChain') PathPart('latest-eval') {

View File

@ -86,6 +86,17 @@ sub view_GET {
);
}
sub errors :Chained('evalChain') :PathPart('errors') :Args(0) :ActionClass('REST') { }
sub errors_GET {
my ($self, $c) = @_;
$c->stash->{template} = 'eval-error.tt';
$c->stash->{eval} = $c->model('DB::JobsetEvals')->find($c->stash->{eval}->id, { prefetch => 'evaluationerror' });
$self->status_ok($c, entity => $c->stash->{eval});
}
sub create_jobset : Chained('evalChain') PathPart('create-jobset') Args(0) {
my ($self, $c) = @_;

View File

@ -162,7 +162,7 @@ sub status_GET {
{ "buildsteps.busy" => { '!=', 0 } },
{ order_by => ["globalpriority DESC", "id"],
join => "buildsteps",
columns => [@buildListColumns]
columns => [@buildListColumns, 'buildsteps.drvpath', 'buildsteps.type']
})]
);
}

View File

@ -37,7 +37,16 @@ sub buildDiff {
my $n = 0;
foreach my $build (@{$builds}) {
my $aborted = $build->finished != 0 && ($build->buildstatus == 3 || $build->buildstatus == 4);
my $aborted = $build->finished != 0 && (
# aborted
$build->buildstatus == 3
# cancelled
|| $build->buildstatus == 4
# timeout
|| $build->buildstatus == 7
# log limit exceeded
|| $build->buildstatus == 10
);
my $d;
my $found = 0;
while ($n < scalar(@{$builds2})) {

View File

@ -36,6 +36,7 @@ our @EXPORT = qw(
jobsetOverview
jobsetOverview_
pathIsInsidePrefix
readIntoSocket
readNixFile
registerRoot
restartBuilds
@ -296,8 +297,7 @@ sub getEvals {
my @evals = $evals_result_set->search(
{ hasnewbuilds => 1 },
{ order_by => "$me.id DESC", rows => $rows, offset => $offset
, prefetch => { evaluationerror => [ ] } });
{ order_by => "$me.id DESC", rows => $rows, offset => $offset });
my @res = ();
my $cache = {};
@ -417,6 +417,16 @@ sub pathIsInsidePrefix {
return $cur;
}
sub readIntoSocket {
my (%args) = @_;
my $sock;
eval {
# Open a pipe that streams the command's stdout instead of slurping it into memory.
open($sock, "-|", @{$args{cmd}}) or die qq(failed to open socket from command:\n $!);
};
return $sock;
}
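A hypothetical standalone use of the helper above; the command is a stand-in, not Hydra's actual call site (serveFile streams `nix store cat` through it):

    # Stream a command's stdout line by line instead of slurping it into memory.
    my $fh = readIntoSocket(cmd => ["echo", "hello"]);
    while (defined(my $line = <$fh>)) {
        print $line;
    }
    close($fh);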

View File

@ -0,0 +1,84 @@
# Allow building based on Gitea pull requests.
#
# Example input:
# "pulls": {
# "type": "giteapulls",
# "value": "example.com alice repo"
# "emailresponsible": false
# }
package Hydra::Plugin::GiteaPulls;
use strict;
use warnings;
use parent 'Hydra::Plugin';
use HTTP::Request;
use LWP::UserAgent;
use JSON::MaybeXS;
use Hydra::Helper::CatalystUtils;
use File::Temp;
use POSIX qw(strftime);
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
$inputTypes->{'giteapulls'} = 'Open Gitea Pull Requests';
}
sub _iterate {
my ($url, $auth, $pulls, $ua) = @_;
my $req = HTTP::Request->new('GET', $url);
$req->header('Authorization' => 'token ' . $auth) if defined $auth;
my $res = $ua->request($req);
my $content = $res->decoded_content;
die "Error pulling from the gitea pulls API: $content\n"
unless $res->is_success;
my $pulls_list = decode_json $content;
foreach my $pull (@$pulls_list) {
$pulls->{$pull->{number}} = $pull;
}
# TODO Make Link header parsing more robust!!!
my @links = split ',', ($res->header("Link") // "");
my $next = "";
foreach my $link (@links) {
my ($url, $rel) = split ";", $link;
if (trim($rel) eq 'rel="next"') {
$next = substr trim($url), 1, -1;
last;
}
}
_iterate($next, $auth, $pulls, $ua) unless $next eq "";
}
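The TODO above could be addressed with stricter parsing of RFC 8288-style Link headers; a minimal sketch, untested against Gitea's exact output:

    sub _next_link {
        my ($link_header) = @_;
        # Expect parts like: <https://host/api/v1/...?page=2>; rel="next"
        foreach my $part (split /,\s*/, ($link_header // "")) {
            return $1 if $part =~ /<([^>]+)>\s*;\s*rel="?next"?/;
        }
        return "";
    }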
sub fetchInput {
my ($self, $type, $name, $value, $project, $jobset) = @_;
return undef if $type ne "giteapulls";
my ($baseUrl, $owner, $repo, $proto) = split ' ', $value;
if (not defined $proto) { # the protocol is exposed as an option to allow integration testing
$proto = "https";
}
my $auth = $self->{config}->{gitea_authorization}->{$owner};
my $ua = LWP::UserAgent->new();
my %pulls;
_iterate("$proto://$baseUrl/api/v1/repos/$owner/$repo/pulls?limit=100", $auth, \%pulls, $ua);
my $tempdir = File::Temp->newdir("gitea-pulls" . "XXXXX", TMPDIR => 1);
my $filename = "$tempdir/gitea-pulls.json";
open(my $fh, ">", $filename) or die "Cannot open $filename for writing: $!";
print $fh encode_json \%pulls;
close $fh;
my $storePath = trim(`nix-store --add "$filename"`
or die "cannot copy path $filename to the Nix store.\n");
chomp $storePath;
my $timestamp = time;
return { storePath => $storePath, revision => strftime "%Y%m%d%H%M%S", gmtime($timestamp) };
}
1;

View File

@ -0,0 +1,129 @@
package Hydra::Plugin::GiteaRefs;
use strict;
use warnings;
use parent 'Hydra::Plugin';
use HTTP::Request;
use LWP::UserAgent;
use JSON::MaybeXS;
use Hydra::Helper::CatalystUtils;
use File::Temp;
use POSIX qw(strftime);
=head1 NAME
GiteaRefs - Hydra plugin for retrieving the list of references (branches or
tags) from Gitea following a certain naming scheme
=head1 DESCRIPTION
This plugin reads the list of branches or tags using Gitea's REST API. Only
references whose names start with a configured prefix are included. The list is
stored in the nix-store and used as an input to declarative jobsets.
=head1 CONFIGURATION
The plugin doesn't require any dedicated configuration block, but it consults
the C<gitea_authorization> entry to obtain the API token. In addition, the
declarative project's C<spec.json> file must contain an input such as
"pulls": {
"type": "gitea_refs",
"value": "[gitea_hostname] [owner] [repo] heads|tags [scheme] - [prefix]",
"emailresponsible": false
}
In the above snippet, C<[gitea_hostname]> must be set to the hostname of the
repository's Gitea instance.
C<[owner]> is the repository owner and C<[repo]> is the repository name. Also
note the literal C<->, which is reserved for future use.
C<heads|tags> denotes that exactly one of those two keywords must appear in the
fourth position: with C<heads> the plugin fetches all branches, while with
C<tags> it fetches the tags.
C<scheme> should be set to either https or http, depending on what the Gitea
host supports.
C<prefix> denotes the prefix the reference name must start with, in order to be
included.
For example, C<"value": "projects.blender.org blender blender heads https - blender-v/"> refers to
L<https://projects.blender.org/blender/blender> repository, and will fetch all branches that
begin with C<blender-v/>.
=head1 USE
The result is stored in the nix-store as a JSON I<map>, where the key is the
name of the reference, while the value is the complete Gitea response. Thus,
any of the values listed in
L<https://docs.gitea.com/api#tag/repository/operation/repoListAllGitRefs> can be
used to build the git input value in C<jobsets.nix>.
=cut
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
$inputTypes->{'gitea_refs'} = 'Gitea Refs';
}
sub _iterate {
my ($url, $auth, $refs, $ua) = @_;
my $req = HTTP::Request->new('GET', $url);
$req->header('Accept' => 'application/json');
$req->header('Authorization' => $auth) if defined $auth;
my $res = $ua->request($req);
my $content = $res->decoded_content;
die "Error pulling from the gitea refs API: $content\n"
unless $res->is_success;
my $refs_list = decode_json $content;
# TODO Stream out the json instead
foreach my $ref (@$refs_list) {
my $ref_name = $ref->{ref};
$ref_name =~ s,^refs/(?:heads|tags)/,,o;
$refs->{$ref_name} = $ref;
}
# TODO Make Link header parsing more robust!!!
my @links = split ',', ($res->header("Link") // "");
my $next = "";
foreach my $link (@links) {
my ($url, $rel) = split ";", $link;
if (trim($rel) eq 'rel="next"') {
$next = substr trim($url), 1, -1;
last;
}
}
_iterate($next, $auth, $refs, $ua) unless $next eq "";
}
sub fetchInput {
my ($self, $input_type, $name, $value, $project, $jobset) = @_;
return undef if $input_type ne "gitea_refs";
my ($giteaHostname, $owner, $repo, $type, $scheme, $fut, $prefix) = split ' ', $value;
die "type field is neither 'heads' nor 'tags', but '$type'"
unless $type eq 'heads' or $type eq 'tags';
die "scheme field is neither 'https' nor 'http' but '$scheme'"
unless $scheme eq 'https' or $scheme eq 'http';
my $auth = $self->{config}->{gitea_authorization}->{$owner};
my $giteaEndpoint = "$scheme://$giteaHostname";
my %refs;
my $ua = LWP::UserAgent->new();
_iterate("$giteaEndpoint/api/v1/repos/$owner/$repo/git/refs/$type/$prefix?per_page=100", $auth, \%refs, $ua);
my $tempdir = File::Temp->newdir("gitea-refs" . "XXXXX", TMPDIR => 1);
my $filename = "$tempdir/gitea-refs.json";
open(my $fh, ">", $filename) or die "Cannot open $filename for writing: $!";
print $fh encode_json \%refs;
close $fh;
system("jq -S . < $filename > $tempdir/gitea-refs-sorted.json");
my $storePath = trim(qx{nix-store --add "$tempdir/gitea-refs-sorted.json"}
or die "cannot copy path $tempdir/gitea-refs-sorted.json to the Nix store.\n");
chomp $storePath;
my $timestamp = time;
return { storePath => $storePath, revision => strftime "%Y%m%d%H%M%S", gmtime($timestamp) };
}
1;

View File

@ -105,4 +105,6 @@ __PACKAGE__->add_column(
"+id" => { retrieve_on_insert => 1 }
);
__PACKAGE__->mk_group_accessors('column' => 'has_error');
1;

View File

@ -386,6 +386,8 @@ __PACKAGE__->add_column(
"+id" => { retrieve_on_insert => 1 }
);
__PACKAGE__->mk_group_accessors('column' => 'has_error');
sub supportsDynamicRunCommand {
my ($self) = @_;

View File

@ -0,0 +1,30 @@
package Hydra::Schema::ResultSet::EvaluationErrors;
use strict;
use utf8;
use warnings;
use parent 'DBIx::Class::ResultSet';
use Storable qw(dclone);
__PACKAGE__->load_components('Helper::ResultSet::RemoveColumns');
# Exclude expensive error message values unless explicitly requested, and
# replace them with a summary field describing their presence/absence.
sub search_rs {
my ( $class, $query, $attrs ) = @_;
if ($attrs) {
$attrs = dclone($attrs);
}
unless (exists $attrs->{'select'} || exists $attrs->{'columns'}) {
$attrs->{'+columns'}->{'has_error'} = "errormsg != ''";
}
unless (exists $attrs->{'+columns'}->{'errormsg'}) {
push @{ $attrs->{'remove_columns'} }, 'errormsg';
}
return $class->next::method($query, $attrs);
}
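Callers opt back in by naming the column explicitly in C<+columns>, which disables the C<remove_columns> branch above. A hypothetical example; C<$schema> and C<$id> are assumptions:

    my $error = $schema->resultset('EvaluationErrors')->search(
        { id => $id },
        # Explicitly requesting errormsg keeps the heavy column in the query.
        { '+columns' => { 'errormsg' => 'errormsg' } },
    )->single;
    print $error->errormsg if $error->has_error;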

View File

@ -0,0 +1,30 @@
package Hydra::Schema::ResultSet::Jobsets;
use strict;
use utf8;
use warnings;
use parent 'DBIx::Class::ResultSet';
use Storable qw(dclone);
__PACKAGE__->load_components('Helper::ResultSet::RemoveColumns');
# Exclude expensive error message values unless explicitly requested, and
# replace them with a summary field describing their presence/absence.
sub search_rs {
my ( $class, $query, $attrs ) = @_;
if ($attrs) {
$attrs = dclone($attrs);
}
unless (exists $attrs->{'select'} || exists $attrs->{'columns'}) {
$attrs->{'+columns'}->{'has_error'} = "errormsg != ''";
}
unless (exists $attrs->{'+columns'}->{'errormsg'}) {
push @{ $attrs->{'remove_columns'} }, 'errormsg';
}
return $class->next::method($query, $attrs);
}

View File

@ -6,6 +6,7 @@ use base 'Catalyst::View::TT';
use Template::Plugin::HTML;
use Hydra::Helper::Nix;
use Time::Seconds;
use Digest::SHA qw(sha1_hex);
__PACKAGE__->config(
TEMPLATE_EXTENSION => '.tt',
@ -25,8 +26,14 @@ __PACKAGE__->config(
makeNameTextForJobset
relativeDuration
stripSSHUser
metricDivId
/]);
sub metricDivId {
my ($self, $c, $text) = @_;
return "metric-" . sha1_hex($text);
}
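Hashing sidesteps sanitization entirely: any metric name, whatever characters it contains, maps to a stable, CSS-safe element id. A small illustration with a made-up metric name:

    use Digest::SHA qw(sha1_hex);
    # "disk.used (GiB)" would have defeated the old dot-replacement scheme.
    print "metric-" . sha1_hex("disk.used (GiB)"), "\n";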
sub buildLogExists {
my ($self, $c, $build) = @_;
return 1 if defined $c->config->{log_prefix};

View File

@ -2,8 +2,8 @@
#include <pqxx/pqxx>
#include "environment-variables.hh"
#include "util.hh"
#include <nix/util/environment-variables.hh>
#include <nix/util/util.hh>
struct Connection : pqxx::connection

View File

@ -2,8 +2,8 @@
#include <map>
#include "file-system.hh"
#include "util.hh"
#include <nix/util/file-system.hh>
#include <nix/util/util.hh>
struct HydraConfig
{

View File

@ -61,21 +61,7 @@ END;
<td>[% IF step.busy != 0 || ((step.machine || step.starttime) && (step.status == 0 || step.status == 1 || step.status == 3 || step.status == 4 || step.status == 7)); INCLUDE renderMachineName machine=step.machine; ELSE; "<em>n/a</em>"; END %]</td>
<td class="step-status">
[% IF step.busy != 0 %]
[% IF step.busy == 1 %]
<strong>Preparing</strong>
[% ELSIF step.busy == 10 %]
<strong>Connecting</strong>
[% ELSIF step.busy == 20 %]
<strong>Sending inputs</strong>
[% ELSIF step.busy == 30 %]
<strong>Building</strong>
[% ELSIF step.busy == 40 %]
<strong>Receiving outputs</strong>
[% ELSIF step.busy == 50 %]
<strong>Post-processing</strong>
[% ELSE %]
<strong>Unknown state</strong>
[% END %]
[% INCLUDE renderBusyStatus %]
[% ELSIF step.status == 0 %]
[% IF step.isnondeterministic %]
<span class="warn">Succeeded with non-determistic result</span>

View File

@ -91,6 +91,17 @@ BLOCK renderDuration;
duration % 60 %]s[%
END;
BLOCK renderDrvInfo;
drvname = step.drvpath
.substr(11) # strip `/nix/store/`
.split('-').slice(1).join("-") # strip hash part
.substr(0, -4); # strip `.drv`
IF drvname != releasename;
IF step.type == 0; action = "Build"; ELSE; action = "Substitution"; END;
IF drvname; %]<em> ([% action %] of [% drvname %])</em>[% END;
END;
END;
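The string surgery in renderDrvInfo is easier to follow in plain Perl; a sketch with a hypothetical store path:

    my $drvpath = "/nix/store/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-hello-2.12.drv"; # hypothetical
    (my $drvname = $drvpath) =~ s{^/nix/store/[^-]+-}{};  # strip store dir and hash part
    $drvname =~ s/\.drv$//;                               # strip the .drv suffix
    print "$drvname\n";                                   # prints "hello-2.12"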
BLOCK renderBuildListHeader %]
<table class="table table-striped table-condensed clickable-rows">
@ -131,7 +142,12 @@ BLOCK renderBuildListBody;
[% END %]
<td><a class="row-link" href="[% link %]">[% build.id %]</a></td>
[% IF !hideJobName %]
<td><a href="[%link%]">[% IF !hideJobsetName %][%build.jobset.get_column("project")%]:[%build.jobset.get_column("name")%]:[% END %][%build.get_column("job")%]</td>
<td>
<a href="[%link%]">[% IF !hideJobsetName %][%build.jobset.get_column("project")%]:[%build.jobset.get_column("name")%]:[% END %][%build.get_column("job")%]</a>
[% IF showStepName %]
[% INCLUDE renderDrvInfo step=build.buildsteps releasename=build.nixname %]
[% END %]
</td>
[% END %]
<td class="nowrap">[% t = showSchedulingInfo ? build.timestamp : build.stoptime; IF t; INCLUDE renderRelativeDate timestamp=(showSchedulingInfo ? build.timestamp : build.stoptime); ELSE; "-"; END %]</td>
<td>[% !showSchedulingInfo and build.get_column('releasename') ? build.get_column('releasename') : build.nixname %]</td>
@ -245,6 +261,27 @@ BLOCK renderBuildStatusIcon;
END;
BLOCK renderBusyStatus;
IF step.busy == 1 %]
<strong>Preparing</strong>
[% ELSIF step.busy == 10 %]
<strong>Connecting</strong>
[% ELSIF step.busy == 20 %]
<strong>Sending inputs</strong>
[% ELSIF step.busy == 30 %]
<strong>Building</strong>
[% ELSIF step.busy == 35 %]
<strong>Waiting to receive outputs</strong>
[% ELSIF step.busy == 40 %]
<strong>Receiving outputs</strong>
[% ELSIF step.busy == 50 %]
<strong>Post-processing</strong>
[% ELSE %]
<strong>Unknown state</strong>
[% END;
END;
BLOCK renderStatus;
IF build.finished;
buildstatus = build.buildstatus;
@ -476,7 +513,7 @@ BLOCK renderEvals %]
ELSE %]
-
[% END %]
[% IF eval.evaluationerror.errormsg %]
[% IF eval.evaluationerror.has_error %]
<span class="badge badge-warning">Eval Errors</span>
[% END %]
</td>
@ -602,7 +639,7 @@ BLOCK renderJobsetOverview %]
<td>[% HTML.escape(j.description) %]</td>
<td>[% IF j.lastcheckedtime;
INCLUDE renderDateTime timestamp = j.lastcheckedtime;
IF j.errormsg || j.fetcherrormsg; %]&nbsp;<span class = 'badge badge-warning'>Error</span>[% END;
IF j.has_error || j.fetcherrormsg; %]&nbsp;<span class = 'badge badge-warning'>Error</span>[% END;
ELSE; "-";
END %]</td>
[% IF j.get_column('nrtotal') > 0 %]

src/root/eval-error.tt Normal file
View File

@ -0,0 +1,26 @@
[% PROCESS common.tt %]
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
[% INCLUDE style.tt %]
</head>
<body>
<div class="tab-content tab-pane">
<div id="tabs-errors" class="">
[% IF jobset %]
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(jobset.errortime || jobset.lastcheckedtime) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(jobset.fetcherrormsg || jobset.errormsg) %]</pre></div></div>
[% ELSIF eval %]
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(eval.evaluationerror.errortime || eval.timestamp) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(eval.evaluationerror.errormsg) %]</pre></div></div>
[% END %]
</div>
</div>
</body>
</html>

View File

@ -18,8 +18,7 @@
<h3>Metric: <a [% HTML.attributes(href => c.uri_for('/job' project.name jobset.name job 'metric' metric.name)) %]><tt>[%HTML.escape(metric.name)%]</tt></a></h3>
[% id = "metric-" _ metric.name;
id = id.replace('\.', '_');
[% id = metricDivId(metric.name);
INCLUDE createChart dataUrl=c.uri_for('/job' project.name jobset.name job 'metric' metric.name); %]
[% END %]

View File

@ -65,7 +65,7 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
[% END %]
[% IF aborted.size > 0 %]
<li class="nav-item"><a class="nav-link" href="#tabs-aborted" data-toggle="tab"><span class="text-warning">Aborted Jobs ([% aborted.size %])</span></a></li>
<li class="nav-item"><a class="nav-link" href="#tabs-aborted" data-toggle="tab"><span class="text-warning">Aborted / Timed out Jobs ([% aborted.size %])</span></a></li>
[% END %]
[% IF nowFail.size > 0 %]
<li class="nav-item"><a class="nav-link" href="#tabs-now-fail" data-toggle="tab"><span class="text-warning">Newly Failing Jobs ([% nowFail.size %])</span></a></li>
@ -90,7 +90,7 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
[% END %]
<li class="nav-item"><a class="nav-link" href="#tabs-inputs" data-toggle="tab">Inputs</a></li>
[% IF eval.evaluationerror.errormsg %]
[% IF eval.evaluationerror.has_error %]
<li class="nav-item"><a class="nav-link" href="#tabs-errors" data-toggle="tab"><span class="text-warning">Evaluation Errors</span></a></li>
[% END %]
</ul>
@ -108,13 +108,6 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
<div class="tab-content">
[% IF eval.evaluationerror.errormsg %]
<div id="tabs-errors" class="tab-pane">
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(eval.evaluationerror.errortime || eval.timestamp) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(eval.evaluationerror.errormsg) %]</pre></div></div>
</div>
[% END %]
<div id="tabs-aborted" class="tab-pane">
[% INCLUDE renderSome builds=aborted tabname="#tabs-aborted" %]
</div>
@ -172,10 +165,9 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
[% END %]
</div>
[% IF eval.evaluationerror.errormsg %]
[% IF eval.evaluationerror.has_error %]
<div id="tabs-errors" class="tab-pane">
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(eval.evaluationerror.errortime || eval.timestamp) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(eval.evaluationerror.errormsg) %]</pre></div></div>
<iframe src="[% c.uri_for(c.controller('JobsetEval').action_for('errors'), [eval.id], params) %]" loading="lazy" frameBorder="0" width="100%"></iframe>
</div>
[% END %]
</div>

View File

@ -61,7 +61,7 @@
[% END %]
<li class="nav-item"><a class="nav-link active" href="#tabs-evaluations" data-toggle="tab">Evaluations</a></li>
[% IF jobset.errormsg || jobset.fetcherrormsg %]
[% IF jobset.has_error || jobset.fetcherrormsg %]
<li class="nav-item"><a class="nav-link" href="#tabs-errors" data-toggle="tab"><span class="text-warning">Evaluation Errors</span></a></li>
[% END %]
<li class="nav-item"><a class="nav-link" href="#tabs-jobs" data-toggle="tab">Jobs</a></li>
@ -79,7 +79,7 @@
<th>Last checked:</th>
<td>
[% IF jobset.lastcheckedtime %]
[% INCLUDE renderDateTime timestamp = jobset.lastcheckedtime %], [% IF jobset.errormsg || jobset.fetcherrormsg %]<em class="text-warning">with errors!</em>[% ELSE %]<em>no errors</em>[% END %]
[% INCLUDE renderDateTime timestamp = jobset.lastcheckedtime %], [% IF jobset.has_error || jobset.fetcherrormsg %]<em class="text-warning">with errors!</em>[% ELSE %]<em>no errors</em>[% END %]
[% ELSE %]
<em>never</em>
[% END %]
@ -117,10 +117,9 @@
</div>
[% IF jobset.errormsg || jobset.fetcherrormsg %]
[% IF jobset.has_error || jobset.fetcherrormsg %]
<div id="tabs-errors" class="tab-pane">
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(jobset.errortime || jobset.lastcheckedtime) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(jobset.fetcherrormsg || jobset.errormsg) %]</pre></div></div>
<iframe src="[% c.uri_for('/jobset' project.name jobset.name "errors") %]" loading="lazy" frameBorder="0" width="100%"></iframe>
</div>
[% END %]

View File

@ -10,31 +10,7 @@
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-3.4.1.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-ui-1.10.4.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/moment/moment-2.24.0.min.js") %]"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link href="[% c.uri_for("/static/fontawesome/css/all.css") %]" rel="stylesheet" />
<script type="text/javascript" src="[% c.uri_for("/static/js/popper.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/bootstrap/js/bootstrap.min.js") %]"></script>
<link href="[% c.uri_for("/static/bootstrap/css/bootstrap.min.css") %]" rel="stylesheet" />
<!-- hydra.css may need to be moved to before bootstrap to make the @media rule work. -->
<link rel="stylesheet" href="[% c.uri_for("/static/css/hydra.css") %]" type="text/css" />
<link rel="stylesheet" href="[% c.uri_for("/static/css/rotated-th.css") %]" type="text/css" />
<style>
.popover { max-width: 40%; }
</style>
<script type="text/javascript" src="[% c.uri_for("/static/js/bootbox.min.js") %]"></script>
<link rel="stylesheet" href="[% c.uri_for("/static/css/tree.css") %]" type="text/css" />
<script type="text/javascript" src="[% c.uri_for("/static/js/common.js") %]"></script>
[% INCLUDE style.tt %]
[% IF c.config.enable_google_login %]
<meta name="google-signin-client_id" content="[% c.config.google_client_id %]">

View File

@ -6,10 +6,10 @@
<thead>
<tr>
<th>Job</th>
<th>System</th>
<th>Build</th>
<th>Step</th>
<th>What</th>
<th>Status</th>
<th>Since</th>
</tr>
</thead>
@ -40,10 +40,10 @@
[% idle = 0 %]
<tr>
<td><tt>[% INCLUDE renderFullJobName project=step.project jobset=step.jobset job=step.job %]</tt></td>
<td><tt>[% step.system %]</tt></td>
<td><a href="[% c.uri_for('/build' step.build) %]">[% step.build %]</a></td>
<td>[% IF step.busy >= 30 %]<a class="row-link" href="[% c.uri_for('/build' step.build 'nixlog' step.stepnr 'tail') %]">[% step.stepnr %]</a>[% ELSE; step.stepnr; END %]</td>
<td><tt>[% step.drvpath.match('-(.*)').0 %]</tt></td>
<td>[% INCLUDE renderBusyStatus %]</td>
<td style="width: 10em">[% INCLUDE renderDuration duration = curTime - step.starttime %] </td>
</tr>
[% END %]

View File

@ -129,6 +129,12 @@ $(document).ready(function() {
el.addClass("is-local");
}
});
[...document.getElementsByTagName("iframe")].forEach((element) => {
element.contentWindow.addEventListener("DOMContentLoaded", (_) => {
element.style.height = element.contentWindow.document.body.scrollHeight + 'px';
})
})
});
var tabsLoaded = {};

View File

@ -7,7 +7,7 @@
[% ELSE %]
[% INCLUDE renderBuildList builds=resource showSchedulingInfo=1 hideResultInfo=1 busy=1 %]
[% INCLUDE renderBuildList builds=resource showSchedulingInfo=1 hideResultInfo=1 busy=1 showStepName=1 %]
[% END %]

src/root/style.tt Normal file
View File

@ -0,0 +1,24 @@
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-3.4.1.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-ui-1.10.4.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/moment/moment-2.24.0.min.js") %]"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link href="[% c.uri_for("/static/fontawesome/css/all.css") %]" rel="stylesheet" />
<script type="text/javascript" src="[% c.uri_for("/static/js/popper.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/bootstrap/js/bootstrap.min.js") %]"></script>
<link href="[% c.uri_for("/static/bootstrap/css/bootstrap.min.css") %]" rel="stylesheet" />
<!-- hydra.css may need to be moved to before bootstrap to make the @media rule work. -->
<link rel="stylesheet" href="[% c.uri_for("/static/css/hydra.css") %]" type="text/css" />
<link rel="stylesheet" href="[% c.uri_for("/static/css/rotated-th.css") %]" type="text/css" />
<style>
.popover { max-width: 40%; }
</style>
<script type="text/javascript" src="[% c.uri_for("/static/js/bootbox.min.js") %]"></script>
<link rel="stylesheet" href="[% c.uri_for("/static/css/tree.css") %]" type="text/css" />
<script type="text/javascript" src="[% c.uri_for("/static/js/common.js") %]"></script>

View File

@ -773,6 +773,9 @@ sub checkJobsetWrapped {
my $jobsetChanged = 0;
my %buildMap;
my @jobs;
push @jobs, $_ while defined($_ = $jobsIter->());
$db->txn_do(sub {
my $prevEval = getPrevJobsetEval($db, $jobset, 1);
@ -796,7 +799,7 @@ sub checkJobsetWrapped {
my @jobsWithConstituents;
while (defined(my $job = $jobsIter->())) {
foreach my $job (@jobs) {
if ($jobsetsJobset) {
die "The .jobsets jobset must only have a single job named 'jobsets'"
unless $job->{attr} eq "jobsets";

View File

@ -32,4 +32,9 @@ subtest "/jobset/PROJECT/JOBSET/evals" => sub {
ok($jobsetevals->is_success, "The page showing the jobset evals returns 200.");
};
subtest "/jobset/PROJECT/JOBSET/errors" => sub {
my $jobsetevals = request(GET '/jobset/' . $project->name . '/' . $jobset->name . '/errors');
ok($jobsetevals->is_success, "The page showing the jobset eval errors returns 200.");
};
done_testing;

View File

@ -35,6 +35,10 @@ subtest "Fetching the eval's overview" => sub {
is($fetch->code, 200, "channel page is 200");
};
subtest "Fetching the eval's overview" => sub {
my $fetch = request(GET '/eval/' . $eval->id, '/errors');
is($fetch->code, 200, "errors page is 200");
};
done_testing;

View File

@ -6,27 +6,55 @@ use Hydra::Helper::Exec;
my $ctx = test_context();
my $jobsetCtx = $ctx->makeJobset(
subtest "broken constituents expression" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-broken.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
isnt($res, 0, "hydra-eval-jobset exits non-zero");
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
like(
);
isnt($res, 0, "hydra-eval-jobset exits non-zero");
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
like(
$stderr,
qr/aggregate job mixed_aggregate failed with the error: "constituentA": does not exist/,
qr/aggregate job 'mixed_aggregate' references non-existent job 'constituentA'/,
"The stderr record includes a relevant error message"
);
);
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
qr/aggregate job mixed_aggregate failed with the error: "constituentA": does not exist/,
qr/aggregate job mixed_aggregate failed with the error: constituentA: does not exist/,
"The jobset records a relevant error message"
);
);
};
subtest "no matches" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-no-matches.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
isnt($res, 0, "hydra-eval-jobset exits non-zero");
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
like(
$stderr,
qr/aggregate job 'non_match_aggregate' references constituent glob pattern 'tests\.\*' with no matches/,
"The stderr record includes a relevant error message"
);
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
qr/aggregate job non_match_aggregate failed with the error: tests\.\*: constituent glob pattern had no matches/,
qr/in job non_match_aggregate:\ntests\.\*: constituent glob pattern had no matches/,
"The jobset records a relevant error message"
);
};
done_testing;

View File

@ -0,0 +1,138 @@
use strict;
use warnings;
use Setup;
use Test2::V0;
use Hydra::Helper::Exec;
use Data::Dumper;
my $ctx = test_context();
subtest "general glob testing" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-glob.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
is($res, 0, "hydra-eval-jobset exits zero");
my $builds = {};
for my $build ($jobset->builds) {
$builds->{$build->job} = $build;
}
subtest "basic globbing works" => sub {
ok(defined $builds->{"ok_aggregate"}, "'ok_aggregate' is part of the jobset evaluation");
my @constituents = $builds->{"ok_aggregate"}->constituents->all;
is(2, scalar @constituents, "'ok_aggregate' has two constituents");
my @sortedConstituentNames = sort (map { $_->nixname } @constituents);
is($sortedConstituentNames[0], "empty-dir-A", "first constituent of 'ok_aggregate' is 'empty-dir-A'");
is($sortedConstituentNames[1], "empty-dir-B", "second constituent of 'ok_aggregate' is 'empty-dir-B'");
};
subtest "transitivity is OK" => sub {
ok(defined $builds->{"indirect_aggregate"}, "'indirect_aggregate' is part of the jobset evaluation");
my @constituents = $builds->{"indirect_aggregate"}->constituents->all;
is(1, scalar @constituents, "'indirect_aggregate' has one constituent");
is($constituents[0]->nixname, "direct_aggregate", "'indirect_aggregate' has 'direct_aggregate' as single constituent");
};
};
subtest "* selects all except current aggregate" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-glob-all.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
subtest "no eval errors" => sub {
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
ok(
$stderr !~ "aggregate job ok_aggregate has a constituent .* that doesn't correspond to a Hydra build",
"Catchall wildcard must not select itself as constituent"
);
$jobset->discard_changes; # refresh from DB
is(
$jobset->has_error,
0,
"eval-errors non-empty"
);
};
my $builds = {};
for my $build ($jobset->builds) {
$builds->{$build->job} = $build;
}
subtest "two constituents" => sub {
ok(defined $builds->{"ok_aggregate"}, "'ok_aggregate' is part of the jobset evaluation");
my @constituents = $builds->{"ok_aggregate"}->constituents->all;
is(2, scalar @constituents, "'ok_aggregate' has two constituents");
my @sortedConstituentNames = sort (map { $_->nixname } @constituents);
is($sortedConstituentNames[0], "empty-dir-A", "first constituent of 'ok_aggregate' is 'empty-dir-A'");
is($sortedConstituentNames[1], "empty-dir-B", "second constituent of 'ok_aggregate' is 'empty-dir-B'");
};
};
subtest "trivial cycle check" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-cycle.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
ok(
$stderr =~ "Found dependency cycle between jobs 'indirect_aggregate' and 'ok_aggregate'",
"Dependency cycle error is on stderr"
);
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
qr/Dependency cycle: indirect_aggregate <-> ok_aggregate/,
"eval-errors non-empty"
);
is(0, $jobset->builds->count, "No builds should be scheduled");
};
subtest "cycle check with globbing" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-cycle-glob.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
qr/aggregate job indirect_aggregate failed with the error: Dependency cycle: indirect_aggregate <-> packages.constituentA/,
"packages.constituentA error missing"
);
# on this branch of Hydra, hydra-eval-jobset fails hard if an aggregate
# job is broken.
is(0, $jobset->builds->count, "Zero jobs are scheduled");
};
done_testing;

t/jobs/config.nix Normal file
View File

@ -0,0 +1,14 @@
rec {
path = "/nix/store/l9mg93sgx50y88p5rr6x1vib6j1rjsds-coreutils-9.1/bin";
mkDerivation = args:
derivation ({
system = builtins.currentSystem;
PATH = path;
} // args);
mkContentAddressedDerivation = args: mkDerivation ({
__contentAddressed = true;
outputHashMode = "recursive";
outputHashAlgo = "sha256";
} // args);
}

View File

@ -0,0 +1,34 @@
with import ./config.nix;
{
packages.constituentA = mkDerivation {
name = "empty-dir-A";
builder = ./empty-dir-builder.sh;
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [ "*_aggregate" ];
};
packages.constituentB = mkDerivation {
name = "empty-dir-B";
builder = ./empty-dir-builder.sh;
};
ok_aggregate = mkDerivation {
name = "direct_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"packages.*"
];
builder = ./empty-dir-builder.sh;
};
indirect_aggregate = mkDerivation {
name = "indirect_aggregate";
_hydraAggregate = true;
constituents = [
"ok_aggregate"
];
builder = ./empty-dir-builder.sh;
};
}

View File

@ -0,0 +1,21 @@
with import ./config.nix;
{
ok_aggregate = mkDerivation {
name = "direct_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"indirect_aggregate"
];
builder = ./empty-dir-builder.sh;
};
indirect_aggregate = mkDerivation {
name = "indirect_aggregate";
_hydraAggregate = true;
constituents = [
"ok_aggregate"
];
builder = ./empty-dir-builder.sh;
};
}

View File

@ -0,0 +1,22 @@
with import ./config.nix;
{
packages.constituentA = mkDerivation {
name = "empty-dir-A";
builder = ./empty-dir-builder.sh;
};
packages.constituentB = mkDerivation {
name = "empty-dir-B";
builder = ./empty-dir-builder.sh;
};
ok_aggregate = mkDerivation {
name = "direct_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"*"
];
builder = ./empty-dir-builder.sh;
};
}

View File

@ -0,0 +1,31 @@
with import ./config.nix;
{
packages.constituentA = mkDerivation {
name = "empty-dir-A";
builder = ./empty-dir-builder.sh;
};
packages.constituentB = mkDerivation {
name = "empty-dir-B";
builder = ./empty-dir-builder.sh;
};
ok_aggregate = mkDerivation {
name = "direct_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"packages.*"
];
builder = ./empty-dir-builder.sh;
};
indirect_aggregate = mkDerivation {
name = "indirect_aggregate";
_hydraAggregate = true;
constituents = [
"ok_aggregate"
];
builder = ./empty-dir-builder.sh;
};
}

View File

@ -0,0 +1,20 @@
with import ./config.nix;
{
non_match_aggregate = mkDerivation {
name = "mixed_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"tests.*"
];
builder = ./empty-dir-builder.sh;
};
# Without a second job, no evaluation would be created at all
# (the only job would be broken), so the constituent validation
# would never be reached.
dummy = mkDerivation {
name = "dummy";
builder = ./empty-dir-builder.sh;
};
}

View File

@ -0,0 +1,24 @@
{
"enabled": 1,
"hidden": false,
"description": "declarative-jobset-example",
"nixexprinput": "src",
"nixexprpath": "declarative/generator.nix",
"checkinterval": 300,
"schedulingshares": 100,
"enableemail": false,
"emailoverride": "",
"keepnr": 3,
"inputs": {
"src": {
"type": "path",
"value": "/home/ma27/Projects/hydra-cppnix/t/jobs",
"emailresponsible": false
},
"jobspath": {
"type": "string",
"value": "/home/ma27/Projects/hydra-cppnix/t/jobs",
"emailresponsible": false
}
}
}

View File

@ -14,7 +14,7 @@ our @EXPORT = qw(
sub evalSucceeds {
my ($jobset) = @_;
my ($res, $stdout, $stderr) = captureStdoutStderr(60, ("hydra-eval-jobset", $jobset->project->name, $jobset->name));
$jobset->discard_changes; # refresh from DB
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
if ($res) {
chomp $stdout; chomp $stderr;
utf8::decode($stdout) or die "Invalid unicode in stdout.";
@ -29,7 +29,7 @@ sub evalSucceeds {
sub evalFails {
my ($jobset) = @_;
my ($res, $stdout, $stderr) = captureStdoutStderr(60, ("hydra-eval-jobset", $jobset->project->name, $jobset->name));
$jobset->discard_changes; # refresh from DB
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
if (!$res) {
chomp $stdout; chomp $stderr;
utf8::decode($stdout) or die "Invalid unicode in stdout.";

View File

@ -22,11 +22,11 @@ is(nrQueuedBuildsForJobset($jobset), 0, "Evaluating jobs/broken-constituent.nix
like(
$jobset->errormsg,
qr/^"does-not-exist": does not exist$/m,
qr/^does-not-exist: does not exist$/m,
"Evaluating jobs/broken-constituent.nix should log an error for does-not-exist");
like(
$jobset->errormsg,
qr/^"does-not-evaluate": "error: assertion 'false' failed/m,
qr/^does-not-evaluate: error: assertion 'false' failed/m,
"Evaluating jobs/broken-constituent.nix should log an error for does-not-evaluate");
done_testing;

View File

@ -13,7 +13,7 @@ my $constituentBuildA = $builds->{"constituentA"};
my $constituentBuildB = $builds->{"constituentB"};
my $eval = $constituentBuildA->jobsetevals->first();
is($eval->evaluationerror->errormsg, "");
is($eval->evaluationerror->has_error, 0);
subtest "Verifying the direct aggregate" => sub {
my $aggBuild = $builds->{"direct_aggregate"};