Commit Graph

4198 Commits

54b8cb188e perl: jobsetevals -> jobset via jobset_id
Frankly, this was suspiciously little work.
2021-01-26 13:51:39 -05:00
54341cd9f6 hydra-evaluator: deal in jobset IDs 2021-01-26 13:51:31 -05:00
cb01859718 hydra-evaluator: JobsetName -> JobsetIdentity 2021-01-26 11:50:38 -05:00
705a45df2b hydra-evaluator: reformat readJobsets query 2021-01-26 11:50:37 -05:00
ac3e8a4a59 jobsetevals: refer to jobset by ID 2021-01-26 11:50:37 -05:00
99e3c83358 JobsetEvals: noop: re-run the generator to update the order of fields 2021-01-26 11:50:36 -05:00
bf674a9653 hydra.sql: embed some in-line docs about schema changes 2021-01-26 11:50:36 -05:00
dc5a0d59c5 sql: Stop loading SQL if an error occurs
Otherwise we may go ahead and create DBIx classes for a half-loaded schema.
2021-01-26 11:50:32 -05:00
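The commit message doesn't show the mechanism, but if the schema is loaded through psql, the standard way to abort on the first error is ON_ERROR_STOP; a minimal sketch, assuming that's the loading path:

    -- Hypothetical sketch, not necessarily this commit's exact change.
    -- From the shell:
    --     psql -v ON_ERROR_STOP=1 -f hydra.sql hydra
    -- Or as a psql meta-command at the top of the SQL file:
    \set ON_ERROR_STOP on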
d0b3f2dac4 Merge pull request #848 from grahamc/normalize-nixexprinputpath
Normalize nixexpr{input,path} from builds to jobsetevals.
2021-01-25 15:22:01 +01:00
9516b256f1 Normalize nixexpr{input,path} from builds to jobsetevals.
Duplicating this data on every record of the builds table cost
approximately 4 GB in duplicated data.

Note that the database migration included took about 4h45m on an
untuned server which uses very slow rotational disks in a RAID5 setup,
with not a lot of RAM. I imagine in production it might take an hour
or two, but not 4. If this should become a chunked migration, I can do
that.

Note: Because of the question about chunked migrations, I have NOT
YET tested this migration thoroughly enough for merge.
2021-01-22 09:10:18 -05:00
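A normalization migration of this shape would add the columns to jobsetevals, backfill them from builds, and drop the duplicates; a sketch, with column and member-table names assumed rather than taken from the actual upgrade script:

    -- Illustrative only; names are assumptions, not the shipped migration.
    ALTER TABLE JobsetEvals
        ADD COLUMN nixExprInput text,
        ADD COLUMN nixExprPath  text;

    -- Backfill each eval from one of its builds.
    UPDATE JobsetEvals e
    SET nixExprInput = b.nixExprInput,
        nixExprPath  = b.nixExprPath
    FROM Builds b
    JOIN JobsetEvalMembers m ON m.build = b.id
    WHERE m.eval = e.id;

    ALTER TABLE Builds DROP COLUMN nixExprInput, DROP COLUMN nixExprPath;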
53c2fc2216 Merge pull request #847 from grahamc/jobsetevals-evaluation-errors
JobsetEvals: record evaluation errors
2021-01-22 15:08:22 +01:00
bd99052a6f tests: create database with the utf-8 locale
Otherwise tests may fail with wide character errors.
2021-01-21 17:08:05 -05:00
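One way to do that in PostgreSQL (illustrative, not necessarily the test harness's exact invocation) is to force the encoding at creation time, using template0 so the cluster default isn't inherited:

    -- Illustrative; database name and locale are assumptions.
    CREATE DATABASE hydra_test
        TEMPLATE template0
        ENCODING 'UTF8'
        LC_COLLATE 'C.UTF-8'
        LC_CTYPE   'C.UTF-8';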
c64c4aac4f jobset page: render error labels per eval 2021-01-21 17:08:02 -05:00
805dd6e7ee Evaluation page: render evaluation errors 2021-01-21 13:11:05 -05:00
086eed5147 hydra-eval-jobs: write evaluation errorMsg to the jobseteval table 2021-01-21 13:10:41 -05:00
fb6b10a86c gitignore: artifacts 2021-01-21 13:10:41 -05:00
d9989b7fa1 Schema: add errorMsg, errorTime to JobsetEvals 2021-01-21 13:10:41 -05:00
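The DDL for such a change would be a simple pair of nullable columns; a sketch with types assumed (an epoch-style integer for errorTime matches how Hydra stores other timestamps):

    -- Sketch; exact types are assumptions.
    ALTER TABLE JobsetEvals
        ADD COLUMN errorMsg  text,      -- evaluation error message, if any
        ADD COLUMN errorTime integer;   -- when the error was recorded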
6bb876cb35 Merge pull request #846 from grahamc/buildoutputs-index-hash-path
BuildOutputs: index path with HASH
2021-01-18 20:01:44 +01:00
bc4b96d053 BuildOutputs: index path with HASH
Looking at AWS' Performance Insights for a Hydra instance, I found
the hydra-queue-runner's query:

    select id, buildStatus, releaseName, closureSize, size
    from Builds b
    join BuildOutputs o on b.id = o.build
    where
      finished = ?
      and (buildStatus = ? or buildStatus = ?)
      and path = $1

was the slowest query by at least 10x. Running an explain on this
showed why:

hydra=> explain select id, buildStatus, releaseName, closureSize, size
    from Builds b join BuildOutputs o on b.id = o.build where
    finished = 1 and (buildStatus = 0 or buildStatus = 6) and
    path = '/nix/store/s93khs2dncf2cy273mbyr4fb4ns3db20-MIDIVisualizer-5.1';

                                                     QUERY PLAN
    ------------------------------------------------------------------------
     Gather  (cost=1000.43..33718.98 rows=2 width=56)
       Workers Planned: 2
       ->  Nested Loop  (cost=0.43..32718.78 rows=1 width=56)
             ->  Parallel Seq Scan on buildoutputs o  (cost=0.00..32710.32 rows=1 width=4)
                   Filter: (path = '/nix/store/s93kh...snip...'::text)
             ->  Index Scan using indexbuildsonjobsetidfinishedid on builds b
                                            (cost=0.43..8.45 rows=1 width=56)
                   Index Cond: ((id = o.build) AND (finished = 1))
                   Filter: ((buildstatus = 0) OR (buildstatus = 6))
    (8 rows)

A parallel sequential scan is definitely better than a plain sequential scan, but the
cost ranging from 0 to 32710 is not great. Looking at the table, I saw the `path`
column is completely unindexed:

    hydra=> \d buildoutputs
                Table "public.buildoutputs"
    Column |  Type   | Collation | Nullable | Default
    --------+---------+-----------+----------+---------
    build  | integer |           | not null |
    name   | text    |           | not null |
    path   | text    |           | not null |
    Indexes:
        "buildoutputs_pkey" PRIMARY KEY, btree (build, name)
    Foreign-key constraints:
        "buildoutputs_build_fkey" FOREIGN KEY (build) REFERENCES builds(id)
            ON DELETE CASCADE

Since we always do exact matches on the path and don't care about ordering,
and since the path column has very high cardinality, a `hash` index is a
good candidate. Note that I did test a btree index as well; it performed
similarly, though slightly worse.
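For reference, the btree variant tested differs only in the access method (and can't coexist with the hash index below under the same name):

    create index IndexBuildOutputsPath on BuildOutputs using btree(path);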

After creating the index (this took about 10 seconds) on a test database:

    create index IndexBuildOutputsPath on BuildOutputs using hash(path);

We get a *significantly* reduced cost:

    hydra=> explain select id, buildStatus, releaseName, closureSize, size
    hydra->     from Builds b join BuildOutputs o on b.id = o.build where
    hydra->     finished = 1 and (buildStatus = 0 or buildStatus = 6) and
    hydra->     path = '/nix/store/s93khs2dncf2cy273mbyr4fb4ns3db20-MIDIVisualizer-5.1';
                                                QUERY PLAN
    -------------------------------------------------------------------------------------------------------
    Nested Loop  (cost=0.43..41.41 rows=2 width=56)
    ->  Index Scan using buildoutputs_path_hash on buildoutputs o  (cost=0.00..16.05 rows=3 width=4)
            Index Cond: (path = '/nix/store/s93khs2dncf2cy273mbyr4fb4ns3db20-MIDIVisualizer-5.1'::text)
    ->  Index Scan using indexbuildsonjobsetidfinishedid on builds b  (cost=0.43..8.45 rows=1 width=56)
            Index Cond: ((id = o.build) AND (finished = 1))
            Filter: ((buildstatus = 0) OR (buildstatus = 6))
    (6 rows)

For direct comparison, the overall query plan was changed:

    From: Gather      (cost=1000.43..33718.98 rows=2 width=56)
    To:   Nested Loop (cost=0.43..41.41 rows=2 width=56)

and the query plan for buildoutputs changed from a maximum cost of
32,710 down to 16.

In practical terms, the query's planning and execution times were reduced:

Before (ms) | Try 1   | Try 2   | Try 3
------------+---------+---------+--------
Planning    |   0.898 |   0.416 |   0.383
Execution   | 138.644 | 172.331 | 375.585

After (ms)  | Try 1   | Try 2   | Try 3
------------+---------+---------+--------
Planning    |   0.298 |   0.290 |   0.296
Execution   | 219.625 |   0.035 |   0.034
2021-01-18 11:28:05 -05:00
be0aa7eb85 Merge pull request #841 from pingiun/github-login
Implement GitHub logins
2021-01-05 14:51:51 +01:00
43d662f63a Don't use enable_github_login option after all
Instead, the presence of the github_client_id option is used to determine
whether GitHub logins should be enabled.
2021-01-04 18:09:49 +01:00
c49ca66689 Die when no email is found 2021-01-04 18:09:05 +01:00
20d8134936 Update src/lib/Hydra/Controller/User.pm
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
2021-01-04 17:48:43 +01:00
19f9d8249f Update src/lib/Hydra/Controller/User.pm
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
2021-01-04 17:48:37 +01:00
20d09518f8 Merge pull request #839 from pingiun/shield-io
Add endpoint to generate a shields.io badge
2021-01-04 14:09:09 +01:00
525a229dac Convert validate-openapi to a Hydra job 2021-01-03 18:47:05 +01:00
ce7b23ae09 Disable broken validate-openapi test 2021-01-03 18:40:08 +01:00
c4062c2772 Merge pull request #842 from pingiun/also-trigger-flakes
Also trigger flake-based jobsets with the push-github endpoint
2021-01-03 18:28:29 +01:00
b59a5850a8 Merge branch 'receiveContents' of https://github.com/orivej/hydra 2021-01-03 18:26:04 +01:00
2a695a621d Merge branch 'update-for-nix2020' of https://github.com/matthewbauer/hydra 2021-01-03 18:24:35 +01:00
896e72169f flake.lock: Update
Flake input changes:

* Updated 'nix': 'github:NixOS/nix/79aa7d95183cbe6c0d786965f0dbff414fd1aa67' -> 'github:NixOS/nix/8a2ce0f455da32bc20978e68c0aad9efb4560abc'
2021-01-03 18:17:05 +01:00
bc10fd9d75 Merge pull request #831 from NixOS/dependabot/github_actions/cachix/install-nix-action-v12
Bump cachix/install-nix-action from v10 to v12
2021-01-03 18:12:07 +01:00
5f4eddbe57 Use email scope 2020-12-31 13:40:33 +01:00
e88355b3d4 Use email api call 2020-12-31 13:40:32 +01:00
d38ef4492e Update for Nix Value::type API changes
This updates hydra to be compatible with Nix https://github.com/NixOS/nix/pull/4355.

Along with https://github.com/NixOS/hydra/pull/840 needed for https://github.com/NixOS/nixpkgs/pull/107909

/cc @edolstra
2020-12-29 17:25:51 -06:00
1b3000e132 Allow push-github endpoint to also trigger flakes 2020-12-28 15:27:09 +01:00
bbd4891133 Implement GitHub logins
Requires the following configuration options:

    enable_github_login = 1
    github_client_id
    github_client_secret

Or github_client_secret_file, which points to a file containing the secret.
2020-12-28 14:37:03 +01:00
34a856c7ab Update for receiveContents taking string_view
nix change: https://github.com/NixOS/nix/commit/faa31f40
2020-12-26 11:23:26 +00:00
f64230b45e Add endpoint to generate a shields.io badge 2020-12-25 15:05:34 +01:00
bde8d81876 Merge pull request #811 from helsinki-systems/fix/override-constraint
Stop violating not null constraint
2020-11-22 00:02:26 +01:00
bd0ab9a5fb Stop violating not null constraint
Fixes this error:

ERROR: failed to process declarative jobset test:inputs,
DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st
execute failed: ERROR:  null value in column "emailoverride" violates
not-null constraint
2020-11-21 22:04:40 +01:00
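The failure is easy to reproduce in isolation; a toy illustration of the constraint (schema invented for the example, not Hydra's actual table definition):

    -- Toy reproduction; table and columns are illustrative only.
    CREATE TEMP TABLE jobsets_demo (
        name          text NOT NULL,
        emailOverride text NOT NULL
    );
    INSERT INTO jobsets_demo VALUES ('test:inputs', NULL);  -- fails: not-null constraint
    INSERT INTO jobsets_demo VALUES ('test:inputs', '');    -- the fix: empty string, not NULL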
b40db4c4f8 flake.lock: Update
Flake input changes:

* Updated 'nix': 'github:NixOS/nix/5e6eabe1551f3fa2a463fced39a4a48b504ed1ab' -> 'github:NixOS/nix/79aa7d95183cbe6c0d786965f0dbff414fd1aa67'
2020-11-19 17:23:39 +01:00
f47749a62d Fix persistent hash mismatch errors when importing
This would start happening if the network connection between the Hydra
server and the remote build server breaks after successfully importing
at least one output of a derivation, but before having finished
importing all outputs.

Fixes #816.
2020-11-10 04:50:35 +01:00
2dcdc06e3a Bump cachix/install-nix-action from v10 to v12
Bumps [cachix/install-nix-action](https://github.com/cachix/install-nix-action) from v10 to v12.
- [Release notes](https://github.com/cachix/install-nix-action/releases)
- [Commits](https://github.com/cachix/install-nix-action/compare/v10...07da2520eebede906fbeefa9dd0a2b635323909d)

Signed-off-by: dependabot[bot] <support@github.com>
2020-11-09 05:34:56 +00:00
0bdeb63a15 flake.lock: Update
Flake input changes:

* Updated 'nix': 'github:NixOS/nix/e0ca98c2071b815578470e280df8fdb750c7e23b' -> 'github:NixOS/nix/5e6eabe1551f3fa2a463fced39a4a48b504ed1ab'
2020-11-03 14:57:06 +01:00
73dfef364b Copy deriver field to the binary cache
Fixes https://github.com/NixOS/nixos-org-configurations/issues/129.
2020-11-02 17:08:02 +01:00
d50f9a242f Merge pull request #826 from Taneb/patch-2
Remove Debug prints from GitLabStatus.pm
2020-11-02 12:13:42 +01:00
2742fde8c2 Remove Debug prints from GitLabStatus.pm
These make the hydra-queue-runner logs very noisy even when not using the GitlabStatus plugin.
Also, they shouldn't be necessary except when developing the plugin itself and should have been removed before release.
2020-11-02 10:14:54 +00:00
87317812a8 Fix some broken indices
These indices basically did nothing since they put "id" first.

In particular this makes /job/.../all much faster.
2020-10-28 14:30:44 +01:00
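The underlying rule: a composite btree index only narrows a scan when the query constrains a prefix of its columns, and a unique "id" in front means no other column ever gets that chance. A toy contrast, with index names invented for the example:

    -- "id" is unique, so this index never helps a filter on finished:
    --     CREATE INDEX IndexBuildsBroken ON Builds(id, finished);
    -- Putting the filter column first makes the index usable:
    CREATE INDEX IndexBuildsFixed ON Builds(finished, id);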
2922f45b4f Ignore 'nix develop' outputs directory 2020-10-28 13:41:34 +01:00