181 Commits

Graham Christensen
5bd8dc171b hydra.sql: Update comment on regeneration
We no longer need to generate the hydra-postgres.sql document; it is a
relic from when SQLite was also supported.
2021-08-06 14:40:34 -04:00
Damien Diederen
df7dab1291 GitInput: Include deepClone option in the cache key
Without this commit, two jobsets using the same repository as input,
but different `deepClone` options, end up incorrectly sharing the same
"checkout" for a given (`uri`, `branch`, `revision`) tuple.  The
presence or absence of `.git` is determined by the jobset execution
order.

This patch adds the missing `isDeepClone` boolean to the cache key.

The database upgrade script empties the `CachedGitInputs` table, as we
don't know if existing checkouts are deep clones.  Unfortunately, this
generally forces rebuilds even for correct `deepClone` checkouts, as
the binary contents of `.git` are not deterministic.

Fixes #510
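
For illustration, a minimal SQL sketch of what such an upgrade step could
look like (the column name `isDeepClone` comes from the message above; the
column type and default are assumptions, not the actual script):

    -- hypothetical upgrade step: record deep-clone-ness and drop stale checkouts
    alter table CachedGitInputs add column isDeepClone boolean not null default false;
    delete from CachedGitInputs;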
2021-06-19 17:37:40 +02:00
Graham Christensen
a9e4ede006 SQL: create better indexes for builds based on the jobset id
These are primarily used by the jobsetOverview renderer.
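
As a rough illustration only, such an index might look like the following;
the index name and column list here are assumptions, not the actual definition:

    -- hypothetical example of a jobset_id-based index on Builds
    create index IndexBuildsOnJobsetId on Builds(jobset_id, finished, buildstatus);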
2021-06-01 11:23:22 -04:00
Ismaël Bouya
339a09f2e4
Fix check in jobsets
The current check happening in jobsets is incorrect.
The intended constraint is as follows:
- If type is 0 (legacy), then the flake field should be null, and
  both nixExprInput and nixExprPath should be non-null
- If type is 1 (flake), then the flake field should be non-null, and
  both nixExprInput and nixExprPath should be null

The current version will not catch (i.e. it will accept) situations
where you have, for instance:
type = 1, nixExprPath null, nixExprInput non-null, flake non-null

This commit fixes that.

I split that into two constraints, to make it more readable and
easier to extend if a new type appears in the future.

The complete query could instead be:
( type = 0
  AND nixExprInput IS NOT NULL AND nixExprPath IS NOT NULL AND flake IS NULL )
OR ( type = 1
  AND nixExprInput IS NULL AND nixExprPath IS NULL AND flake IS NOT NULL )

(but an "OR" cannot be split, hence the other formulation)
2021-02-03 22:14:53 +01:00
Graham Christensen
f1e75c8bff
Move evaluation errors from evaluations to EvaluationErrors, a new table
DBIx likes to eagerly select all columns, and there is no good way to
tell it not to. Therefore, this splits this one large column into its own
table.

I'd also like to make "jobsets" use this table too, but that is on hold
to stop the bleeding caused by the extreme amount of traffic this column
is generating.
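
A minimal sketch of what the new table and its wiring might look like; the
errorMsg/errorTime names come from the related "Schema: add errorMsg,
errorTime to JobsetEvals" commit below, while the foreign-key column name
is an assumption:

    create table EvaluationErrors (
        id        serial primary key not null,
        errorMsg  text,     -- the large column moved out of JobsetEvals
        errorTime integer   -- time associated with the error
    );

    -- hypothetical link from JobsetEvals to its error record
    alter table JobsetEvals add column evaluationerror_id integer references EvaluationErrors(id);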
2021-02-01 21:33:14 -05:00
Graham Christensen
ac3e8a4a59
jobsetevals: refer to jobset by ID 2021-01-26 11:50:37 -05:00
Graham Christensen
bf674a9653
hydra.sql: embed some in-line docs about schema changes 2021-01-26 11:50:36 -05:00
Graham Christensen
9516b256f1
Normalize nixexpr{input,path} from builds to jobsetevals.
Duplicating this data on every record of the builds table cost
approximately 4 GB in duplicated data.

Note that the database migration included took about 4h45m on an
untuned server which uses very slow rotational disks in a RAID5 setup,
with not a lot of RAM. I imagine in production it might take an hour
or two, but not 4. If this should become a chunked migration, I can do
that.

Note: Because of the question about chunked migrations, I have NOT
YET tested this migration thoroughly enough for merge.
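
For reference, a rough sketch of the shape such a migration could take (not
the actual script; using JobsetEvalMembers as the join table is an
assumption, and a chunked variant would look different):

    alter table JobsetEvals
        add column nixExprInput text,
        add column nixExprPath  text;

    -- backfill from a member build of each evaluation (hypothetical)
    update JobsetEvals e
       set nixExprInput = b.nixExprInput,
           nixExprPath  = b.nixExprPath
      from JobsetEvalMembers m
      join Builds b on b.id = m.build
     where m.eval = e.id;

    alter table Builds drop column nixExprInput, drop column nixExprPath;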
2021-01-22 09:10:18 -05:00
Graham Christensen
d9989b7fa1
Schema: add errorMsg, errorTime to JobsetEvals 2021-01-21 13:10:41 -05:00
Graham Christensen
bc4b96d053
BuildOutputs: index path with HASH
Looking at AWS' Performance Insights for a Hydra instance, I found
the hydra-queue-runner's query:

    select id, buildStatus, releaseName, closureSize, size
    from Builds b
    join BuildOutputs o on b.id = o.build
    where
      finished = ?
      and (buildStatus = ? or buildStatus = ?)
      and path = $1

was the slowest query by at least 10x. Running an explain on this
showed why:

hydra=> explain select id, buildStatus, releaseName, closureSize, size
    from Builds b join BuildOutputs o on b.id = o.build where
    finished = 1 and (buildStatus = 0 or buildStatus = 6) and
    path = '/nix/store/s93khs2dncf2cy273mbyr4fb4ns3db20-MIDIVisualizer-5.1';

                                                     QUERY PLAN
    ------------------------------------------------------------------------
     Gather  (cost=1000.43..33718.98 rows=2 width=56)
       Workers Planned: 2
       ->  Nested Loop  (cost=0.43..32718.78 rows=1 width=56)
             ->  Parallel Seq Scan on buildoutputs o  (cost=0.00..32710.32
                                                       rows=1
                                                        width=4)
                   Filter: (path = '/nix/store/s93kh...snip...'::text)
             ->  Index Scan using indexbuildsonjobsetidfinishedid on builds b
                                            (cost=0.43..8.45 rows=1 width=56)
                   Index Cond: ((id = o.build) AND (finished = 1))
                   Filter: ((buildstatus = 0) OR (buildstatus = 6))
    (8 rows)

A parallel sequential scan is definitely better than a sequential scan, but the
cost ranging from 0 to 32710 is not great. Looking at the table, I saw the `path`
column is completely unindexed:

    hydra=> \d buildoutputs
                Table "public.buildoutputs"
    Column |  Type   | Collation | Nullable | Default
    --------+---------+-----------+----------+---------
    build  | integer |           | not null |
    name   | text    |           | not null |
    path   | text    |           | not null |
    Indexes:
        "buildoutputs_pkey" PRIMARY KEY, btree (build, name)
    Foreign-key constraints:
        "buildoutputs_build_fkey" FOREIGN KEY (build) REFERENCES builds(id)
            ON DELETE CASCADE

Since we always do exact matches on the path and don't care about ordering,
and since the path column has very high cardinality, a `hash` index is a
good candidate. Note that I did test a btree index and it performed
similarly well, but slightly worse.

After creating the index (this took about 10 seconds) on a test database:

    create index IndexBuildOutputsPath on BuildOutputs using hash(path);

We get a *significantly* reduced cost:

    hydra=> explain select id, buildStatus, releaseName, closureSize, size
    hydra->     from Builds b join BuildOutputs o on b.id = o.build where
    hydra->     finished = 1 and (buildStatus = 0 or buildStatus = 6) and
    hydra->     path = '/nix/store/s93khs2dncf2cy273mbyr4fb4ns3db20-MIDIVisualizer-5.1';
                                                QUERY PLAN
    -------------------------------------------------------------------------------------------------------
    Nested Loop  (cost=0.43..41.41 rows=2 width=56)
    ->  Index Scan using buildoutputs_path_hash on buildoutputs o  (cost=0.00..16.05 rows=3 width=4)
            Index Cond: (path = '/nix/store/s93khs2dncf2cy273mbyr4fb4ns3db20-MIDIVisualizer-5.1'::text)
    ->  Index Scan using indexbuildsonjobsetidfinishedid on builds b  (cost=0.43..8.45 rows=1 width=56)
            Index Cond: ((id = o.build) AND (finished = 1))
            Filter: ((buildstatus = 0) OR (buildstatus = 6))
    (6 rows)

For direct comparison, the overall query plan was changed:

    From: Gather      (cost=1000.43..33718.98 rows=2 width=56)
    To:   Nested Loop (cost=   0.43.....41.41 rows=2 width=56)

and the query plan for buildoutputs changed from a maximum cost of
32,710 down to 16.

In practical terms, the query's planning and execution times were reduced:

Before (ms) | Try 1   | Try 2   | Try 3
------------+---------+---------+--------
Planning    |   0.898 |   0.416 |   0.383
Execution   | 138.644 | 172.331 | 375.585

After (ms)  | Try 1   | Try 2   | Try 3
------------+---------+---------+--------
Planning    |   0.298 |   0.290 |   0.296
Execution   | 219.625 |   0.035 |   0.034
2021-01-18 11:28:05 -05:00
Jelle Besseling
bbd4891133
Implement GitHub logins
Requires the following configuration options:
enable_github_login = 1
github_client_id
github_client_secret
or github_client_secret_file, which points to a file containing the secret.
2020-12-28 14:37:03 +01:00
Eelco Dolstra
87317812a8 Fix some broken indices
These indices basically did nothing since they put "id" first.

In particular this makes /job/.../all much faster.
2020-10-28 14:30:44 +01:00
Eelco Dolstra
d4e4be4fd1
Remove SHA-1 hash from BuildProducts
SHA-1 is deprecated and it will be expensive to compute with the
streaming NAR handler.
2020-07-27 18:24:10 +02:00
Eelco Dolstra
8adb433e3b
Remove the Jobs table
This table has been superfluous for a long time.
2020-05-27 20:09:36 +02:00
Eelco Dolstra
96a514c169
Remove the "releases" feature
We haven't used this in many years (it was really only used for nix
and patchelf releases).
2020-05-06 12:39:21 +02:00
Maximilian Bosch
efcbc08686
Get rid of dependency to SQLite
SQLite hasn't been properly supported by Hydra for a few years now[1], but
Hydra still depends on it. Apart from a slightly bigger closure, this can
confuse users, since Hydra picks up SQLite rather than
PostgreSQL by default if HYDRA_DBI isn't configured properly[2].

[1] 78974abb6943533c3d94c7bd3282bf738b7be61c
[2] https://logs.nix.samueldr.com/nixos-dev/2020-04-10#3297342;
2020-04-16 00:42:40 +02:00
Graham Christensen
5fae9d96a2
hydra-evaluator: add a 'ONE_AT_A_TIME' evaluator style
In the past, jobsets which are automatically evaluated were evaluated
regularly, on a schedule. This schedule means a new evaluation is
created every checkInterval seconds (assuming something changed).

This model works well for architectures where our build farm can
easily keep up with demand.

This commit adds a new type of evaluation, called ONE_AT_A_TIME, which
only schedules a new evaluation if the previous evaluation of the
jobset has no unfinished builds.

This model of evaluation lets us have 'low-tier' architectures.

For example, we could now have a jobset for ARMv7l builds, where
the buildfarm only has a single, underpowered ARMv7l builder.
Configuring that jobset as ONE_AT_A_TIME will create an evaluation
and then won't schedule another evaluation until every job of
the existing evaluation is complete.

This way, the cache will have a complete collection of pre-built
software for some commits, but the underpowered architecture will
never become backlogged in ancient revisions.
2020-03-03 19:28:44 -05:00
Graham Christensen
027668f0db
hydra.sql: add an index for slow queries in production
These queries used to use (jobset, project) based indexes,
and the addition of jobset_id makes most of those indexes
unusable now.
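
One plausible index of this kind, borrowing the IndexBuildsOnJobsetIdFinishedId
name that appears in the query plans in other commits here; the exact column
list is an assumption:

    create index IndexBuildsOnJobsetIdFinishedId on Builds(jobset_id, finished, id desc);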
2020-02-11 12:52:28 -05:00
Graham Christensen
2637a7ad76
Builds: index literally what latest-finished queries 2020-02-11 07:06:21 -05:00
Graham Christensen
8ef08f1385
Builds.jobset_id: make not-null 2020-02-11 07:06:20 -05:00
Graham Christensen
2cdcc7f188
Jobs.jobset_id: make not-null 2020-02-11 07:06:17 -05:00
Graham Christensen
ddf00fa627
Builds: add a nullable jobset_id foreign key to Jobsets.
Also, adds an explicitly named "builds" accessor to the Jobsets
Schema object, which uses the project/jobset name.
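
A minimal sketch of the column addition described above, assuming the
referenced column is Jobsets.id:

    alter table Builds add column jobset_id integer null references Jobsets(id);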
2020-02-10 11:43:02 -05:00
Graham Christensen
efa1f1d4fb
Jobs: add a nullable jobset_id foreign key to Jobsets.
Also, adds an explicitly named "jobs" accessor to the Jobsets
Schema object, which uses the project/jobset name.
2020-02-10 11:43:02 -05:00
Graham Christensen
e00030563b
Jobsets: add a SERIAL, unique, non-null id column
A PostgreSQL column which is non-null and unique is treated with
the same optimisations as a primary key, so we have no need to
try to recreate the `id` as the primary key.

No read paths are impacted by this change, and the database will
automatically create an ID for each insert. Thus, no code needs to
change.
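
A minimal sketch of the change, assuming it is expressed as a single ALTER TABLE:

    alter table Jobsets add column id serial not null unique;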
2020-02-10 11:42:59 -05:00
Graham Christensen
6fe57ab5fa
Copy the flake migration from the flake branch
hydra.nixos.org is already running this rev, and it should be safe to
apply to everyone else. If we make changes to this migration, we'll
need to write another migration anyway.
2020-02-09 15:21:28 -05:00
Antoine Eiche
8a0a5ec3a3 Create extension pg_trgm in the NixOS module
The creation of the `pg_trgm` extension requires superuser privileges. So,
this patch creates the extension from the Hydra NixOS module when
a local database is used.

If it is not possible to create this extension (for instance, on a remote
database with no superuser access), the creation of the `pg_trgm` index is
skipped (this index speeds up queries on builds.drvpath) and warnings
are emitted:

    initialising the Hydra database schema...
    WARNING:  Can not create extension pg_trgm: permission denied to create extension "pg_trgm"
    WARNING:  HINT: Temporary provide superuser role to your Hydra Postgresql user and run the script src/sql/upgrade-57.sql
    WARNING:  The pg_trgm index on builds.drvpath has been skipped (slower complex queries on builds.drvpath)

This keeps migrations smooth: the migration process doesn't
require a manual step (though the manual step is recommended on big
remote databases).
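
The extension creation itself is the standard PostgreSQL statement, shown
here for reference with IF NOT EXISTS for idempotence; the module's exact
invocation may differ:

    create extension if not exists pg_trgm;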
2019-07-25 16:19:33 +02:00
Antoine Eiche
42784a9053 sql: refactor some sql statements to lowercase 2019-07-25 16:19:33 +02:00
Antoine Eiche
7935cffd6b Create a pg_trgm index on builds.drvpath
The search query uses the LIKE operator, which requires a sequential
scan (it can't use the already existing B-tree index). This new
index (trigram) avoids a sequential scan of the builds table when the
LIKE operator is used.

Here is the EXPLAIN ANALYZE of a query on the builds table with this index:

     explain analyze select * from builds where drvpath like '%k3r71gz0gv16ld8rhcp2bb8gb5w1xc4b%';
                                                                 QUERY PLAN
     -----------------------------------------------------------------------------------------------------------------------------------
      Bitmap Heap Scan on builds  (cost=128.00..132.01 rows=1 width=492) (actual time=0.070..0.077 rows=1 loops=1)
        Recheck Cond: (drvpath ~~ '%k3r71gz0gv16ld8rhcp2bb8gb5w1xc4b%'::text)
        ->  Bitmap Index Scan on indextrgmbuildsondrvpath  (cost=0.00..128.00 rows=1 width=0) (actual time=0.047..0.047 rows=3 loops=1)
              Index Cond: (drvpath ~~ '%k3r71gz0gv16ld8rhcp2bb8gb5w1xc4b%'::text)
      Total runtime: 0.206 ms
     (5 rows)
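
For reference, a plausible definition of this index; the name matches the
one in the plan above, but whether it uses a GIN or GiST trigram index is
an assumption:

    create index IndexTrgmBuildsOnDrvPath on Builds using gin (drvpath gin_trgm_ops);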
2019-06-06 14:28:25 +02:00
Eelco Dolstra
e9670641ec
Distinguish build step states
The web interface now shows whether a build step is connecting,
copying inputs/outputs, building, etc.
2017-12-07 15:35:31 +01:00
Eelco Dolstra
7c976d2aec
hydra-queue-runner: Make build notification more reliable
Previously, when hydra-queue-runner was restarted, any pending "build
finished" notifications were lost. Now hydra-queue-runner marks
finished but unnotified builds in the database and uses that to run
pending notifications at startup.
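
A minimal sketch of the kind of marker this implies; the column name and
type are assumptions, not the actual schema change:

    -- hypothetical: remember when a finished build still needs its notification sent
    alter table Builds add column notificationPendingSince integer;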
2017-07-26 15:17:51 +02:00
Eelco Dolstra
dc5e0b120a
Fix a race that can cause hydra-queue-runner to ignore newly added builds
As @dtzWill discovered, with the concurrent hydra-evaluator, there can
be multiple active transactions adding builds to the database. As a
result, builds can become visible in a non-monotonically increasing
order, breaking the queue monitor's assumption that build IDs only go
up.

The fix is to have hydra-eval-jobset provide the lowest build ID it
just added in the builds_added notification, and have the queue
monitor check from there.

Fixes #496.
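
For illustration, the notification described here could be raised with a
plain PostgreSQL NOTIFY; the payload value and formatting are hypothetical:

    -- announce newly added builds, carrying the lowest new build ID as the payload
    notify builds_added, '12345';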
2017-07-21 14:34:48 +02:00
Eelco Dolstra
285754aff6
hydra-evaluator improvements
* The "Jobset" page now shows when evaluations are in progress (rather
  than just pending).

* Restored the ability to do a single evaluation from the command line
  by doing "hydra-evaluator <project> <jobset>".

* Fix some consistency issues between jobset status in PostgreSQL and
  in hydra-evaluator. In particular, "lastCheckedTime" was never
  updated internally.
2017-03-15 16:59:57 +01:00
Renzo Carbonara
de55303197 Fix SQL syntax for migration 50. It failed with PostgreSQL 9.4. (#416) 2016-12-09 18:18:17 +01:00
Eelco Dolstra
f6081668dc
Allow determinism checking for entire jobsets
Setting

  xxx-jobset-repeats = patchelf:master:2

will cause Hydra to perform every build step in the specified jobset 2
additional times (i.e. 3 times in total). Non-determinism is not fatal
unless the derivation has the attribute "isDeterministic = true"; we
just note the lack of determinism in the Hydra database. This will
allow us to get stats about the (lack of) reproducibility of all of
Nixpkgs.
2016-12-07 15:57:13 +01:00
Eelco Dolstra
8bb36e79bd
Support testing build determinism
Builds can now specify the attribute "isDeterministic = true" to tell
Hydra to build with build-repeat > 0. If there is a mismatch between
rounds, the step / build fails with a suitable status.

Maybe this should be a meta attribute, but that makes it invisible to
hydra-queue-runner, and it seems reasonable to make a claim of
mandatory determinism part of the derivation (since e.g. enabling this
flag should trigger a rebuild).
2016-12-06 17:46:06 +01:00
Eelco Dolstra
b3169ce438 Kill active build steps when builds are cancelled
We now kill active build steps when there are no more referring
builds. This is useful e.g. for preventing cancelled multi-hour TPC-H
benchmark runs from hogging build machines.
2016-10-31 14:58:29 +01:00
Eelco Dolstra
a12e9478e7 When manually scheduling an eval, force re-instantiation of store derivations
Without this, if (failed or aborted) derivations have been
garbage-collected, there is no way to restart them, which is very
annoying. Now we set a forceEval flag in the jobset to cause it to be
re-evaluated even if none of the inputs have changed.
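
A minimal sketch of such a flag, assuming it lives as a column on Jobsets
(the name forceEval comes from the message; the type and example values are
hypothetical):

    alter table Jobsets add column forceEval boolean;

    -- set when an evaluation is scheduled manually (example project/jobset names)
    update Jobsets set forceEval = true where project = 'patchelf' and name = 'master';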
2016-10-24 20:20:20 +02:00
Eelco Dolstra
bbe45ed844 Remove Persona support
Persona is no longer supported by Mozilla, so let's remove it.
2016-10-20 14:14:04 +02:00
Eelco Dolstra
e0b2921ff2 Concurrent hydra-evaluator
This rewrites the top-level loop of hydra-evaluator in C++. The Perl
stuff is moved into hydra-eval-jobset. (Rewriting the entire evaluator
would be nice but is a bit too much work.) The new version has some
advantages:

* It can run multiple jobset evaluations in parallel.

* It uses PostgreSQL notifications so it doesn't have to poll the
  database. So if a jobset is triggered via the web interface or from
  a GitHub / Bitbucket webhook, evaluation of the jobset will start
  almost instantaneously (assuming the evaluator is not at its
  concurrency limit).

* It imposes a timeout on evaluations. So if e.g. hydra-eval-jobset
  hangs connecting to a Mercurial server, it will eventually be
  killed.
2016-10-14 14:22:12 +02:00
Eelco Dolstra
f70946efca Allow public dashboards
Dashboards can now be marked as publicly visible in the user
preferences. The dashboard URL has changed from /user/<name>/dashboard
to /dashboard/<name> because /user/<name> requires being logged in as
<name> or as an admin.
2016-05-27 14:35:32 +02:00
Shea Levy
4392d3e21d Enable declarative projects.
This allows fully declarative project specifications. This is best
illustrated by example:

* I create a new project, setting the declarative spec file to
  "spec.json" and the declarative input to a git repo pointing
  at git://github.com/shlevy/declarative-hydra-example.git
* hydra creates a special ".jobsets" jobset alongside the project
* Just before evaluating the ".jobsets" jobset, hydra fetches
  declarative-hydra-example.git, reads spec.json as a jobset spec,
  and updates the jobset's configuration accordingly:
{
    "enabled": 1,
    "hidden": false,
    "description": "Jobsets",
    "nixexprinput": "src",
    "nixexprpath": "default.nix",
    "checkinterval": 300,
    "schedulingshares": 100,
    "enableemail": false,
    "emailoverride": "",
    "keepnr": 3,
    "inputs": {
        "src": { "type": "git", "value": "git://github.com/shlevy/declarative-hydra-example.git", "emailresponsible": false },
        "nixpkgs": { "type": "git", "value": "git://github.com/NixOS/nixpkgs.git release-16.03", "emailresponsible": false }
    }
}
* When the "jobsets" job of the ".jobsets" jobset completes, hydra
  reads its output as a JSON representation of a dictionary of
  jobset specs and creates a jobset named "master" configured
  accordingly (In this example, this is the same configuration as
  .jobsets itself, except using release.nix instead of default.nix):
{
    "enabled": 1,
    "hidden": false,
    "description": "js",
    "nixexprinput": "src",
    "nixexprpath": "release.nix",
    "checkinterval": 300,
    "schedulingshares": 100,
    "enableemail": false,
    "emailoverride": "",
    "keepnr": 3,
    "inputs": {
        "src": { "type": "git", "value": "git://github.com/shlevy/declarative-hydra-example.git", "emailresponsible": false },
        "nixpkgs": { "type": "git", "value": "git://github.com/NixOS/nixpkgs.git release-16.03", "emailresponsible": false }
    }
}
2016-05-09 08:54:27 -04:00
Eelco Dolstra
a7755678fe Drop unused BuildProducts.description column 2016-04-13 16:30:52 +02:00
Eelco Dolstra
ac23bd1539 Revert "Apply IndexBuildsOnJobFinishedId to unfinished builds only"
This reverts commit 1de5ce7a0e4c28acd3766fa19a9b94fd75436430.
2016-03-16 17:04:20 +01:00
Eelco Dolstra
1de5ce7a0e Apply IndexBuildsOnJobFinishedId to unfinished builds only 2016-03-16 15:17:10 +01:00
Eelco Dolstra
4151be7e69 Make the output size limit configurable
The maximum output size per build step (as the sum of the NARs of each
output) can be set via hydra.conf, e.g.

  max-output-size = 1000000000

The default is 2 GiB.

Also refactored the build error / status handling a bit.
2016-03-09 17:00:09 +01:00
Eelco Dolstra
80ff78b1b6 Unify build and step status codes
Also remove the obsolete status code 5 from the database.
2016-03-09 15:30:43 +01:00
Eelco Dolstra
d7a123fcd4 Keep track of the time we spend copying to/from build machines 2016-02-17 10:30:23 +01:00
Eelco Dolstra
e46acbf05b Remove the errorMsg column from the Builds table
The queue runner no longer uses this field, and it doesn't provide
very interesting historical data (mostly SSH failures), but it takes
up a lot of space. Also, it contained some bad UTF-8 which was
preventing an upgrade to Postgres 9.5, so it was a good occasion to get rid
of it.
2016-02-12 17:21:55 +01:00
Eelco Dolstra
4d1816b152 Remove obsolete Builds columns and provide accurate "Running builds"
This removes the "busy", "locker" and "logfile" columns, which are no
longer used by the queue runner. The "Running builds" page now only
shows builds that have an active build step.
2015-10-27 15:37:17 +01:00
Eelco Dolstra
30823078c4 Merge branch 'custom-channels' of https://github.com/aszlig/hydra 2015-10-16 17:00:29 +02:00