
0.2.138 Release#393

Open
dancoates wants to merge 111 commits into main from draft-feb-26

Conversation


@dancoates dancoates commented Feb 1, 2026

Please do not forget to run `terraform apply` before deployment.

(We may need terraform gymnastics similar to the previous release's, due to 46d5635.)

chrisvittal and others added 25 commits December 9, 2025 16:20
Primarily for dependabot alerts about urllib3

## Security Assessment
- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating
- This change has a low security impact

### Impact Description
Dependency update to resolve security issue.

### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
…hail-is#15205)

## Change Description

Fixes hail-is/hail-security#67

Currently, worker VMs are created with external IP addresses. However,
there is no operational reason they need them, as we don't want our
workers to be accessible from the public internet. This change updates
our worker VM config (sent to GCP for worker creation) to stop assigning
an external IP.

## Security Assessment
- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating
- This change has a medium security impact

### Impact Description

While removing public IPs from our workers makes things generally more
secure, this change still relates to changing our worker VM
configuration and communication.

### Appsec Review

- [ ] Required: The impact has been assessed and approved by appsec
What?

- Refactors hail/build.mill to use mill's multi-file builds.
- Organises dependencies via their Maven coordinate to avoid typing the
full coordinate each time.
- Uses Scala 3 syntax

Why?

- modules like `shadeazure` and `ir-gen` are independent of the root
module's cross value
- listing all mvn coordinates is useful for a subsequent change where I
define boms for various pyspark distributions.
- Scala 3 syntax is a lot nicer. I don't think the overheads are
particularly strenuous.

This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
## Change Description

Stop mirroring back an invalid "next" page specified at login.
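
A minimal sketch of the kind of check involved (a hypothetical helper; the actual auth-service logic may differ): only a same-site relative path is ever echoed back as a redirect target.

```python
from urllib.parse import urlparse

def safe_next(next_url: str, default: str = "/") -> str:
    """Return next_url only if it is a same-site relative path, else default.

    Illustrative only: rejects absolute URLs, protocol-relative '//host'
    paths, and anything not rooted at '/'.
    """
    if not next_url:
        return default
    parsed = urlparse(next_url)
    # A scheme (e.g. 'javascript:') or a netloc means an off-site target.
    if parsed.scheme or parsed.netloc or next_url.startswith("//"):
        return default
    if not next_url.startswith("/"):
        return default
    return next_url
```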

## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating
- This change has a low security impact


### Impact Description

Prevents a potential reflected XSS

### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
…5210)

A user reported a 500 when trying to log in on zulip ([#Hail Batch
support > unable to log in -- 500
error](https://hail.zulipchat.com/#narrow/channel/223457-Hail-Batch-support/topic/unable.20to.20log.20in.20--.20500.20error/with/563247607)).

We run into this internal server error because we were failing
[this](https://github.com/hail-is/hail/blob/5d54486fcac06f6b7b2e8af380f812d1552343fc/auth/auth/auth.py#L438)
assertion, as we were not handling the 'inactive' status. This change adds
the 'inactive' status to the list of errors, and also makes sure to set
an http error status when rendering the `account-errors.html` template.
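
As a sketch of the shape of the fix (the status names other than 'inactive' and the concrete status code here are assumptions, not Hail's actual values):

```python
# Hypothetical sketch: map non-active login states to an error template
# rendered with an explicit HTTP error status, instead of failing an
# assertion. 'inactive' is the newly handled state.
ERROR_STATUSES = {"deleting", "deleted", "inactive"}

def login_response(user_state: str) -> tuple[str, int]:
    """Return (template, http_status) for a login attempt (illustrative)."""
    if user_state in ERROR_STATUSES:
        # Render the error page with a 4xx status rather than 200.
        return ("account-errors.html", 403)
    assert user_state == "active"
    return ("index.html", 200)
```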

## Security Assessment
- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating
- This change has a low security impact

### Impact Description
Changes the response codes for one portion of the login flow, and makes it
so that the server doesn't fail when handling login of an inactive user.

### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
Replace generated `build-info.properties` with a generated Scala package
object.
Renamed terms to be less shouty.
Retired useless `BuildConfiguration` enum.

This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
## Change Description

Updates Cloud NAT to have a more realistic number of ports, allows the
number to float, and adds routers and NATs for all regions where worker
VMs might be created.

## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a medium security impact

### Impact Description

Networking change, but in analogy to what already exists, and in the
service of adding a secure layer between our worker vms and the outside
world


### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
Updates the dev deploy build script to link us out to the
`batch.hail.is` UI pages rather than `ci.hail.is` pages.

This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
Later versions of scalac will issue the following warning:
```
object AnyRefMap in package mutable is deprecated (since 2.13.16): Use scala.collection.mutable.HashMap instead for better performance.
```
Seems fair enough. Spark has also followed suit:
https://gitmemories.com/apache/spark/issues/48128

This change has no impact on the Broad-managed hail batch deployment in
GCP.
## Change Description

Sets the max connections per VM higher, and makes it a power of 2, as
required by the API
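
The power-of-two requirement can be checked and satisfied with a standard bit trick; a small illustrative sketch (helper names are made up, not the actual terraform/API surface):

```python
def is_power_of_two(n: int) -> bool:
    # A positive integer is a power of two iff exactly one bit is set.
    return n > 0 and (n & (n - 1)) == 0

def round_up_ports(requested: int) -> int:
    """Round up to the next power of two, since the NAT API requires the
    per-VM port count to be a power of two (illustrative helper)."""
    ports = 1
    while ports < requested:
        ports *= 2
    return ports
```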

## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a low security impact

### Impact Description

Small config change to un-break the terraform file and give more
connection space to VMs

### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
## Change Description

Fixes a bunch of the easy-to-fix 2.13 deprecation warnings.

## Security Assessment
- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
…15231)

This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
## Change Description

Add documentation for refreshing the trivy-scanner gsa key

## Security Assessment

- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
TUnion was never implemented and so is safe to delete.  
This change does not impact the Broad-managed hail batch deployment in
GCP.
## Change Description

Python dependency bump

## Security Assessment
This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a medium security impact

### Impact Description

Dependency updates

### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
There have been open FIXMEs to remove this EType in favor of custom
value readers and writers. This change does that.

## Security Assessment
- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
Use the regular backend `persist`/`unpersist` for BlockMatrix persist
rather than a call to the nonexistent `persist_blockmatrix` that was
added in hail-is#12864.

Fixes hail-is#15229 

## Security Assessment
- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
While this was convenient for 2.12 (a built-in method exists in 2.13), it was
triggering a pattern compilation for every fileListEntry. Lifting the
pattern out removes the need for this utility.

This change does not impact the Broad-managed hail batch deployment in GCP.
LiftMeOut takes a non-strict argument.
This change does not affect the broad-managed batch service in gcp.
Non-functional change only.
Quality of life change for me while I scan through IR generation
and try to determine which child IRs are strict or non-strict and
update their corresponding block-args in the pretty printer.

This change does not affect the broad-managed batch service in gcp.
…15237)

This message can contain '...' or "..." in different locales.

## Security Assessment

[Edited by Chris L:

- This change might impact the Hail Batch instance as deployed by Broad
Institute in GCP

This change prevents an accidental DoS on our worker VMs whereby the
docker pull mechanism gets stuck in an infinite loop of retrying futile
requests.
Adding for completeness.

This operation is used internally by the compiler and isn't exposed to
python. We were never getting an assertion from the prettyprinter as
this operation is generated and immediately executed in
LowerAndExecuteShuffles.

This change does not affect Batch.
Adds blockargs for AggFilter
This change does not affect the broad-managed batch service in GCP.
Missing AggGroupBy blockargs.
This change cannot affect the broad-managed batch service in gcp.
patrick-schultz and others added 4 commits February 3, 2026 10:12
…is#15238)

## Change Description

Use the wartremover scala compiler plugin to enforce using the
`override` keyword when implementing/overriding a method from a parent
class. This is generally considered a best practice, and e.g. helps with
refactoring when deleting a method or changing its signature, as any
implementing/overriding methods in child classes will now produce
compiler errors.

## Security Assessment
- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
## Change Description

Updates worker VMs to only use ipv4 network stacks.

This:
- Makes more sense given that they are now behind an ipv4 only cloudNAT
- Will hopefully prevent a race condition causing some docker accesses
to fail


## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a low security impact

### Impact Description

Unlikely to have any security impact. At the margins it's an improvement
because it reduces the number of interfaces that the VM opens, and
forces the expected "ipv4 interface behind a cloud NAT" networking
model.


### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
## Change Description

Deletes the vestigial `EmitRegion` class. Once upon a time, it was used
to package together a MethodBuilder, and the Region argument to the
method (which was usually the first argument). But now it confuses more
than it helps, like in the common case where we also pass a CodeBuilder,
which itself has a reference to a MethodBuilder.

## Security Assessment
- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
…5257)

This change does not affect the broad-managed batch service in gcp.
ehigham and others added 30 commits March 11, 2026 17:00
Enables configuration of the maximum number of partition
results the 'batch' query backend can read in parallel.

This can be done via

- the 'max_read_parallelism' init parameter
- hailctl config set query/batch_backend_max_read_parallelism N

This change does not affect the broad-managed batch service in gcp.
…s#15305)

Mostly non-functional change that modifies the alias
`ContextRDD.ElementType[A]` from

```scala
RVDContext => Iterator[A]
```

to

```scala
(HailClassLoader, RVDContext) => Iterator[A]
```

By doing so, we eliminate many of the uses of
`theHailClassLoaderForSparkWorkers`, some of which were in error.

This change does not affect the broad-managed batch service in GCP.
## Change Description

Start capturing retried failed tests as a proxy for flaky tests. 

This will allow us to follow up with additional tracking of the most
common failures, make lists of links to similar test failures for bulk
analysis, etc.

Potentially, we could even add a "YOLO merge anyway" button as an
alternative to Retry, but let's start with a simple tracking change and
see how far we can get with just improving the tests.

Example row recorded by a test Retry:

```
mysql> select * from retried_tests;
+----+----------+--------+--------------+--------+-----------+-----------+-------------------+------------------------+------------------------------------------+------------+---------------------+
| id | batch_id | job_id | job_name     | state  | exit_code | pr_number | target_branch     | source_branch          | source_sha                               | retried_by | retried_at          |
+----+----------+--------+--------------+--------+-----------+-----------+-------------------+------------------------+------------------------------------------+------------+---------------------+
|  1 |  8370753 |    186 | test_batch_4 | Failed |         1 |     15320 | hail-is/hail:main | refresh-cert-on-deploy | 1518edb | chrisl     | 2026-03-10 19:03:09 |
+----+----------+--------+--------------+--------+-----------+-----------+-------------------+------------------------+------------------------------------------+------------+---------------------+
1 row in set (0.01 sec)
```
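
A minimal sketch of recording such a row, using sqlite3 and a schema abridged from the example above (the real service uses MySQL and more columns):

```python
import sqlite3

# In-memory stand-in for the new retried_tests table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE retried_tests (
        id INTEGER PRIMARY KEY,
        batch_id INTEGER, job_id INTEGER, job_name TEXT,
        state TEXT, exit_code INTEGER, retried_by TEXT
    )
""")

def record_retry(batch_id, job_id, job_name, state, exit_code, retried_by):
    """Record that a failed test job was retried (illustrative helper)."""
    conn.execute(
        "INSERT INTO retried_tests "
        "(batch_id, job_id, job_name, state, exit_code, retried_by) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (batch_id, job_id, job_name, state, exit_code, retried_by),
    )

record_retry(8370753, 186, "test_batch_4", "Failed", 1, "chrisl")
```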

## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a low security impact

### Impact Description

Store well defined data types (that are already being stored elsewhere
in the db) in a new table, as part of an existing development workflow.

### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
Added benchmark parameters for all benchmark tests in the query codebase
in `batch-config.yaml`. This file lists, for all benchmarks:

- Number of burn in iterations (untimed trials)
- The mean runtime and coefficient of variation when the analyses were
performed
- Configurations for benchmarking at differing fidelity:
    - The minimal detectable `slowdown` factor
    - Number of instances (batch jobs) to use
- Number of iterations (how many timed trials of the benchmark per
instance)

These parameters allow for more precise and controlled benchmark
measurements by explicitly defining the statistical requirements and
execution parameters for each test.

These parameters are now used by the benchmark_in_batch.py script.
Currently only high fidelity (expensive) configurations are supported.
Collecting these configurations was a long, boring process, so I included
the others while I was at it.

To run the benchmarks, simply run from the hail root directory:

```bash
make benchmark
```

When last run, the benchmark suite took 14 hours to complete and cost 50
USD. Much of this runtime comes from benchmarks failing to time out when
at the memory limit, and from preemptions.

This change has no impact on the broad-managed batch service in gcp.
## Change Description
Currently, the `create_test_database_server_config` script in
`build.yaml` determines whether to regenerate TLS certificates by
checking the most recent timestamp in the Kubernetes secret's
`.metadata.managedFields` and checking whether it's over 365 days old.
If it is, the script regenerates the secret, otherwise it leaves it as
is. However, Kubernetes operations (e.g. an `Update` performed by
GoogleContainerEngine) can create new timestamps without actually
refreshing the certificate, causing the script to incorrectly believe
the certificate is still valid when it has actually expired.

This change replaces the current timestamp-based check with a direct
certificate expiration check on the actual certificate stored in the
secret, and thus no longer relies on heuristics around timestamp (and,
furthermore, means we're not impeded by non-regenerative K8s operations
that produce timestamps).

This change has been verified by:
- [Dev deploying the branch](https://batch.hail.is/batches/8368701) and
seeing that i) the deploy_test_db job succeeded, and ii) all jobs
depending on create_test_database_server_config succeeded.
- Verifying the new command locally in my own namespace:
```
% kubectl get secret database-server-config -n grohlice -o json | jq -r '.data["server-ca.pem"]' | base64 -d | openssl x509 -enddate -checkend 0       
notAfter=Mar  2 17:11:19 2027 GMT
Certificate will not expire
```
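
The same expiry check can be sketched in Python by parsing the `notAfter=` line that `openssl x509 -enddate` prints (illustrative only; the actual script's mechanism may differ):

```python
from datetime import datetime, timezone

def cert_expired(not_after_line: str, now: datetime) -> bool:
    """Parse a line like 'notAfter=Mar  2 17:11:19 2027 GMT' and compare
    the certificate's expiry against `now` (illustrative helper)."""
    value = not_after_line.split("=", 1)[1].strip()
    expiry = datetime.strptime(
        value, "%b %d %H:%M:%S %Y %Z"
    ).replace(tzinfo=timezone.utc)
    return now >= expiry
```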

## Security Assessment
- This change does not impact the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a low security impact

### Impact Description
This change only impacts test database deployments and should not impact
the production deployment/databases.

### Appsec Review

- [ ] Required: The impact has been assessed and approved by appsec
Behaviour verified in dev-deploy.

Note that async-profiler switched to a new binary command-line program,
`asprof`, which is now added to the batch worker's path instead of
hard-coding the path to `profile.sh` in worker python code.

### Impact Rating
This change has a low security impact

### Impact Description
This change only affects query-on-batch jobs when the HAIL_QUERY_PROFILE
flag is set,
which is typically a developer workflow and not something users are
expected to do.

### Appsec Review

- [ ] Required: The impact has been assessed and approved by appsec
…il-is#15347)

In 0bdffbc, I tried to simplify the logic around liftovers and fasta
sequence indexes.
As an unintended side-effect, this caused unnecessary reads of their
corresponding files each time
a partition function is loaded.

Claude's report:
```
Potential regression (severity 2/5): ReferenceGenome.heal now unconditionally calls liftovers.values.foreach(_.restore(fs)) and fastaFile.restore(fs), which re-read and re-parse chain files and FASTA indexes from the filesystem on every worker task. Previously, these were conditionally skipped if paths hadn't changed. Chain files can be hundreds of MB for whole-genome liftover. This is a real regression for pipelines using liftover or FASTA sequences on distributed backends with many tasks.
```

This change does not affect the Broad-managed hail-batch service in GCP.
## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating
- This change has no security impact

### Impact Description
Tooling and pinned version updates

### Appsec Review
- [x] Required: The impact has been assessed and approved by appsec
## Change Description

Fixes the Scala 2.13 deprecation warnings about implicitly converting
`Array` types to `Seq` types, which causes a silent copy. To fix this, I
updated many/most of our uses of arrays to use `ArraySeq` instead, and
many/most of our `Array` typed function parameters to take `IndexedSeq`.
This eliminated many copies in both direction (the silent copy of the
`Array` to `Seq` conversion, and a lot of explicit `IndexedSeq.toArray`
calls).

## Security Assessment

- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
## Change Description

e06f316 potentially affects runtime by converting while-loops to
higher-level foreach constructs.
These changes should have been verified with benchmarks to ensure
they're safe. This change reverts those and goes a little further in
hot-loops in the hope that it may ease allocation pressure.

This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
Fixes the following found by claude:

```
a51c7c0 Edmund Higham [qob] Expose driver maximum read parallelism ([hail-is#15322](hail-is#15322))

Severity: 4, Confidence: 5, str(None) produces the string "None" instead of None, breaking default max_read_parallelism configuration.
In service_backend.py lines 158-160, when max_read_parallelism is None (the default), str(None) evaluates to the string "None". 
Since configuration_of checks if explicit_argument is not None, this string passes through and is serialized into JSON for the Scala driver. 
On the Scala side (BatchQueryDriver.scala line 261), the field is typed Option[Int], and json4s will fail to parse "None" as an integer, 
crashing the driver. This affects every user who does NOT explicitly set max_read_parallelism.
```
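
This class of bug can be reproduced in a few lines (the helper names here are hypothetical, not Hail's actual API):

```python
def serialize_flag(explicit_value):
    """Buggy shape: unconditionally stringify, so None becomes 'None'."""
    return str(explicit_value)

def serialize_flag_fixed(explicit_value):
    """Fixed shape: preserve None so a downstream 'is not None' check
    still treats the default as unset, and the Option[Int] side never
    sees the string 'None'."""
    return None if explicit_value is None else str(explicit_value)
```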

This change does not affect the broad-managed batch service in gcp.
Fixes the following finding from Claude by broadcasting the globals in
SparkBackend, making sure to destroy the broadcast value after use.
```
SparkBackend no longer broadcasts globals; they are captured in the RDD task closure and serialized per-task. Previously collectDArray used backend.broadcast(globals) (sent once, cached per executor). Now globals are captured directly in the RDD's compute method and serialized for every task. For workloads with large globals and hundreds/thousands of partitions, this increases driver serialization time, network bandwidth, and GC pressure. Only affects the Spark backend.
```

This change does not affect the broad-managed batch service in gcp.
We seem to only test vep on release. Claude found this mistake. Hooray for our ai overlords.
This change does not impact the broad-managed batch service in gcp.
…ail-is#15330)

I don't like this, but 'how do I keep the filters in the VDS combiner?'
has been a question we've received for a while. Finally, there's an
answer.

Add the parameter to new_combiner, VariantDatasetCombiner, and thread
the parameter through the combiner. Is this pretty? No. Is anything in
the combiner pretty? No.

CHANGELOG: the VDS combiner now accepts a `gvcf_save_filters` parameter
that saves the filters as the entry field `gvcf_filters` on both the
reference and variant data.

## Security Assessment
- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP
…5331)

Requester pays configuration is now serialised and set with the
ServiceBackendRPC payload instead of via the feature flags.

This change does not affect the broad-managed batch service in gcp.
Fixes stupid mistakes introduced in hail-is#14684. Found by claude.
This change does not affect the Broad-managed hail batch service in GCP.
## Change Description

Restores public IPs to worker VMs.

Adds a firewall validator to mitigate the loss of signal in "no public
IP" scans

## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a medium security impact

### Impact Description

Discussed with appsec. We're asserting that the actual technical impact
will be none, as long as the mitigating validator is scanning at least
as frequently as the "No public IP addresses" check in CIS that we are
no longer checking.

### Appsec Review

- [ ] Required: The impact has been assessed and approved by appsec
## Change Description

Much like the `test_nvidia_driver_accesibility_usage` GPU test (disabled
in hail-is#15115), the `test_over_64_cpus` test is causing us problems because
provisioning very large or niche resources is too liable to time out.
The upshot is flaky test suites that impede progress, failing on
functionality that isn't changed very often.

We _should_ work out a way to trigger these tests in particular
circumstances, but for now, mark this one like the GPU test and unblock
development.

## Security Assessment

- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP

(test only change)

- [x] Required: The impact has been assessed and approved by appsec
…-is#15348)

## Change Description

Fixes a bug in CI builds that allowed old zombie builds to be
resurrected as the current build batch:

- Build 1 succeeds
- Target sha moves, requires rebuild
- Build 2 fails 
- User clicks retry => build 2 invalidated
- Build 1 resurrected as the current build batch 
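
A hypothetical model of the fix: a build only counts as the current build if it matches the current target sha and has not been invalidated, so an older build can never be resurrected by a retry.

```python
def current_build(builds, target_sha):
    """Pick the current build batch (illustrative model, not CI's actual
    schema): candidates must match the current target sha and must not be
    invalidated; among those, the newest wins. If the only build for the
    current sha was invalidated, the answer is None (rebuild), never a
    stale build for an older sha."""
    candidates = [
        b for b in builds
        if b["target_sha"] == target_sha and not b["invalidated"]
    ]
    return max(candidates, key=lambda b: b["id"], default=None)
```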

As far as I can tell, this would be a temporary bad state until `_heal`
spots that build 1 has an out of date sha again, but still - it stops
the rebuild from beginning instantly, and showing all the old build
information for build 1 is super confusing behavior in this state. I
think this has also become way more noticeable now that we're properly
syncing github statuses with the CI "current build" state so in the
above example, Retry updates the PR to success even though it should
actually be pending.

## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a low security impact

### Impact Description

Logic fix in the CI "current build" detection

### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
…ail-is#15297)

## Change Description

Addresses the test failing in
https://batch.hail.is/batches/8362516/jobs/172
(`test_hail_python_service_backend_gcp_11`)

This test appears to be unusually susceptible to cluster contention
because (1) it's relatively heavyweight and (2) it has a relatively
short custom timeout (all tests have a global 10m timeout, this one
overrides it with 4m). In the batch above, I am suspicious that a
previous preempted attempt left just enough existing work in place that
the retry was just a little too slow to finish.

(It might be nicer to cancel previous attempts' jobs at the start of
preemption retries, but that's a much bigger change. Hopefully this will
reduce flakiness in the test suite with minimal upfront effort)

## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a low security impact


### Impact Description

Just a test change to hopefully reduce flakiness

### Appsec Review

- [ ] Required: The impact has been assessed and approved by appsec
## Change Description

Document the process of refreshing the ci token, and add the new token
to config

## Security Assessment

- This change could possibly impact the Hail Batch instance as deployed
by Broad Institute in GCP


### Security Impact

Low

### Description

Creates a record of the new CI token, and adds docs on how to refresh it
in the future

- [ ]  Approved by appsec
## Change Description

Updates our base ubuntu noble version.

## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a low security impact

### Impact Description

Standard version bump

### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
## Change Description

Fixes hail-is#15335.

## Security Assessment

- This change cannot impact the Hail Batch instance as deployed by Broad
Institute in GCP (helper client function not used in production
services)
## Change Description

Adds a test flakiness dashboard to make discovering and categorizing
tests which have been flaky recently easy.

Also adds in a react build framework, which will make adding richer UI
elements elsewhere in the UI easier too.

Note: this PR is approximately 1/3 package lock (which can be skipped
during review), 1/3 react build framework (which includes test data
generation in the devserver for local testing), and 1/3 flaky test
dashboard tsx

Screenshot using hard-coded devserver test data:

<img width="2529" height="1232" alt="image"
src="https://github.com/user-attachments/assets/fdce9f65-36d2-4e67-b916-0e08d2c38510"
/>


- Adds a react sources directory in `services/ui/`
- Adds tsx files to implement dashboard elements
- Adds makefile targets to build tsx locally and copy compiled js to ci
static
- Adds build.yaml jobs to build tsx for test and deploy and copy
compiled js to ci static
- Updates our devserver deploy script:
  - To work with CI
  - To support flaky_tests.html with no upstream input (no context, mock
    api data)
- Adds dev-doc explaining react, and the development process

## Security Assessment

- This change potentially impacts the Hail Batch instance as deployed by
Broad Institute in GCP

### Impact Rating

- This change has a medium security impact

### Impact Description

- Introduces new UI elements in the form of react components
- Adds new build steps to generate and copy the distributable react js
- Adds a new build/import mechanism via the react builds

### Appsec Review

- [x] Required: The impact has been assessed and approved by appsec
…is#15363)

This change follows @patrick-schultz's Claude investigation on
performance regressions. It had the following to say:

"""

**Excessive** **`TimeBlock`** **object allocation from** **`ctx.time`**
**wrapping** **`EmitStream.produce`**

`EmitStream.produce`
(`hail/src/main/scala/is/hail/expr/ir/streams/EmitStream.scala`, line
148) now wraps its entire body in `emitter.ctx.executeContext.time { ...
}`. This method is called recursively -- the inner `produce` helper
(line 323) calls `EmitStream.produce` for every stream-typed sub-IR
node. For a complex query with many stream operations, this could result
in hundreds or thousands of `TimeBlock` allocations during code
generation. Each call to `ExecutionTimer.time(name)(block)`
(`hail/src/main/scala/is/hail/utils/ExecutionTimer.scala`, line 91)
allocates a `TimeBlock` object with its own `mutable.ArrayBuffer`,
appends to the parent's children, pushes/pops the stack, and calls
`System.nanoTime()` twice. On some JVM/OS combinations, `nanoTime()` can
take 50-100ns per call. For a large IR tree with 1000+ stream nodes,
this adds roughly 0.1-0.2ms of overhead plus GC pressure from the
`TimeBlock` objects. This is compile-time-only overhead, not
data-processing overhead.

"""

In response, I've removed the timing around `EmitStream.produce` and
added coarser timing around `ExecuteRelational.apply` that does a lot of
work and does not recur. This is called significantly less frequently
and so should add minimal overhead while providing better context in the
timing stack.
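
A toy Python analogue of the allocation pattern described (the names are illustrative, not Hail's actual `ExecutionTimer` API): each `time` call allocates a node, appends it to the parent's children, and reads the clock twice, so wrapping a hot recursive routine multiplies that fixed cost by the call count.

```python
import time

class Timer:
    """Toy nested-timing tree, analogous in shape to an ExecutionTimer."""

    def __init__(self):
        self.root = {"name": "root", "children": []}
        self.stack = [self.root]

    def time(self, name, block):
        # Per-call fixed cost: one dict + one list allocation, an append,
        # a stack push/pop, and two clock reads.
        node = {"name": name, "children": []}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)
        start = time.perf_counter_ns()
        try:
            return block()
        finally:
            node["ns"] = time.perf_counter_ns() - start
            self.stack.pop()
```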

This change does not affect the broad-managed batch service in gcp.
Resolved a conflict in web_common/web_common/templates/header.html
by undoing our local removal of the Me & Namespaces menu items,
which didn't seem to be effective anyway.
The trivy-action workflow (albeit a more recent release than the one
pinned here) is currently undergoing a supply-chain compromise; since we
don't use it anyway, we should prevent it from being used at all.
We see intermittent transfer failures from this server.