There are already some *.inc files in the repository, mostly in demos/tests
and related to some algorithm implementations. The introduction
of array_alloc.inc has made including these files in the tags generation
even more pertinent, so they are included now.
Also, this commit explicitly marks *.h files as containing C code,
overriding the universal-ctags default of interpreting them as
C++/Objective-C ones.
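One plausible way to express such mappings is via a ctags options file using
the standard --langmap mechanism (the exact flags used by the commit may
differ):

```
# Treat .h headers as C rather than the default C++/Objective-C guess,
# and include .inc files in C tag generation.
--langmap=C:+.h
--langmap=C:+.inc
```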
Suggested-by: Neil Horman <nhorman@openssl.org>
Signed-off-by: Eugene Syromiatnikov <esyr@openssl.org>
Reviewed-by: Saša Nedvědický <sashan@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
Reviewed-by: Paul Dale <ppzgs1@gmail.com>
Reviewed-by: Neil Horman <nhorman@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/28059)
Such routines alleviate the need to perform explicit integer
overflow checks during allocation size calculation and generally make
the allocations more semantic (as they signify that a collection
of NUM items, each occupying SIZE bytes, is being allocated), which paves
the road for additional correctness checks in the future.
Signed-off-by: Eugene Syromiatnikov <esyr@openssl.org>
Reviewed-by: Saša Nedvědický <sashan@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
Reviewed-by: Paul Dale <ppzgs1@gmail.com>
Reviewed-by: Neil Horman <nhorman@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/28059)
Recently, our overnight QUIC interop runs began failing in CI when an
openssl server was tested against an ngtcp2 client:
https://github.com/openssl/openssl/actions/runs/16739736813
The underlying cause bears some explanation for historical purposes.
The problem began happening with a recent update to ngtcp2, in which
ngtcp2 updated its wolfSSL TLS backend to support ML-KEM. This caused
ngtcp2 to emit a ClientHello message that offered several groups
(including X25519MLKEM768) but only provided a keyshare for x25519.
This in turn triggered the openssl server to respond with a hello retry
request (HRR), requesting an ML-KEM keyshare instead, which ngtcp2
obliged. However, all subsequent frames from the client were discarded by
the server due to failing packet body decryption.
The problem was tracked down to a mismatch in the initial vectors used
by the client and server, leading to an AEAD tag mismatch.
In QUIC, packet protection keys generate their IVs by XORing the packet
number of the received frame with the base IV as derived via HKDF at the
TLS layer.
The underlying problem was that openssl hit a very odd corner case with
how we compute the packet number of the received frame. To save space,
QUIC encodes packet numbers using a variable length integer, and only
sends the changed bits in the packet number. This requires that the
receiver (openssl) store the largest received pn of the connection,
which we nominally do.
However, in port_default_packet_handler (where QUIC frames are processed
prior to having an established channel allocated) we use a temporary qrx
to validate the packet protection of those frames. This temporary qrx
may be incorporated into the channel in some cases, but is not in the
case of a valid frame that generates an HRR at the TLS layer. In this
case, the channel allocates its own qrx independently. When this
occurs, the largest_pn value of the temporary qrx is lost, and
subsequent frames cannot be received, as the newly allocated qrx
believes that the largest_pn for a given pn_space is 0, rather than the
value received in the initial frame (which was a complete 32-bit value,
rather than just the changed lower 8 bits). As a result, the IV
construction produced the wrong value, and the decrypt failed on those
subsequent frames.
Up to this point, that wasn't even a problem, as most QUIC
implementations start their packet numbering at 0, so the next packet
could still have its packet number computed properly. The combination
of ngtcp2 using large random values for initial packet numbers, along
with the HRR triggering a separate qrx creation on a channel, led to the
discovery of this discrepancy.
The fix seems pretty straightforward. When we detect in
port_default_packet_handler that we have a separate qrx in the new
channel, we migrate processed packets from the temporary qrx to the
canonical channel qrx. In addition to doing that, we also need to
migrate the largest_pn array from the temporary qrx to the channel_qrx
so that subsequent frame reception is guaranteed to compute the received
frame packet number properly, and as such, compute the proper IV for
packet protection decryption.
Fixes: openssl/project#1296
Reviewed-by: Saša Nedvědický <sashan@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/28115)
This eliminates locking while writing out the lock contention report
data, which claws back some of the performance degradation imposed
by the lock contention reporting instrumentation:
[Without -DREPORT_RWLOCK_CONTENTION]
~/dev/perftools/source$ ./evp_fetch 100
Average time per fetch call: 4.502162us
~/dev/perftools/source$ ./evp_fetch 200
Average time per fetch call: 8.224920us
[Before]
~/dev/perftools/source$ ./evp_fetch 100
Average time per fetch call: 13.079795us
~/dev/perftools/source$ ./evp_fetch 200
Average time per fetch call: 23.420235us
[After]
~/dev/perftools/source$ ./evp_fetch 100
Average time per fetch call: 6.557428us
~/dev/perftools/source$ ./evp_fetch 200
Average time per fetch call: 13.415148us
The downside is that it produces a file for each TID, which floods
the working directory with debug files, but that might be an acceptable
trade-off.
Reviewed-by: Neil Horman <nhorman@openssl.org>
Reviewed-by: Saša Nedvědický <sashan@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/27983)
It also drops the premature initialisation of it in
ossl_init_rwlock_contention_data(), deferring it to an on-demand one
in ossl_rwlock_{rd,wr}lock(), which seems to shave off some of the
incurred overhead:
[Before]
~/dev/perftools/source$ ./evp_fetch 100
Average time per fetch call: 16.944004us
~/dev/perftools/source$ ./evp_fetch 200
Average time per fetch call: 26.325767us
[After]
~/dev/perftools/source$ ./evp_fetch 100
Average time per fetch call: 13.079795us
~/dev/perftools/source$ ./evp_fetch 200
Average time per fetch call: 23.420235us
Signed-off-by: Eugene Syromiatnikov <esyr@openssl.org>
Reviewed-by: Neil Horman <nhorman@openssl.org>
Reviewed-by: Saša Nedvědický <sashan@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/27983)
Nightly, run the memory allocation failure test.
We build with asan enabled to log memory leaks and other issues.
Note the job is designed to pass even if the test fails, as currently
(perhaps not surprisingly) several error paths result in asan errors.
Reviewed-by: Matt Caswell <matt@openssl.org>
Reviewed-by: Paul Dale <ppzgs1@gmail.com>
(Merged from https://github.com/openssl/openssl/pull/28078)
We would like to be able to test our memory failure paths by forcing
malloc to return NULL at certain points in time.
This test does that, by running a specific workload N+1 times. In this
case the workload is a simple SSL handshake.
We run 1 test which sets our malloc wrapper into record mode, in which
it just acts as a pass through to the system malloc call and records the
number of times it was called.
Then we run a second test, which does the same handshake N times, where
N is the number of times malloc was called in the previous test. For
each iteration i in 0..N-1, we fail the i-th malloc operation.
We don't check for functional failures in the second test (as we expect
failures), we just want to make sure that (a) we don't crash and (b)
asan doesn't report any errors.
Currently, we get _lots_ of asan failures, but we can use this test to
log issues for that and fix those up.
Reviewed-by: Matt Caswell <matt@openssl.org>
Reviewed-by: Paul Dale <ppzgs1@gmail.com>
(Merged from https://github.com/openssl/openssl/pull/28078)