
Re: Linking coreutils against OpenSSL



Hi,

On 11/10/23 21:07, Stephan Verbücheln wrote:

> In my opinion, this is yet another reason to use a proper cryptography
> library (openssl, gnutls or gcrypt) instead of a custom implementation
> for this kind of algorithm.

Yes and no. The reason several of our core tools bring their own implementations is specifically to keep the Essential package set small. Hence this thread: we need to weigh the benefits against the drawbacks here.

In coreutils' case, I think the benefits are limited by the small number of direct users: few people have use cases that require them to routinely verify checksums with the tools from coreutils[1].

The main benefit of this move is that container images will shrink, because libssl becomes part of the base layer and fewer copies of it will be kept in stacked layers. I would disregard this as a benefit; otherwise we could make the case that ever more packages should be Essential.

The actual drawbacks for users are minimal too:
 - systemd pulls it in anyway
 - apt will pull it in on the remaining systems

I don't quite see the appeal of OpenSSL as a dependency for apt either. I have 2 Gbps Internet at home and a laptop I bought in 2012, and apt is never CPU bound. I could see the benefit of gzip offloading into the kernel crypto engine; that would be noticeable to me and at least two other people.

We already have two other such libraries in the quasi-essential set: libcrypt, and the Linux kernel.

libcrypt:
 + already in the quasi-essential set (no extra space)
 - still slow

kernel:
 + No extra space needed
 + Support for offload engines that use privileged access
 - Invisible dependency

OpenSSL:
 + Handwritten optimized assembler functions for a set of architectures
 - Horrible code

The optimized assembler functions bring a massive speedup on amd64, which is what triggered this thread. The ARM NEON assembler code gives only a moderate speedup for hashing compared to autovectorized generic code; in general, vector units are the wrong tool for cryptographic hashes, so I'm not surprised the gain isn't an order of magnitude.
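For readers who want to see where their own machine stands, a quick check along these lines (my own illustration, not part of the original benchmark) shows whether the CPU advertises the relevant extensions and what OpenSSL's assembler path achieves:

```shell
# Illustrative check, not from the thread. On amd64 the big speedup
# comes from the SHA-NI extension; on ARM from the sha1/sha2 crypto
# extensions. /proc/cpuinfo shows whether the CPU advertises them.
grep -m1 -o -E 'sha_ni|sha1|sha2' /proc/cpuinfo \
    || echo "no SHA extensions advertised"

# OpenSSL's EVP interface picks its handwritten assembler when
# available, so 'openssl speed' gives an upper bound on what linking
# sha256sum against libssl could buy.
openssl speed -seconds 1 -evp sha256 2>/dev/null | tail -n 2
```

On a CPU without SHA-NI, the same command falls back to OpenSSL's generic (still hand-tuned) code, which is why the amd64 and ARM numbers in this thread differ so much.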

> Over time, when these libraries add support for cryptography
> acceleration instructions for more architectures, all programs will
> benefit from it.

Yes, but crypto acceleration in instruction form is difficult to implement on RISC architectures -- which is why these usually have separate DMA-capable devices instead, with work queues managed in the kernel.
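The kernel's view of what is available can be inspected directly. As an illustration (driver names vary per SoC; sha256-ce and vendor DMA engines are examples, not a claim about any particular board):

```shell
# Illustrative: list the sha256 implementations the running kernel
# knows about. A hardware driver (e.g. sha256-ce on ARM, or a vendor
# offload engine) appears here with a higher priority than
# sha256-generic, and the kernel crypto API selects the
# highest-priority implementation.
awk -v RS='' '/name *: sha256\n/' /proc/crypto \
    | grep -E '^(name|driver|priority)'
```

This is also what kcapi-dgst ends up using, which makes it a reasonable proxy for "what would an in-kernel offload path cost or save".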

> I would expect that many rich ARM SoCs for phones, laptops and servers
> already have something and that openssl supports it already. What
> device did you run your benchmark on?

I used a Zynq SoC, and just hashed a random file I had that fit into memory, running sha256sum and kcapi-dgst -c sha256 five times each. OpenSSL is a bit faster (going from 1 minute to 45 seconds), but in a real-world application I'd give this an offload engine on the FPGA side if it were a hot path, and ignore it if it were not.
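For anyone who wants to repeat the comparison, a sketch of the setup (the generated file and its 64 MiB size are placeholders standing in for my random test file; kcapi-dgst comes from libkcapi and may not be installed):

```shell
# Sketch of the comparison above. Pick a size that fits in RAM so the
# runs are CPU-bound rather than I/O-bound; the first run warms the
# page cache for the rest.
dd if=/dev/urandom of=/tmp/bench.dat bs=1M count=64 status=none

for i in 1 2 3 4 5; do
    time sha256sum /tmp/bench.dat
done

# The kernel-crypto path needs libkcapi; uncomment if installed:
# for i in 1 2 3 4 5; do time kcapi-dgst -c sha256 < /tmp/bench.dat; done

rm -f /tmp/bench.dat
```

Averaging the last four runs gives a reasonable CPU-bound number; the absolute times obviously depend entirely on the machine.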

In summary, I don't believe this change has any measurable effect beyond growing the Essential set and improving artificial benchmarks, so I'm pretty lukewarm about this.

   Simon

[1] "debootstrap george" is an outlier and should not have been counted
