336 Commits

DRC
3cea8ccd51 rdppm.c: Fix CMYK upconversion/downconversion
If the data precision of the PBMPLUS file does not match the target data
precision, then the grayscale or RGB samples are rescaled to the target
data precision.  Thus, we need to pass (1 << cinfo->data_precision) - 1
to rgb_to_cmyk() instead of maxval.  This commit also modifies
TJUnitTest so that it validates the correctness of upconversion and
downconversion in the PPM reader.

Fixes #841
2025-12-09 18:24:36 -05:00
DRC
fc4b417a90 TJCompressor.loadSourceImage: Unset unused buffers
If loading a 2-to-8-bit image, unset srcBuf12 and srcBuf16.  If loading
a 9-to-12-bit image, unset srcBuf8 and srcBuf16.  If loading a
13-to-16-bit image, unset srcBuf8 and srcBuf12.  Otherwise,
TJCompressor.getSourceBuf() will return srcBuf8 or srcBuf12, in order,
if any previous invocation of TJCompressor.loadSourceImage() set them,
irrespective of the buffer that was set by the most recent invocation.

This probably helps with garbage collection as well, since it signals to
the GC that the unused buffers are really unused.
2025-12-09 18:21:21 -05:00
DRC
fc324109d5 JNI: Guard against int overflow w/ huge X, Y vals
This is not a security vulnerability, since applications that pass such
values to the Java API would fail regardless, and such a bug would never
make it into the wild.  By contrast, a security vulnerability arises
from applications that work correctly with most input data sets but
trigger a library failure, such as a buffer overrun, with one or more
specific input data sets.  As with most imaging APIs, the libjpeg-turbo
APIs rely upon the calling application to pass appropriately-sized
buffers and appropriate size/dimension arguments.  The failure to do so
is no more the fault of libjpeg-turbo than calling
'buf = malloc(1); buf[2] = 0;' is the fault of the C library.

Buffer size checking is a bonus feature of the Java API that isn't (and
can't be) provided by the C API, so this commit merely hardens the bonus
feature against API abuse, in keeping with the Java paradigm of throwing
an exception rather than crashing due to a caller-imposed buffer
overrun.
2025-12-03 14:52:03 -05:00
DRC
dfb0cff989 ChangeLog.md: Formatting tweak 2025-12-03 14:14:02 -05:00
DRC
6d48aaacd2 TJ: Handle lossless/CS params w/ YUV enc/compress
- If TJPARAM_LOSSLESS was set, then tj3EncodeYUV*8() called
  jpeg_enable_lossless() (via setCompDefaults()), which caused the
  underlying libjpeg API to silently disable subsampling and color
  conversion.  This led to three issues:

  1. Attempting to encode RGB pixels produced incorrect YCbCr or
     grayscale components, since color conversion did not occur.  The
     same issue occurred if TJPARAM_COLORSPACE was explicitly set to
     TJCS_RGB.
  2. Attempting to encode RGB pixels into a grayscale plane caused
     tj3EncodeYUVPlanes8() to overflow the caller's destination pointer
     array if the array was not big enough to accommodate three
     pointers.  If called from tj3EncodeYUV8(), tj3EncodeYUVPlanes8()
     did not overflow the caller's destination pointer array, but a
     segfault occurred when it attempted to copy to the Cb and Cr
     pointers, which were NULL.  The same issue occurred if
     TJPARAM_COLORSPACE was explicitly set to anything other than
     TJCS_GRAY.
  3. Attempting to encode RGB pixels into subsampled YUV planes caused
     tj3EncodeYUV*8() to overflow the caller's buffer(s) if the
     buffer(s) were not big enough to accommodate 4:4:4 (non-subsampled)
     YUV planes.  That would have been the case if the caller allocated
     its buffer(s) based on the return value of tj3YUVBufSize() or
     tj3YUVPlaneSize().  The same issue occurs if TJPARAM_SUBSAMP is
     explicitly set to TJSAMP_444.

  tj3EncodeYUV*8() now ignores TJPARAM_LOSSLESS and TJPARAM_COLORSPACE.

- If TJPARAM_LOSSLESS was set, then attempting to compress a grayscale
  plane into a JPEG image caused tj3CompressFromYUVPlanes8() to overflow
  the caller's source pointer array if the array was not big enough to
  accommodate three pointers.  If called from tj3CompressFromYUV8(),
  tj3CompressFromYUVPlanes8() did not overflow the caller's source
  pointer array, but a segfault occurred when it attempted to copy from
  the Cb and Cr pointers, which were NULL.  This was similar to Issue 2
  above.  The same issue occurred if TJPARAM_COLORSPACE was explicitly
  set to anything other than TJCS_GRAY.

  tj3CompressFromYUV*8() now throws an error if TJPARAM_LOSSLESS is set,
  and it now ignores TJPARAM_COLORSPACE.

These issues did not pose a security risk, since security exploits
involve supported workflows that function normally except when supplied
with malformed input data.  It is documented that colorspace conversion,
chrominance subsampling, and compression from planar YUV images are
unavailable when TJPARAM_LOSSLESS is set.  When TJPARAM_LOSSLESS is set,
the library effectively sets TJPARAM_SUBSAMP to TJSAMP_444 and
TJPARAM_COLORSPACE to TJCS_RGB, TJCS_GRAY, or TJCS_CMYK, depending on
the pixel format of the source image.  That behavior is strongly implied
by the documentation of TJPARAM_LOSSLESS, although the documentation
isn't specific about whether TJPARAM_LOSSLESS applies to
tj3EncodeYUV*8().  In any case, setting TJPARAM_LOSSLESS before calling
tj3CompressFromYUV*8() was never a supported or functional workflow, and
setting TJPARAM_LOSSLESS before calling tj3EncodeYUV*8() was never a
functional workflow.  Thus, there should be no applications "in the
wild" that use either workflow.  Such applications would crash every
time they attempted to encode to or compress from a YUV image.  In other
words, setting TJPARAM_LOSSLESS or TJPARAM_COLORSPACE required the
caller to understand the ramifications of the loss of color conversion
and/or subsampling, and failing to do so was essentially API abuse
(albeit subtle API abuse, hence the desire to make the behavior more
intuitive.)

This commit also removes no-op code introduced by
6da05150ef.  Since setCompDefaults()
returns after calling jpeg_enable_lossless(), modifying the subsampling
level locally had no effect.  The libjpeg API already silently disables
subsampling in jinit_c_master_control() if lossless compression is
enabled, so it was not necessary for setCompDefaults() to handle that.

Fixes #839
2025-10-22 21:05:25 -04:00
DRC
1f3614f167 TJ: Guard against reused JPEG dst buf w/0 buf size
The libjpeg in-memory destination manager has always re-allocated the
JPEG destination buffer if the specified buffer pointer is NULL or the
specified buffer size is 0.  TurboJPEG's destination manager inherited
that behavior.  Because of fe80ec2275,
TurboJPEG's destination manager tries to reuse the most recent
destination buffer if the same buffer pointer is specified.  (The
purpose of that is to enable repeated invocations of tj*Compress*() or
tj*Transform() to automatically grow the destination buffer, as needed,
with no intervention from the calling program.)  However, because of the
inherited code, TurboJPEG's destination manager also reallocated the
destination buffer if the specified buffer size was 0.  Thus, passing a
previously-used JPEG destination buffer pointer to tj*Compress*() or
tj*Transform() while specifying a destination buffer size of 0 confused
the destination manager.  It reallocated the destination buffer to 4096
bytes but reported the old destination buffer size to the libjpeg API.
This caused a buffer overrun if the old destination buffer size was
larger than 4096 bytes.

The documentation for tj*Compress*() is contradictory on this matter.
It states that the JPEG destination buffer size must be specified if the
destination buffer pointer is non-NULL.  However, it also states that,
if the destination buffer is reused, the specified destination buffer
size is ignored.  The documentation for tj*Transform() does not specify
the function's behavior if the destination buffer is reused.  Thus, the
behavior of the API is at best undefined if a calling application
attempts to reuse a destination buffer while specifying a destination
buffer size of 0.  If that ever worked, it only worked in libjpeg-turbo
1.3.x and prior.

This issue was exposed only through API abuse, and calling applications
that abused the API in that manner would not have worked for the last 11
years.  Thus, the issue did not represent a security threat.  This
commit merely hardens the API against such abuse, by modifying
TurboJPEG's destination manager so that it refuses to re-allocate the
JPEG destination buffer if the buffer pointer is reused and the
specified buffer size is 0.  That is consistent with the most permissive
interpretation of the TurboJPEG API documentation.  (The API already
ignored the destination buffer size if the destination buffer pointer
was reused and the specified buffer size was non-zero.  It makes sense
for it to do likewise if the specified buffer size is 0.)  This commit
also modifies TJUnitTest so that it verifies whether the API is hardened
against the aforementioned abuse.
2025-10-08 12:28:54 -04:00
DRC
8af737edbe ChangeLog.md: List CVE ID for 2a9e3bd7 and c30b1e7 2025-09-23 08:26:20 -04:00
DRC
98c458381f Fix issues with Windows Arm64EC builds
Arm64EC basically wraps native Arm64 functions with an emulated
Windows/x64 ABI, which can improve performance for Windows/x64
applications running under the x64 emulator on Windows/Arm.  When
building for Arm64EC, the compiler defines _M_X64 and _M_ARM64EC but not
_M_ARM64.
2025-08-21 13:03:56 -04:00
DRC
f158143ec0 jpeg_skip_scanlines: Fix UAF w/merged upsamp/quant
jpeg_skip_scanlines() (more specifically, read_and_discard_scanlines())
should check whether merged upsampling is disabled before attempting
to dereference cinfo->cconvert, and it should check whether color
quantization is enabled before attempting to dereference
cinfo->cquantize.  Otherwise, executing one of the following sequences
with the same libjpeg API instance and any 4:2:0 or 4:2:2 JPEG image
will cause a use-after-free issue:

- Disable merged upsampling (default)
- jpeg_start_decompress()
- jpeg_finish_decompress()
  (frees but doesn't zero cinfo->cconvert)
- Enable merged upsampling
- jpeg_start_decompress()
  (doesn't re-allocate cinfo->cconvert, because
  j*init_color_deconverter() isn't called)
- jpeg_skip_scanlines()

- Enable 1-pass color quantization
- jpeg_start_decompress()
- jpeg_finish_decompress()
  (frees but doesn't zero cinfo->cquantize)
- Disable 1-pass color quantization
- jpeg_start_decompress()
  (doesn't re-allocate cinfo->cquantize, because j*init_*_quantizer()
  isn't called)
- jpeg_skip_scanlines()

These sequences are very unlikely to occur in a real-world application.
In practice, this issue does not even cause a segfault or other
user-visible errant behavior, so it is only detectable with ASan.  That
is because the memory region is small enough that it doesn't get
reclaimed by either the application or the O/S between the point at
which it is freed and the point at which it is used (even though a
subsequent malloc() call requests exactly the same amount of memory.)
Thus, this is an undefined behavior issue, but it is unlikely to be
exploitable.
2025-07-28 21:32:11 -04:00
DRC
51cee03629 Build: Use wrappers rather than CMake object libs
Some downstream projects need to adapt the libjpeg-turbo source code to
non-CMake build systems, and the use of CMake object libraries made that
difficult.  Referring to #754, the use of CMake object libraries also
caused the libjpeg-turbo libraries to contain duplicate object names,
which caused problems with certain development tools.  This commit
modifies the build system so that it uses wrappers, rather than CMake
object libraries, to compile source files for multiple data precisions.
For convenience, the wrappers are included in the source tree, but they
can be re-generated by building the "wrappers" target.

In addition to facilitating downstream integration, using wrappers
improves code readability, since multiple data precisions are now
handled at the source code level instead of at the build system level.

Since this will be pushed to a bug-fix release, the goal was to avoid
changing any existing source code.  A future major release of
libjpeg-turbo may restructure the libjpeg API source code so that only
the functions that need to be compiled for multiple data precisions are
wrapped.  (That is how the TurboJPEG API source code is structured.)

Closes #817
2025-06-13 17:27:57 -04:00
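The wrapper pattern described above can be sketched as follows (the file name is hypothetical; BITS_IN_JSAMPLE is the macro that selects the sample precision): each wrapper pins the precision and then includes the shared source file, so the same code is compiled once per precision without CMake object libraries.

```c
/* Hypothetical wrapper, e.g. jcsomething-12.c: select 12-bit data
   precision, then pull in the shared source file so it is compiled a
   second time with 12-bit sample types. */
#define BITS_IN_JSAMPLE 12
#include "jcsomething.c"
```

Because the wrappers are plain C files, non-CMake build systems can simply add them to their source lists.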
DRC
c889b1da56 TJBench: Require additional argument with -copy
(oversight from e4c67aff50)
2025-06-12 10:08:21 -04:00
DRC
e0e18dea54 Ensure methods called by global funcs are init'd
If a hypothetical calling application does something really stupid and
changes cinfo->data_precision after calling jpeg_start_*compress(), then
the precision-specific methods called by jpeg_write_scanlines(),
jpeg_write_raw_data(), jpeg_finish_compress(), jpeg_read_scanlines(),
jpeg_read_raw_data(), or jpeg_start_output() may not be initialized.

Ensure that the first precision-specific method (which will always be
cinfo->main->process_data*(), cinfo->coef->compress_data*(), or
cinfo->coef->decompress_data()) called by any global function that may
be called after jpeg_start_*compress() is initialized and non-NULL.
This increases the likelihood (but does not guarantee) that a
hypothetical stupid calling application will fail gracefully rather than
segfault if it changes cinfo->data_precision after calling
jpeg_start_*compress().  A hypothetical stupid calling application can
still bork itself by changing cinfo->data_precision after initializing
the source manager but before calling jpeg_start_compress(), or after
initializing the destination manager but before calling
jpeg_start_decompress().
2024-12-18 16:31:41 -05:00
DRC
c3446d64d7 Bump version to 3.1.0 2024-12-12 11:01:53 -05:00
DRC
6da05150ef Allow disabling prog/opt/lossless if prev. enabled
- Due to an oversight, a113506d17
  (libjpeg-turbo 1.4 beta1) effectively made the call to
  std_huff_tables() in jpeg_set_defaults() a no-op if the Huffman tables
  were previously defined, which made it impossible to disable Huffman
  table optimization or progressive mode if they were previously enabled
  in the same API instance.  std_huff_tables() retains its previous
  behavior for decompression instances, but it now force-enables the
  standard (baseline) Huffman tables for compression instances.

- Due to another oversight, there was no way to disable lossless mode
  if it was previously enabled in a particular API instance.
  jpeg_set_defaults() now accomplishes this, which makes
  TJ*PARAM_LOSSLESS behave as intended/documented.

- Due to yet another oversight, setCompDefaults() in the TurboJPEG API
  library permanently modified the value of TJ*PARAM_SUBSAMP when
  generating a lossless JPEG image, which affected subsequent lossy
  compression operations.  This issue was hidden by the issue above and
  thus does not need to be publicly documented.

Fixes #792
2024-10-24 18:02:13 -04:00
DRC
bfe77d319f ChangeLog: Document accidental fix from 9983840e
Closes #789
2024-09-23 15:49:41 -04:00
DRC
ad9c02c6f5 tj3Set(): Allow TJPARAM_LOSSLESSPT vals from 0..15
The target data precision isn't necessarily known at the time that the
calling program sets TJPARAM_LOSSLESSPT, so tj3Set() needs to allow all
possible values (from 0 to 15.)  jpeg_enable_lossless(), which is called
within the body of tj3Compress*(), will throw an error if the point
transform value is greater than {data precision} - 1.
2024-09-23 14:58:36 -04:00
DRC
9d76821f98 3.1 beta1 2024-09-14 16:00:53 -04:00
DRC
9b01f5a057 TJ: Add func/method for computing xformed buf size 2024-09-14 13:23:32 -04:00
DRC
a272858212 TurboJPEG: ICC profile support 2024-09-06 19:55:41 -04:00
DRC
c519d7b679 Don't ignore JPEG buf size with TJPARAM_NOREALLOC
Since the introduction of TJFLAG_NOREALLOC in libjpeg-turbo 1.2.x, the
TurboJPEG C API documentation has (confusingly) stated that:

- if the JPEG buffer pointer points to a pre-allocated buffer, then the
JPEG buffer size must be specified, and

- the JPEG buffer size should be specified if the JPEG buffer is
pre-allocated to an arbitrary size.

The documentation never explicitly stated that the JPEG buffer size
should be specified if the JPEG buffer is pre-allocated to a worst-case
size, but since focus does not imply exclusion, it also never explicitly
stated the reverse.  Furthermore, the documentation never stated that
this was contingent upon TJPARAM_NOREALLOC/TJFLAG_NOREALLOC.  However,
the compression and lossless transformation functions effectively
ignored the JPEG buffer size(s) passed to them and assumed that the
JPEG buffer(s) had been allocated to a worst-case size if
TJPARAM_NOREALLOC/TJFLAG_NOREALLOC was set.  This behavior was an
accidental and undocumented throwback to libjpeg-turbo 1.1.x, in which
the tjCompress() function provided no way to specify the JPEG buffer
size.  It was always a bad idea for applications to rely upon that
behavior (although our own TJBench application unfortunately did.)
However, if such applications exist in the wild, the new behavior would
constitute a breaking change, so it has been introduced only into
libjpeg-turbo 3.1.x and only into TurboJPEG 3 API functions.  The
previous behavior has been retained when calling functions from the
TurboJPEG 2.1.x API and prior versions.

Did I mention that APIs are hard?
2024-09-06 19:55:27 -04:00
DRC
e4c67aff50 TJBench: More argument consistification
-copynone --> -copy none

Add '-copy all', even though it's the default.

-rgb, -bgr, -rgbx, -bgrx, -xbgr, -xrgb, -gray, -cmyk -->
-pixelformat {rgb|bgr|rgbx|bgrx|xbgr|xrgb|gray|cmyk}
(This is mainly so -gray won't interfere with -grayscale.)

Fix an ArrayIndexOutOfBoundsException that occurred when passing -dct
to the Java version without specifying the DCT algorithm (oversight from
24fbf64d31a0758c63bcc27cf5d92fc5611717d0.)
2024-09-04 12:41:15 -04:00
DRC
d43ed7a1ff Merge branch 'main' into dev 2024-09-04 08:38:13 -04:00
DRC
e7e9344db1 TJ: Honor TJ*OPT_COPYNONE for individual xforms
jcopy_markers_execute() has historically ignored its option argument,
which is OK for jpegtran, but tj*Transform() needs to be able to save a
set of markers from the source image and write a subset of those markers
to each destination image.  Without that ability, the function
effectively behaved as if TJ*OPT_COPYNONE was not specified unless all
transforms specified it.
2024-09-04 07:34:42 -04:00
DRC
37851a32c0 TurboJPEG: Add restart markers when transforming 2024-09-03 09:26:33 -04:00
DRC
f464728a2b ChangeLog.md: Minor wordsmithing 2024-09-02 08:00:27 -04:00
DRC
fad6100704 Replace TJExample with IJG workalike programs 2024-09-01 14:05:15 -04:00
DRC
3e303e7235 TJBench: Allow British spellings in arguments 2024-08-31 18:42:19 -04:00
DRC
9b1198968b Move test scripts into test/ 2024-08-31 18:07:13 -04:00
DRC
645673810f Merge branch 'main' into dev 2024-08-31 17:41:03 -04:00
DRC
eb75363004 Update URLs
- Eliminate unnecessary "www."
- Use HTTPS.
- Update Java, MSYS, tdm-gcc, and NSIS URLs.
- Update URL and title of Agner Fog's assembly language optimization
  manual.
- Remove extraneous information about MASM and Borland Turbo Assembler
  and outdated NASM URLs from the x86 assembly headers, and mention
  Yasm.
2024-08-31 16:50:08 -04:00
DRC
8d76e4e550 Doc: "EXIF" = "Exif" 2024-08-31 15:33:55 -04:00
DRC
9983840eb6 TJ/xform: Check crop region against dest. image
Lossless cropping is performed after other lossless transform
operations, so the cropping region must be specified relative to the
destination image dimensions and level of chrominance subsampling, not
the source image dimensions and level of chrominance subsampling.

More specifically, if the lossless transform operation swaps the X and Y
axes, or if the image is converted to grayscale, then that changes the
cropping region requirements.
2024-08-31 15:04:30 -04:00
DRC
8456d2b98c Doc: "MCU block" = "iMCU" or "MCU"
The JPEG-1 spec never uses the term "MCU block".  That term is rarely
used in other literature to describe the equivalent of an MCU in an
interleaved JPEG image, but the libjpeg documentation uses "iMCU" to
describe the same thing.  "iMCU" is a better term, since the equivalent
of an interleaved MCU can contain multiple DCT blocks (or samples in
lossless mode) that are only grouped together if the image is
interleaved.

In the case of restart markers, "MCU block" was used in the libjpeg
documentation instead of "MCU", but "MCU" is more accurate and less
confusing.  (The restart interval is literally in MCUs, where one MCU
is one data unit in a non-interleaved JPEG image and multiple data units
in a multi-component interleaved JPEG image.)

In the case of 9b704f96b2, the issue was
actually with progressive JPEG images exactly two DCT blocks wide, not
two MCU blocks wide.

This commit also defines "MCU" and "MCU row" in the description of the
various restart marker options/parameters.  Although an MCU row is
technically always a row of samples in lossless mode, "sample row" was
confusing, since it is used in other places to describe a row of samples
for a single component (whereas an MCU row in a typical lossless JPEG
image consists of a row of interleaved samples for all components.)
2024-08-30 14:16:09 -04:00
DRC
6a9565ce6e Merge branch 'main' into dev 2024-08-26 16:45:41 -04:00
DRC
4851cbe406 djpeg/jpeg_crop_scanline(): Disallow crop vals < 0
Because the crop spec was parsed using unsigned 32-bit integers,
negative numbers were interpreted as values ~= UINT_MAX (4,294,967,295).
This had the following ramifications:

- If the cropping region width was negative and the adjusted width + the
  adjusted left boundary was greater than 0, then the 32-bit unsigned
  integer bounds checks in djpeg and jpeg_crop_scanline() overflowed and
  failed to detect the out-of-bounds width, jpeg_crop_scanline() set
  cinfo->output_width to a value ~= UINT_MAX, and a buffer overrun and
  subsequent segfault occurred in the upsampling or color conversion
  routine.  The segfault occurred in the body of
  jpeg_skip_scanlines() --> read_and_discard_scanlines() if the cropping
  region upper boundary was greater than 0 and the JPEG image used
  chrominance subsampling and in the body of jpeg_read_scanlines()
  otherwise.

- If the cropping region width was negative and the adjusted width + the
  adjusted left boundary was 0, then a zero-width output image was
  generated.

- If the cropping region left boundary was negative, then an output
  image with bogus data was generated.

This commit modifies djpeg and jpeg_crop_scanline() so that the
aforementioned bounds checks use 64-bit unsigned integers, thus guarding
against overflow.  It similarly modifies jpeg_skip_scanlines().  In the
case of jpeg_skip_scanlines(), the issue was not reproducible with
djpeg, but passing a negative number of lines to jpeg_skip_scanlines()
caused a similar overflow if the number of lines +
cinfo->output_scanline was greater than 0.  That caused
jpeg_skip_scanlines() to read past the end of the JPEG image, throw a
warning ("Corrupt JPEG data: premature end of data segment"), and fail
to return unless warnings were treated as fatal.  Also, djpeg now parses
the crop spec using signed integers and checks for negative values.
2024-08-26 16:24:33 -04:00
DRC
de4bbac55e TJCompressor.compress(): Fix lossls buf size calc 2024-08-23 12:48:34 -04:00
DRC
79b8d65f0f Java: Add official packed-pixel image I/O methods 2024-08-22 18:19:09 -04:00
DRC
e2932b68ac ChangeLog.md: Formatting tweak
(oversight from d6ce7df352)
2024-08-22 17:14:45 -04:00
DRC
24fbf64d31 TJBench: Consistify args with djpeg
-fastdct --> -dct fast
-fastupsample --> -nosmooth
2024-08-21 13:07:46 -04:00
DRC
d6ce7df352 TJBench: Consistify args with cjpeg/djpeg/jpegtran
-hflip --> -flip horizontal
-limitscans --> -maxscans N
-rot90 --> -rotate 90
-rot180 --> -rotate 180
-rot270 --> -rotate 270
-stoponwarning --> -strict
-vflip --> -flip vertical
2024-08-21 13:07:46 -04:00
DRC
0737844235 Java: Remove deprecated constants and methods 2024-08-21 13:07:42 -04:00
DRC
26d978b661 Merge branch 'main' into dev 2024-08-16 11:58:02 -04:00
DRC
0c23b0ad60 Various doc tweaks
- "Optimized baseline entropy coding" = "Huffman table optimization"

  "Optimized baseline entropy coding" was meant to emphasize that the
  feature is only useful when generating baseline (single-scan lossy
  8-bit-per-sample Huffman-coded) JPEG images, because it is
  automatically enabled when generating Huffman-coded progressive
  (multi-scan), 12-bit-per-sample, and lossless JPEG images.  However,
  Huffman table optimization isn't actually an integral part of those
  non-baseline modes.  You can forego Huffman table optimization with
  12-bit data precision if you supply your own Huffman tables.  The spec
  doesn't require it with progressive or lossless mode, either, although
  our implementation does.  Furthermore, "baseline" describes more than
  just the type of entropy coding used.  It was incorrect to say that
  optimized "baseline" entropy coding is automatically enabled for
  Huffman-coded progressive, 12-bit-per-sample, and lossless JPEG
  images, since those are clearly not baseline images.

- "Progressive entropy coding" = "Progressive JPEG"

  "Progressive" describes more than just the type of entropy coding
  used.  (In fact, both Huffman-coded and arithmetic-coded images can be
  progressive.)

- Mention that TJPARAM_OPTIMIZE/TJ.PARAM_OPTIMIZE can be used with
  lossless transformation as well.

- General wordsmithing

- Formatting tweaks
2024-08-16 11:49:00 -04:00
DRC
6ec8e41f50 Handle lossless JPEG images w/2-15 bits per sample
Closes #768
Closes #769
2024-06-24 23:14:04 -04:00
DRC
3290711d9c cjpeg: Only support 8-bit precision w/ GIF input
Creating 12-bit-per-sample JPEG images from GIF input images was a
useful testing feature when the data precision was a compile-time
setting.  However, now that the data precision is a runtime setting,
it doesn't make sense for cjpeg to allow data precisions other than
8-bit with GIF input images.  GIF images are limited to 256 colors from
a palette of 8-bit-per-component RGB values, so they cannot take
advantage of the additional gamut afforded by higher data precisions.
2024-06-24 22:17:26 -04:00
DRC
ed79114acb TJBench: Test end-to-end grayscale comp./decomp.
Because the TurboJPEG API originated in VirtualGL and TurboVNC as a
means of compressing from/decompressing to extended RGB framebuffers,
its earliest incarnations did not handle grayscale packed-pixel images.
Thus, TJBench has always converted the input image (even if it is
grayscale) to an extended RGB source buffer prior to compression, and it
has always decompressed JPEG images (even if they are grayscale) into an
extended RGB destination buffer.  That allows TJBench to benchmark the
RGB-to-grayscale and grayscale-to-RGB color conversion paths used by
VirtualGL and TurboVNC when grayscale subsampling (AKA the grayscale
JPEG colorspace) is selected.  However, more recent versions of the
TurboJPEG API handle grayscale packed-pixel images, so it is beneficial
to allow TJBench to benchmark the end-to-end grayscale compression and
decompression paths.  This commit accomplishes that by adding a new
command-line option (-gray) that causes TJBench to use a grayscale
source buffer (which only works if the input image is PGM or grayscale
BMP), to decompress JPEG images (even if they are full-color) into a
grayscale destination buffer, and to save output images in PGM or
grayscale BMP format.
2024-06-24 22:17:26 -04:00
DRC
55bcad88e1 Merge branch 'main' into dev 2024-06-24 22:16:07 -04:00
DRC
51d021bf01 TurboJPEG: Fix 12-bit-per-sample arith-coded compr
(Regression introduced by 7bb958b732)

Because of 7bb958b732, the TurboJPEG
compression and encoding functions no longer transfer the value of
TJPARAM_OPTIMIZE into cinfo->optimize_coding unless the data precision
is 8.  The intent of that was to prevent using_std_huff_tables() from
being called more than once when reusing the same compressor object to
generate multiple 12-bit-per-sample JPEG images.  However, because
cinfo->optimize_coding is always set to TRUE by jpeg_set_defaults() if
the data precision is 12, calling applications that use 12-bit data
precision had to unset cinfo->optimize_coding if they set
cinfo->arith_code after calling jpeg_set_defaults().  Because of
7bb958b732, the TurboJPEG API stopped
doing that except with 8-bit data precision.  Thus, attempting to
generate a 12-bit-per-sample arithmetic-coded lossy JPEG image using
the TurboJPEG API failed with "Requested features are incompatible."

Since the compressor will always fail if cinfo->arith_code and
cinfo->optimize_coding are both set, and since cinfo->optimize_coding
has no relevance for arithmetic coding, the most robust and user-proof
solution is for jinit_c_master_control() to set cinfo->optimize_coding
to FALSE if cinfo->arith_code is TRUE.

This commit also:
- modifies TJBench so that it no longer reports that it is using
  optimized baseline entropy coding in modes where that setting
  is irrelevant,
- amends the cjpeg documentation to clarify that -optimize is implied
  when specifying -progressive or '-precision 12' without -arithmetic,
  and
- prevents jpeg_set_defaults() from uselessly checking the value of
  cinfo->arith_code immediately after it has been set to FALSE.
2024-06-24 22:15:55 -04:00
DRC
bb3ab53157 cjpeg: Don't enable lossless until precision known
jpeg_enable_lossless() checks the point transform value against the data
precision, so we need to defer calling jpeg_enable_lossless() until
after all command-line options have been parsed.
2024-06-24 22:15:55 -04:00
DRC
94c64ead85 Various doc tweaks
- "bits per component" = "bits per sample"

  Describing the data precision of a JPEG image using "bits per
  component" is technically correct, but "bits per sample" is the
  terminology that the JPEG-1 spec uses.  Also, "bits per component" is
  more commonly used to describe the precision of packed-pixel formats
  (as opposed to "bits per pixel") rather than planar formats, in which
  all components are grouped together.

- Unmention legacy display technologies.  Colormapped and monochrome
  displays aren't a thing anymore, and even when they were still a
  thing, it was possible to display full-color images to them.  In 1991,
  when JPEG decompression time was measured in minutes per megapixel, it
  made sense to keep a decompressed copy of JPEG images on disk, in a
  format that could be displayed without further color conversion (since
  color conversion was slow and memory-intensive.)  In 2024, JPEG
  decompression time is measured in milliseconds per megapixel, and
  color conversion is even faster.  Thus, JPEG images can be
  decompressed, displayed, and color-converted (if necessary) "on the
  fly" at speeds too fast for human vision to perceive.  (In fact, your
  TV performs much more complicated decompression algorithms at least 60
  times per second.)

- Document that color quantization (and associated features), GIF
  input/output, Targa input/output, and OS/2 BMP input/output are legacy
  features.  Legacy status doesn't necessarily mean that the features
  are deprecated.  Rather, it is meant to discourage users from using
  features that may be of little or no benefit on modern machines (such
  as low-quality modes that had significant performance advantages in
  the early 1990s but no longer do) and that are maintained on a
  break/fix basis only.

- General wordsmithing, grammar/punctuation policing, and formatting
  tweaks

- Clarify which data precisions each cjpeg input format and each djpeg
  output format supports.

- cjpeg.1: Remove unnecessary and impolitic statement about the -targa
  switch.

- Adjust or remove performance claims to reflect the fact that:
  * On modern machines, the djpeg "-fast" switch has a negligible effect
    on performance.
  * There is a measurable difference between the performance of Floyd-
    Steinberg dithering and no dithering, but it is not likely
    perceptible to most users.
  * There is a measurable difference between the performance of 1-pass
    and 2-pass color quantization, but it is not likely perceptible to
    most users.
  * There is a measurable difference between the performance of
    full-color and grayscale output when decompressing a full-color JPEG
    image, but it is not likely perceptible to most users.
  * IDCT scaling does not necessarily improve performance.  (It
    generally does if the scaling factor is <= 1/2 and generally doesn't
    if the scaling factor is > 1/2, at least on my machine.  The
    performance claim made in jpeg-6b was probably invalidated when we
    merged the additional scaling factors from jpeg-7.)

- Clarify which djpeg switches/output formats cannot be used when
  decompressing lossless JPEG images.

- Remove djpeg hints, since those involve quality vs. speed tradeoffs
  that are no longer relevant for modern machines.

- Remove documentation regarding using color quantization with 16-bit
  data precision.  (Color quantization requires lossy mode.)

- Java: Fix typos in TJDecompressor.decompress12() and
  TJDecompressor.decompress16() documentation.

- jpegtran.1: Fix truncated paragraph

  In a man page, a single quote at the start of a line is interpreted as
  a macro.

  Closes #775

- libjpeg.txt:
  * Mention J16SAMPLE data type (oversight.)
  * Remove statement about extending jdcolor.c.  (libjpeg-turbo is not
    quite as DIY as libjpeg once was.)
  * Remove paragraph about tweaking the various typedefs in jmorecfg.h.
    It is no longer relevant for modern machines.
  * Remove caveat regarding systems with ints less than 16 bits wide.
    (ANSI/ISO C requires an int to be at least 16 bits wide, and
    libjpeg-turbo has never supported non-ANSI compilers.)

- usage.txt:
  * Add copyright header.
  * Document cjpeg -icc, -memdst, -report, -strict, and -version
    switches.
  * Document djpeg -icc, -maxscans, -memsrc, -report, -skip, -crop,
    -strict, and -version switches.
  * Document jpegtran -icc, -maxscans, -report, -strict, and -version
    switches.
2024-06-24 22:11:43 -04:00