path: root/cipher/cipher-gcm.c
2016-03-23  cipher: Avoid NULL-segv in GCM mode if a key has not been set.  (Werner Koch; 1 file changed, -4/+16)

* cipher/cipher-gcm.c (_gcry_cipher_gcm_encrypt): Check that GHASH_FN has been initialized.
(_gcry_cipher_gcm_decrypt): Ditto.
(_gcry_cipher_gcm_authenticate): Ditto.
(_gcry_cipher_gcm_initiv): Ditto.
(_gcry_cipher_gcm_tag): Ditto.
--
Avoid a crash if certain functions are used before setkey.

Reported-by: Peter Wu <peter@lekensteyn.nl>

  One crash is not fixed, that is the crash when setkey is not invoked
  before using the GCM ciphers (introduced in the 1.7.0 cycle).  Either
  these functions should check that the key is present, or they should
  initialize the ghash table earlier.  Affected functions:

    _gcry_cipher_gcm_encrypt
    _gcry_cipher_gcm_decrypt
    _gcry_cipher_gcm_authenticate
    _gcry_cipher_gcm_initiv (via _gcry_cipher_gcm_setiv)
    _gcry_cipher_gcm_tag (via _gcry_cipher_gcm_get_tag, _gcry_cipher_gcm_check_tag)

Regression-due-to: 4a0795af021305f9240f23626a3796157db46bd7
Signed-off-by: Werner Koch <wk@gnupg.org>

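A minimal sketch of the defensive check this fix adds: the GCM state keeps the GHASH implementation as a function pointer that setkey fills in, so each entry point bails out if it is still NULL.  The struct, field, and function names below are simplified stand-ins, not the actual internals; the error code follows libgpg-error.

  #include <stddef.h>
  #include <gpg-error.h>

  typedef unsigned int (*ghash_fn_t) (void *ctx, unsigned char *result,
                                      const unsigned char *buf, size_t nblocks);

  typedef struct
  {
    ghash_fn_t ghash_fn;   /* NULL until setkey has run.  */
    /* ... other GCM state ... */
  } gcm_ctx_t;

  /* Bail out early instead of dereferencing a NULL function pointer
     when no key has been set yet.  */
  static gpg_err_code_t
  gcm_encrypt_guarded (gcm_ctx_t *ctx)
  {
    if (!ctx->ghash_fn)
      return GPG_ERR_INV_STATE;
    /* ... proceed with CTR encryption and GHASH ... */
    return 0;
  }
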
2016-03-23  cipher: Check length of supplied tag in _gcry_cipher_gcm_check_tag.  (Werner Koch; 1 file changed, -3/+8)

* cipher/cipher-gcm.c (_gcry_cipher_gcm_tag): Check that the provided tag length matches the actual tag length.  Avoid gratuitous return statements.
--
Signed-off-by: Werner Koch <wk@gnupg.org>

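A hedged sketch of the check-tag direction: a supplied tag whose length cannot match is rejected before the constant-time comparison.  The 16-byte tag size, the ct_memequal helper, and the exact error codes are illustrative; libgcrypt's internal buf_eq_const plays the comparison role in the real code.

  #include <stddef.h>
  #include <gpg-error.h>

  #define GCM_TAG_LEN 16   /* GCRY_GCM_BLOCK_LEN in the real code */

  /* Constant-time comparison stand-in for buf_eq_const.  */
  static int
  ct_memequal (const unsigned char *a, const unsigned char *b, size_t len)
  {
    unsigned char diff = 0;
    size_t i;
    for (i = 0; i < len; i++)
      diff |= a[i] ^ b[i];
    return diff == 0;
  }

  static gpg_err_code_t
  gcm_check_tag (const unsigned char *computed_tag,
                 const unsigned char *supplied_tag, size_t supplied_len)
  {
    if (supplied_len != GCM_TAG_LEN)
      return GPG_ERR_CHECKSUM;   /* length mismatch treated as failed check */
    if (!ct_memequal (computed_tag, supplied_tag, GCM_TAG_LEN))
      return GPG_ERR_CHECKSUM;   /* tag mismatch */
    return 0;
  }
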
2016-03-23  Fix buffer overrun in gettag for GCM  (Peter Wu; 1 file changed, -2/+2)

* cipher/cipher-gcm.c: Copy a fixed length instead of the user-supplied number.
--
The outbuflen is only used to check the minimum size; the real tag is always of fixed length.

Signed-off-by: Peter Wu <peter@lekensteyn.nl>

Actually this is not a buffer overrun because we copy not more than has been allocated for OUTBUF.  However, a too long OUTBUFLEN accesses data outside of the source buffer. -wk

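A sketch of the get-tag direction after this fix: copy exactly the fixed tag size and only use the caller's length to check that the output buffer is large enough.  The function name, signature, and error code are illustrative, not the actual internal interface.

  #include <string.h>
  #include <gpg-error.h>

  #define GCM_TAG_LEN 16

  static gpg_err_code_t
  gcm_get_tag (unsigned char *outbuf, size_t outbuflen,
               const unsigned char *tag)
  {
    if (outbuflen < GCM_TAG_LEN)
      return GPG_ERR_BUFFER_TOO_SHORT;
    /* Copy the fixed tag length, never the caller-supplied count, so a
       large outbuflen cannot read past the 16-byte source tag.  */
    memcpy (outbuf, tag, GCM_TAG_LEN);
    return 0;
  }
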
2016-03-18  Always require a 64 bit integer type  (Werner Koch; 1 file changed, -2/+2)

* configure.ac (available_digests_64): Merge with available_digests.
(available_kdfs_64): Merge with available_kdfs.
<64 bit datatype test>: Bail out if no such type is available.
* src/types.h: Emit #error if no u64 can be defined.
(PROPERLY_ALIGNED_TYPE): Always add u64 type.
* cipher/bithelp.h: Remove all code paths which handle the case of !HAVE_U64_TYPEDEF.
* cipher/bufhelp.h: Ditto.
* cipher/cipher-ccm.c: Ditto.
* cipher/cipher-gcm.c: Ditto.
* cipher/cipher-internal.h: Ditto.
* cipher/cipher.c: Ditto.
* cipher/hash-common.h: Ditto.
* cipher/md.c: Ditto.
* cipher/poly1305.c: Ditto.
* cipher/scrypt.c: Ditto.
* cipher/tiger.c: Ditto.
* src/g10lib.h: Ditto.
* tests/basic.c: Ditto.
* tests/bench-slope.c: Ditto.
* tests/benchmark.c: Ditto.
--
Given that SHA-2 and some other algorithms require a 64 bit type, it no longer makes sense to conditionally compile some parts when the platform does not provide such a type.

GnuPG-bug-id: 1815.
Signed-off-by: Werner Koch <wk@gnupg.org>

2015-07-26  Fix undefined behavior wrt memcpy  (Peter Wu; 1 file changed, -1/+1)

* cipher/cipher-gcm.c: Do not copy zero bytes from an empty buffer.  Let the function continue to add padding as needed though.
* cipher/mac-poly1305.c: If the caller requested to finish the hash function without a copy of the result, return immediately.
--
Caught by UndefinedBehaviorSanitizer.

Signed-off-by: Peter Wu <peter@lekensteyn.nl>

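A small sketch of the idea behind this fix: memcpy with a null or otherwise invalid source pointer is undefined behavior even when the length is zero, so the copy is skipped when nothing is buffered while the zero-padding still happens.  The helper below is illustrative, not the actual code.

  #include <string.h>

  #define GCM_BLOCK_LEN 16

  static void
  ghash_pad_tail (unsigned char block[GCM_BLOCK_LEN],
                  const unsigned char *buffered, size_t buffered_len)
  {
    if (buffered_len > 0)               /* avoid memcpy(..., NULL, 0) UB */
      memcpy (block, buffered, buffered_len);
    /* Zero-pad the remainder of the block as before.  */
    memset (block + buffered_len, 0, GCM_BLOCK_LEN - buffered_len);
  }
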
2014-12-23  gcm: do not pass extra key pointer for setupM/fillM  (Jussi Kivilinna; 1 file changed, -7/+8)

* cipher/cipher-gcm-intel-pclmul.c (_gcry_ghash_setup_intel_pclmul): Remove 'h' parameter.
* cipher/cipher-gcm.c (_gcry_ghash_setup_intel_pclmul): Ditto.
(fillM): Get 'h' pointer from 'c'.
(setupM): Remove 'h' parameter.
(_gcry_cipher_gcm_setkey): Only pass 'c' to setupM.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

2014-12-12  GCM: move Intel PCLMUL accelerated implementation to separate file  (Jussi Kivilinna; 1 file changed, -370/+25)

* cipher/Makefile.am: Add 'cipher-gcm-intel-pclmul.c'.
* cipher/cipher-gcm-intel-pclmul.c: New.
* cipher/cipher-gcm.c [GCM_USE_INTEL_PCLMUL] (_gcry_ghash_setup_intel_pclmul, _gcry_ghash_intel_pclmul): New prototypes.
[GCM_USE_INTEL_PCLMUL] (gfmul_pclmul, gfmul_pclmul_aggr4): Move to 'cipher-gcm-intel-pclmul.c'.
(ghash): Rename to...
(ghash_internal): ...this and move GCM_USE_INTEL_PCLMUL part to new function in 'cipher-gcm-intel-pclmul.c'.
(setupM): Move GCM_USE_INTEL_PCLMUL part to new function in 'cipher-gcm-intel-pclmul.c'; Add selection of ghash function based on available HW acceleration.
(do_ghash_buf): Change use of 'ghash' to 'c->u_mode.gcm.ghash_fn'.
* cipher/internal.h (ghash_fn_t): New.
(gcry_cipher_handle): Remove 'use_intel_pclmul'; Add 'ghash_fn'.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

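A rough sketch of the dispatch pattern this change introduces: instead of a per-handle boolean flag, a ghash_fn_t function pointer is chosen once (at setkey time) based on detected hardware features and called unconditionally afterwards.  The selector, the stub body, and the exact prototype are simplified assumptions; GCM_USE_INTEL_PCLMUL and HWF_INTEL_PCLMUL are the names from the commit messages.

  #include <stddef.h>

  typedef unsigned int (*ghash_fn_t) (void *ctx, unsigned char *result,
                                      const unsigned char *buf, size_t nblocks);

  /* Portable table-driven GHASH, always available (stub body here).  */
  static unsigned int
  ghash_internal (void *ctx, unsigned char *result,
                  const unsigned char *buf, size_t nblocks)
  {
    (void) ctx; (void) result; (void) buf; (void) nblocks;
    return 0;   /* stack burn depth */
  }

  #ifdef GCM_USE_INTEL_PCLMUL
  /* Accelerated variant, now living in cipher-gcm-intel-pclmul.c.  */
  unsigned int _gcry_ghash_intel_pclmul (void *ctx, unsigned char *result,
                                         const unsigned char *buf,
                                         size_t nblocks);
  #endif

  /* setupM stores the result in the handle's ghash_fn; every later
     GHASH call simply goes through that pointer.  */
  static ghash_fn_t
  select_ghash_fn (unsigned int hwfeatures)
  {
  #ifdef GCM_USE_INTEL_PCLMUL
    if (hwfeatures & HWF_INTEL_PCLMUL)
      return _gcry_ghash_intel_pclmul;
  #endif
    return ghash_internal;
  }
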
2014-01-16  Replace ath based mutexes by gpgrt based locks.  (Werner Koch; 1 file changed, -1/+0)

* configure.ac (NEED_GPG_ERROR_VERSION): Require 1.13.
(gl_LOCK): Remove.
* src/ath.c, src/ath.h: Remove.  Remove from all files.  Replace all mutexes by gpgrt based statically initialized locks.
* src/global.c (global_init): Remove ath_init.
(_gcry_vcontrol): Make ath install a dummy function.
(print_config): Remove threads info line.
* doc/gcrypt.texi: Simplify the multi-thread related documentation.
--
The current code only works on ELF systems with weak symbol support.  In particular, no locks were used under Windows.  With the new gpgrt_lock functions from the soon to be released libgpg-error 1.13 we have a more portable scheme which also allows for statically initialized mutexes.

Signed-off-by: Werner Koch <wk@gnupg.org>

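For reference, a hedged example of the gpgrt locking primitives this commit moves to; they come from libgpg-error >= 1.13.  The lock name and the function wrapped here are purely illustrative.

  #include <gpg-error.h>

  /* Statically initialized mutex, no runtime init call needed.  */
  GPGRT_LOCK_DEFINE (example_lock);

  static void
  touch_shared_state (void)
  {
    gpgrt_lock_lock (&example_lock);
    /* ... update state shared between threads ... */
    gpgrt_lock_unlock (&example_lock);
  }
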
2013-12-18  Change utf-8 copyright characters to '(C)'  (Jussi Kivilinna; 1 file changed, -1/+1)

cipher/blowfish-amd64.S: Change utf-8 encoded copyright character to '(C)'.
cipher/blowfish-arm.S: Ditto.
cipher/bufhelp.h: Ditto.
cipher/camellia-aesni-avx-amd64.S: Ditto.
cipher/camellia-aesni-avx2-amd64.S: Ditto.
cipher/camellia-arm.S: Ditto.
cipher/cast5-amd64.S: Ditto.
cipher/cast5-arm.S: Ditto.
cipher/cipher-ccm.c: Ditto.
cipher/cipher-cmac.c: Ditto.
cipher/cipher-gcm.c: Ditto.
cipher/cipher-selftest.c: Ditto.
cipher/cipher-selftest.h: Ditto.
cipher/mac-cmac.c: Ditto.
cipher/mac-gmac.c: Ditto.
cipher/mac-hmac.c: Ditto.
cipher/mac-internal.h: Ditto.
cipher/mac.c: Ditto.
cipher/rijndael-amd64.S: Ditto.
cipher/rijndael-arm.S: Ditto.
cipher/salsa20-amd64.S: Ditto.
cipher/salsa20-armv7-neon.S: Ditto.
cipher/serpent-armv7-neon.S: Ditto.
cipher/serpent-avx2-amd64.S: Ditto.
cipher/serpent-sse2-amd64.S: Ditto.
--
Avoid use of '©' for easier parsing of source for copyright information.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

2013-11-21  GCM: Move gcm_table initialization to setkey  (Jussi Kivilinna; 1 file changed, -9/+16)

* cipher/cipher-gcm.c: Change all 'c->u_iv.iv' to 'c->u_mode.gcm.u_ghash_key.key'.
(_gcry_cipher_gcm_setkey): New.
(_gcry_cipher_gcm_initiv): Move ghash initialization to function above.
* cipher/cipher-internal.h (gcry_cipher_handle): Add 'u_mode.gcm.u_ghash_key'; Reorder 'u_mode.gcm' members for partial clearing in gcry_cipher_reset.
(_gcry_cipher_gcm_setkey): New prototype.
* cipher/cipher.c (cipher_setkey): Add GCM setkey.
(cipher_reset): Clear 'u_mode' only partially for GCM.
--
GHASH tables can be generated at setkey time.  No need to regenerate for every new IV.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

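A minimal sketch of the idea: the GHASH key H = E_K(0^128) and its multiplication tables depend only on the cipher key, so they can be computed once at setkey time instead of on every setiv.  The struct, function names, and the block-cipher callback below are illustrative assumptions.

  struct gcm_ctx
  {
    unsigned char ghash_key[16];   /* H = E_K(0^128) */
    /* precomputed multiples of H would live here (the gcm_table) */
  };

  typedef void (*block_encrypt_t) (void *cipher_ctx, unsigned char *out,
                                   const unsigned char *in);

  static void
  gcm_setkey_sketch (struct gcm_ctx *gcm, void *cipher_ctx,
                     block_encrypt_t encrypt_block)
  {
    static const unsigned char zero_block[16];

    /* H only depends on the key, not on the IV.  */
    encrypt_block (cipher_ctx, gcm->ghash_key, zero_block);

    /* fillM() would now precompute the GHASH tables from H, once.  */
  }
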
2013-11-20  GCM: Add support for split data buffers and online operation  (Jussi Kivilinna; 1 file changed, -28/+85)

* cipher/cipher-gcm.c (do_ghash_buf): Add buffering for less than blocksize length input and padding handling.
(_gcry_cipher_gcm_encrypt, _gcry_cipher_gcm_decrypt): Add handling for AAD padding and check if data has already been padded.
(_gcry_cipher_gcm_authenticate): Check that AAD or data has not been padded yet.
(_gcry_cipher_gcm_initiv): Clear padding marks.
(_gcry_cipher_gcm_tag): Add finalization and padding; Clear sensitive data from cipher handle, since they are not used after generating tag.
* cipher/cipher-internal.h (gcry_cipher_handle): Add 'u_mode.gcm.macbuf', 'u_mode.gcm.mac_unused', 'u_mode.gcm.ghash_data_finalized' and 'u_mode.gcm.ghash_aad_finalized'.
* tests/basic.c (check_gcm_cipher): Rename to...
(_check_gcm_cipher): ...this and add handling for different buffer step lengths; Enable per byte buffer testing.
(check_gcm_cipher): Call _check_gcm_cipher with different buffer step sizes.
--
Until now, GCM was expecting full data to be input in one go.  This patch adds support for feeding data continuously (for encryption/decryption/aad).

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

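A simplified sketch of the buffering scheme described above: bytes that do not fill a complete 16-byte block wait in macbuf (with mac_unused counting them) until more data arrives or the tag is finalized, while whole blocks are hashed directly.  The field names follow the commit message; the state struct and the process_block callback are illustrative.

  #include <string.h>

  #define GCM_BLOCK_LEN 16

  struct ghash_buf_state
  {
    unsigned char macbuf[GCM_BLOCK_LEN];
    unsigned int mac_unused;   /* bytes currently buffered */
  };

  static void
  ghash_buf_update (struct ghash_buf_state *s, const unsigned char *buf,
                    size_t buflen,
                    void (*process_block) (const unsigned char *block))
  {
    /* First top up a partially filled block from a previous call.  */
    if (s->mac_unused)
      {
        size_t n = GCM_BLOCK_LEN - s->mac_unused;
        if (n > buflen)
          n = buflen;
        memcpy (s->macbuf + s->mac_unused, buf, n);
        s->mac_unused += n;
        buf += n;
        buflen -= n;
        if (s->mac_unused == GCM_BLOCK_LEN)
          {
            process_block (s->macbuf);
            s->mac_unused = 0;
          }
      }

    /* Then hash whole blocks straight from the caller's buffer.  */
    while (buflen >= GCM_BLOCK_LEN)
      {
        process_block (buf);
        buf += GCM_BLOCK_LEN;
        buflen -= GCM_BLOCK_LEN;
      }

    /* Whatever is left waits in macbuf for the next call, or for the
       zero-padding applied when the tag is computed.  */
    if (buflen)
      {
        memcpy (s->macbuf, buf, buflen);
        s->mac_unused = (unsigned int) buflen;
      }
  }
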
2013-11-20  GCM: Use size_t for buffer sizes  (Jussi Kivilinna; 1 file changed, -12/+18)

* cipher/cipher-gcm.c (ghash, gcm_bytecounter_add, do_ghash_buf)
(_gcry_cipher_gcm_encrypt, _gcry_cipher_gcm_decrypt)
(_gcry_cipher_gcm_authenticate, _gcry_cipher_gcm_geniv)
(_gcry_cipher_gcm_tag): Use size_t for buffer lengths.
* cipher/cipher-internal.h (_gcry_cipher_gcm_encrypt)
(_gcry_cipher_gcm_decrypt, _gcry_cipher_gcm_authenticate): Use size_t for buffer lengths.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

2013-11-20  GCM: add FIPS mode restrictions  (Jussi Kivilinna; 1 file changed, -2/+59)

* cipher/cipher-gcm.c (_gcry_cipher_gcm_encrypt)
(_gcry_cipher_gcm_get_tag): Do not allow use in FIPS mode if setiv was invoked directly.
(_gcry_cipher_gcm_setiv): Rename to...
(_gcry_cipher_gcm_initiv): ...this.
(_gcry_cipher_gcm_setiv): New setiv function with check for FIPS mode.
[TODO] (_gcry_cipher_gcm_getiv): New.
* cipher/cipher-internal.h (gcry_cipher_handle): Add 'u_mode.gcm.disallow_encryption_because_of_setiv_in_fips_mode'.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

2013-11-20  GCM: Add clearing and checking of marks.tag  (Jussi Kivilinna; 1 file changed, -0/+7)

* cipher/cipher-gcm.c (_gcry_cipher_gcm_encrypt)
(_gcry_cipher_gcm_decrypt, _gcry_cipher_gcm_authenticate): Make sure that tag has not been finalized yet.
(_gcry_cipher_gcm_setiv): Clear 'marks.tag'.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

2013-11-20  GCM: Add stack burning  (Jussi Kivilinna; 1 file changed, -9/+26)

* cipher/cipher-gcm.c (do_ghash, ghash): Return stack burn depth.
(setupM): Wipe 'tmp' buffer.
(do_ghash_buf): Wipe 'tmp' buffer and add stack burning.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

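A hedged sketch of the pattern: a helper that used sensitive stack temporaries wipes them and reports an estimated depth, which the caller then feeds to the stack-burning routine (in libgcrypt that is _gcry_burn_stack, and wipememory does the clearing).  The stand-in wipe() and the depth estimate below are illustrative only.

  #include <stddef.h>

  /* Stand-in for wipememory(); the volatile pointer keeps the compiler
     from optimizing the clearing away.  */
  static void
  wipe (void *p, size_t len)
  {
    volatile unsigned char *vp = p;
    while (len--)
      *vp++ = 0;
  }

  static unsigned int
  do_ghash_step_sketch (unsigned char *result, const unsigned char *block)
  {
    unsigned char tmp[16];

    /* ... use tmp for intermediate GHASH values ... */
    (void) result; (void) block;

    wipe (tmp, sizeof tmp);
    return sizeof tmp + 64;   /* rough burn depth: locals plus slack */
  }
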
2013-11-20  Add aggregated bulk processing for GCM on x86-64  (Jussi Kivilinna; 1 file changed, -9/+219)

* cipher/cipher-gcm.c [__x86_64__] (gfmul_pclmul_aggr4): New.
(ghash) [GCM_USE_INTEL_PCLMUL]: Add aggregated bulk processing for __x86_64__.
(setupM) [__x86_64__]: Add initialization for aggregated bulk processing.
--
Intel Haswell (x86-64):

Old:
 AES GCM enc |  0.990 ns/B   963.3 MiB/s   3.17 c/B
     GCM dec |  0.982 ns/B   970.9 MiB/s   3.14 c/B
    GCM auth |  0.711 ns/B  1340.8 MiB/s   2.28 c/B
New:
 AES GCM enc |  0.535 ns/B  1783.8 MiB/s   1.71 c/B
     GCM dec |  0.531 ns/B  1796.2 MiB/s   1.70 c/B
    GCM auth |  0.255 ns/B  3736.4 MiB/s  0.817 c/B

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

2013-11-20  GCM: Tweak Intel PCLMUL ghash loop for small speed-up  (Jussi Kivilinna; 1 file changed, -55/+65)

* cipher/cipher-gcm.c (do_ghash): Mark 'inline'.
[GCM_USE_INTEL_PCLMUL] (do_ghash_pclmul): Rename to...
[GCM_USE_INTEL_PCLMUL] (gfmul_pclmul): ...this and make inline function.
(ghash) [GCM_USE_INTEL_PCLMUL]: Preload data before ghash-pclmul loop.
--
Intel Haswell:

Old:
 AES GCM enc |   1.12 ns/B   853.5 MiB/s   3.58 c/B
     GCM dec |   1.12 ns/B   853.4 MiB/s   3.58 c/B
    GCM auth |  0.843 ns/B  1131.5 MiB/s   2.70 c/B
New:
 AES GCM enc |  0.990 ns/B   963.3 MiB/s   3.17 c/B
     GCM dec |  0.982 ns/B   970.9 MiB/s   3.14 c/B
    GCM auth |  0.711 ns/B  1340.8 MiB/s   2.28 c/B

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

2013-11-20  GCM: Use counter mode code for speed-up  (Jussi Kivilinna; 1 file changed, -147/+215)

* cipher/cipher-gcm.c (ghash): Add process for multiple blocks.
(gcm_bytecounter_add, gcm_add32_be128, gcm_check_datalen)
(gcm_check_aadlen_or_ivlen, do_ghash_buf): New functions.
(_gcry_cipher_gcm_encrypt, _gcry_cipher_gcm_decrypt)
(_gcry_cipher_gcm_authenticate, _gcry_cipher_gcm_set_iv)
(_gcry_cipher_gcm_tag): Adjust to use above new functions and counter mode functions for encryption/decryption.
* cipher/cipher-internal.h (gcry_cipher_handle): Remove 'length'; Add 'u_mode.gcm.(addlen|datalen|tagiv|datalen_over_limits)'.
(_gcry_cipher_gcm_setiv): Return gcry_err_code_t.
* cipher/cipher.c (cipher_setiv): Return error code.
(_gcry_cipher_setiv): Handle error code from 'cipher_setiv'.
--
This patch changes GCM to use the counter mode code for a bulk speed-up and also adds the data length checks given in NIST SP 800-38D section 5.2.1.1.

Bit length requirements from section 5.2.1.1:
 len(plaintext) <= 2^39-256 bits == 2^36-32 bytes == 2^32-2 blocks
 len(aad)       <= 2^64-1 bits   ~= 2^61-1 bytes
 len(iv)        <= 2^64-1 bits   ~= 2^61-1 bytes

Intel Haswell:

Old:
 AES GCM enc |   3.00 ns/B   317.4 MiB/s   9.61 c/B
     GCM dec |   1.96 ns/B   486.9 MiB/s   6.27 c/B
    GCM auth |  0.848 ns/B  1124.7 MiB/s   2.71 c/B
New:
 AES GCM enc |   1.12 ns/B   851.8 MiB/s   3.58 c/B
     GCM dec |   1.12 ns/B   853.7 MiB/s   3.57 c/B
    GCM auth |  0.843 ns/B  1131.4 MiB/s   2.70 c/B

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

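A sketch of the plaintext-length check from SP 800-38D, assuming the byte count is kept as a 64-bit value split across two 32-bit words (high word first) as the commit's new counter fields suggest.  The limit 2^36 - 32 bytes corresponds to 2^39 - 256 bits; the function name and interface are illustrative.

  #include <stdint.h>

  /* Return 1 while the total plaintext length is still within the
     GCM limit of 2^36 - 32 bytes (0xF_FFFF_FFE0), else 0.  */
  static int
  gcm_check_datalen_sketch (const uint32_t ctr[2])   /* ctr[0]=high, ctr[1]=low */
  {
    if (ctr[0] > 0xfU)
      return 0;
    if (ctr[0] == 0xfU && ctr[1] > 0xffffffe0U)
      return 0;
    return 1;
  }
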
2013-11-20  Add Intel PCLMUL acceleration for GCM  (Jussi Kivilinna; 1 file changed, -29/+176)

* cipher/cipher-gcm.c (fillM): Rename...
(do_fillM): ...to this.
(ghash): Remove.
(fillM): New macro.
(GHASH): Use 'do_ghash' instead of 'ghash'.
[GCM_USE_INTEL_PCLMUL] (do_ghash_pclmul): New.
(ghash): New.
(setupM): New.
(_gcry_cipher_gcm_encrypt, _gcry_cipher_gcm_decrypt)
(_gcry_cipher_gcm_authenticate, _gcry_cipher_gcm_setiv)
(_gcry_cipher_gcm_tag): Use 'ghash' instead of 'GHASH' and 'c->u_mode.gcm.u_tag.tag' instead of 'c->u_tag.tag'.
* cipher/cipher-internal.h (GCM_USE_INTEL_PCLMUL): New.
(gcry_cipher_handle): Move 'u_tag' and 'gcm_table' under 'u_mode.gcm'.
* configure.ac (pclmulsupport, gcry_cv_gcc_inline_asm_pclmul): New.
* src/g10lib.h (HWF_INTEL_PCLMUL): New.
* src/global.c: Add "intel-pclmul".
* src/hwf-x86.c (detect_x86_gnuc): Add check for Intel PCLMUL.
--
Speed up GCM for Intel CPUs.

Intel Haswell (x86-64):

Old:
 AES GCM enc |   5.17 ns/B   184.4 MiB/s  16.55 c/B
     GCM dec |   4.38 ns/B   218.0 MiB/s  14.00 c/B
    GCM auth |   3.17 ns/B   300.4 MiB/s  10.16 c/B
New:
 AES GCM enc |   3.01 ns/B   317.2 MiB/s   9.62 c/B
     GCM dec |   1.96 ns/B   486.9 MiB/s   6.27 c/B
    GCM auth |  0.848 ns/B  1124.8 MiB/s   2.71 c/B

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

2013-11-20  GCM: GHASH optimizations  (Jussi Kivilinna; 1 file changed, -111/+222)

* cipher/cipher-gcm.c [GCM_USE_TABLES] (gcmR, ghash): Replace with new.
[GCM_USE_TABLES] [GCM_TABLES_USE_U64] (bshift, fillM, do_ghash): New.
[GCM_USE_TABLES] [!GCM_TABLES_USE_U64] (bshift, fillM): Replace with new.
[GCM_USE_TABLES] [!GCM_TABLES_USE_U64] (do_ghash): New.
(_gcry_cipher_gcm_tag): Remove extra memcpy to outbuf and use buf_eq_const for comparing authentication tag.
* cipher/cipher-internal.h (gcry_cipher_handle): Different 'gcm_table' for 32-bit and 64-bit platforms.
--
Patch improves GHASH speed.

Intel Haswell (x86-64):
 Old: GCM auth | 26.22 ns/B   36.38 MiB/s  83.89 c/B
 New: GCM auth |  3.18 ns/B   300.0 MiB/s  10.17 c/B

Intel Haswell (mingw32):
 Old: GCM auth | 27.27 ns/B   34.97 MiB/s  87.27 c/B
 New: GCM auth |  7.58 ns/B   125.7 MiB/s  24.27 c/B

Cortex-A8:
 Old: GCM auth | 231.4 ns/B    4.12 MiB/s  233.3 c/B
 New: GCM auth | 30.82 ns/B   30.94 MiB/s  31.07 c/B

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>

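For orientation, here is the plain bit-by-bit GHASH multiplication in GF(2^128) with the GCM reduction polynomial (NIST SP 800-38D, Algorithm 1).  The table-driven code this commit optimizes precomputes multiples of H so that the inner per-bit loop becomes a handful of table lookups; this reference version only shows what is being replaced and is not the libgcrypt code.

  #include <string.h>

  /* r = x * y in GF(2^128), GCM bit ordering.  */
  static void
  gf128_mul (unsigned char r[16], const unsigned char x[16],
             const unsigned char y[16])
  {
    unsigned char z[16] = { 0 };
    unsigned char v[16];
    int i, j, k, lsb;

    memcpy (v, x, 16);

    for (i = 0; i < 16; i++)
      for (j = 7; j >= 0; j--)
        {
          if ((y[i] >> j) & 1)          /* bit of y selects adding v */
            for (k = 0; k < 16; k++)
              z[k] ^= v[k];

          /* v <- v * x: shift right one bit (GCM's reflected order) and
             reduce with x^128 + x^7 + x^2 + x + 1 (0xE1 in the top byte).  */
          lsb = v[15] & 1;
          for (k = 15; k > 0; k--)
            v[k] = (unsigned char) ((v[k] >> 1) | (v[k - 1] << 7));
          v[0] >>= 1;
          if (lsb)
            v[0] ^= 0xe1;
        }

    memcpy (r, z, 16);
  }
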
2013-11-19  Initial implementation of GCM  (Dmitry Eremin-Solenikov; 1 file changed, -0/+483)

* cipher/Makefile.am: Add 'cipher-gcm.c'.
* cipher/cipher-ccm.c (_gcry_ciphert_ccm_set_lengths)
(_gcry_cipher_ccm_authenticate, _gcry_cipher_ccm_tag)
(_gcry_cipher_ccm_encrypt, _gcry_cipher_ccm_decrypt): Change 'c->u_mode.ccm.tag' to 'c->marks.tag'.
* cipher/cipher-gcm.c: New.
* cipher/cipher-internal.h (GCM_USE_TABLES): New.
(gcry_cipher_handle): Add 'marks.tag', 'u_tag', 'length' and 'gcm_table'; Remove 'u_mode.ccm.tag'.
(_gcry_cipher_gcm_encrypt, _gcry_cipher_gcm_decrypt)
(_gcry_cipher_gcm_setiv, _gcry_cipher_gcm_authenticate)
(_gcry_cipher_gcm_get_tag, _gcry_cipher_gcm_check_tag): New.
* cipher/cipher.c (_gcry_cipher_open_internal, cipher_setkey)
(cipher_encrypt, cipher_decrypt, _gcry_cipher_authenticate)
(_gcry_cipher_gettag, _gcry_cipher_checktag): Add GCM mode handling.
* src/gcrypt.h.in (gcry_cipher_modes): Add GCRY_CIPHER_MODE_GCM.
(GCRY_GCM_BLOCK_LEN): New.
* tests/basic.c (check_gcm_cipher): New.
(check_ciphers): Add GCM check.
(check_cipher_modes): Call 'check_gcm_cipher'.
* tests/bench-slope.c (bench_gcm_encrypt_do_bench)
(bench_gcm_decrypt_do_bench, bench_gcm_authenticate_do_bench)
(gcm_encrypt_ops, gcm_decrypt_ops, gcm_authenticate_ops): New.
(cipher_modes): Add GCM enc/dec/auth.
(cipher_bench_one): Limit GCM to block ciphers with 16 byte block-size.
* tests/benchmark.c (cipher_bench): Add GCM.
--
Currently it is still quite slow.

Still no support for generate_iv().  Is it really necessary?

TODO: Merge/reuse cipher-internal state used by CCM.

Changelog entry will be present in final patch submission.

Changes since v1:
 - 6x-7x speedup.
 - added bench-slope support

Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
[jk: mangle new file through 'indent -nut']
[jk: few fixes]
[jk: changelog]

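A short usage example of the mode added here, through libgcrypt's public API (gcry_cipher_open with GCRY_CIPHER_MODE_GCM, then setkey, setiv, authenticate, encrypt, gettag).  Error handling is trimmed for brevity, and the key, IV, AAD, and plaintext values are placeholders.

  #include <gcrypt.h>

  static void
  gcm_encrypt_example (void)
  {
    gcry_cipher_hd_t hd;
    unsigned char key[16] = { 0 };     /* placeholder key */
    unsigned char iv[12]  = { 0 };     /* 96-bit IV (recommended size) */
    unsigned char aad[8]  = "header!";
    unsigned char buf[32] = "0123456789abcdef0123456789abcde";
    unsigned char tag[GCRY_GCM_BLOCK_LEN];

    gcry_cipher_open (&hd, GCRY_CIPHER_AES128, GCRY_CIPHER_MODE_GCM, 0);
    gcry_cipher_setkey (hd, key, sizeof key);
    gcry_cipher_setiv (hd, iv, sizeof iv);
    gcry_cipher_authenticate (hd, aad, sizeof aad);      /* AAD first */
    gcry_cipher_encrypt (hd, buf, sizeof buf, NULL, 0);  /* in place */
    gcry_cipher_gettag (hd, tag, sizeof tag);
    gcry_cipher_close (hd);
  }
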