Fix many MarkDown issues in {NOTES*,README*,HACKING,LICENSE}.md files

Reviewed-by: Tim Hudson <tjh@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/12109)
Dr. David von Oheimb 2020-06-10 17:49:25 +02:00
parent 036cbb6bbf
commit 1dc1ea182b
28 changed files with 871 additions and 845 deletions


@ -174,12 +174,12 @@ OpenSSL 3.0
*Richard Levitte*
* Project text documents not yet having a proper file name extension
(HACKING, LICENSE, NOTES*, README*, VERSION) have been renamed to *.md
as far as reasonable, else to *.txt, for better use with file managers.
(`HACKING`, `LICENSE`, `NOTES*`, `README*`, `VERSION`) have been renamed to
`*.md` as far as reasonable, else `*.txt`, for better use with file managers.
*David von Oheimb*
* The main project documents (README, NEWS, CHANGES, INSTALL, SUPPORT)
have been converted to Markdown with the goal to produce documents
which not only look pretty when viewed online in the browser, but
remain well readable inside a plain text editor.
@ -1060,7 +1060,7 @@ OpenSSL 3.0
* Added EVP_MAC, an EVP layer MAC API, to simplify adding MAC
implementations. This includes a generic EVP_PKEY to EVP_MAC bridge,
to facilitate the continued use of MACs through raw private keys in
functionality such as EVP_DigestSign* and EVP_DigestVerify*.
functionality such as `EVP_DigestSign*` and `EVP_DigestVerify*`.
*Richard Levitte*
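For illustration only (an editor's sketch, not part of the changelog entry above): the EVP_PKEY-to-EVP_MAC bridge in use, computing an HMAC through the one-shot `EVP_DigestSign` call with a raw private key. The helper name and the reduced error handling are this example's own.

    #include <openssl/evp.h>

    /*
     * Sketch: HMAC-SHA256 via the EVP_DigestSign interface, using a raw
     * private key as described above.  On entry, *outlen must hold the
     * size of the out buffer; on success it holds the MAC length.
     * Error handling is reduced to a 0/1 result for brevity.
     */
    static int hmac_sha256_sketch(const unsigned char *key, size_t keylen,
                                  const unsigned char *msg, size_t msglen,
                                  unsigned char *out, size_t *outlen)
    {
        EVP_PKEY *pkey = EVP_PKEY_new_raw_private_key(EVP_PKEY_HMAC, NULL,
                                                      key, keylen);
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = pkey != NULL && ctx != NULL
                 && EVP_DigestSignInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1
                 && EVP_DigestSign(ctx, out, outlen, msg, msglen) == 1;

        EVP_MD_CTX_free(ctx);
        EVP_PKEY_free(pkey);
        return ok;
    }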
@ -1732,9 +1732,9 @@ OpenSSL 1.1.1
*Paul Yang*
* Add SM3 implemented according to GB/T 32905-2016
* Jack Lloyd <jack.lloyd@ribose.com>,
Ronald Tse <ronald.tse@ribose.com>,
Erick Borsboom <erick.borsboom@ribose.com> *
*Jack Lloyd <jack.lloyd@ribose.com>,*
*Ronald Tse <ronald.tse@ribose.com>,*
*Erick Borsboom <erick.borsboom@ribose.com>*
* Add 'Maximum Fragment Length' TLS extension negotiation and support
as documented in RFC6066.
@ -1743,9 +1743,9 @@ OpenSSL 1.1.1
*Filipe Raimundo da Silva*
* Add SM4 implemented according to GB/T 32907-2016.
* Jack Lloyd <jack.lloyd@ribose.com>,
Ronald Tse <ronald.tse@ribose.com>,
Erick Borsboom <erick.borsboom@ribose.com> *
*Jack Lloyd <jack.lloyd@ribose.com>,*
*Ronald Tse <ronald.tse@ribose.com>,*
*Erick Borsboom <erick.borsboom@ribose.com>*
* Reimplement -newreq-nodes and ERR_error_string_n; the
original author does not agree with the license change.
@ -2931,7 +2931,7 @@ OpenSSL 1.1.0
Makefile. Instead, Configure produces a perl module in
configdata.pm which holds most of the config data (in the hash
table %config), the target data that comes from the target
configuration in one of the `Configurations/*.conf~ files (in
configuration in one of the `Configurations/*.conf` files (in
%target).
*Richard Levitte*
@ -3062,21 +3062,21 @@ OpenSSL 1.1.0
opaque. For HMAC_CTX, the following constructors and destructors
were added:
HMAC_CTX *HMAC_CTX_new(void);
void HMAC_CTX_free(HMAC_CTX *ctx);
For EVP_MD and EVP_CIPHER, complete APIs to create, fill and
destroy such methods has been added. See EVP_MD_meth_new(3) and
EVP_CIPHER_meth_new(3) for documentation.
Additional changes:
1) EVP_MD_CTX_cleanup(), EVP_CIPHER_CTX_cleanup() and
HMAC_CTX_cleanup() were removed. HMAC_CTX_reset() and
EVP_MD_CTX_reset() should be called instead to reinitialise
1) `EVP_MD_CTX_cleanup()`, `EVP_CIPHER_CTX_cleanup()` and
`HMAC_CTX_cleanup()` were removed. `HMAC_CTX_reset()` and
`EVP_MD_CTX_reset()` should be called instead to reinitialise
an already created structure.
2) For consistency with the majority of our object creators and
destructors, EVP_MD_CTX_(create|destroy) were renamed to
EVP_MD_CTX_(new|free). The old names are retained as macros
destructors, `EVP_MD_CTX_(create|destroy)` were renamed to
`EVP_MD_CTX_(new|free)`. The old names are retained as macros
for deprecated builds.
*Richard Levitte*
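As an illustration of the opaque-context workflow described above (an editor's sketch, not part of the changelog): heap-allocate the contexts, reuse them via the `*_reset()` calls, and release them with the `*_free()` calls.

    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    /* Sketch of the 1.1.0 lifecycle for the now-opaque context types. */
    static void context_lifecycle_sketch(void)
    {
        HMAC_CTX *hctx = HMAC_CTX_new();      /* replaces a stack HMAC_CTX */
        EVP_MD_CTX *mctx = EVP_MD_CTX_new();  /* EVP_MD_CTX_create() is now an alias */

        /* ... use the contexts ... */

        HMAC_CTX_reset(hctx);                 /* instead of HMAC_CTX_cleanup() */
        EVP_MD_CTX_reset(mctx);               /* instead of EVP_MD_CTX_cleanup() */

        /* ... reuse them, then release ... */

        HMAC_CTX_free(hctx);
        EVP_MD_CTX_free(mctx);                /* EVP_MD_CTX_destroy() is now an alias */
    }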
@ -3174,8 +3174,8 @@ OpenSSL 1.1.0
*Emilia Käsper*
* Fix no-stdio build.
* David Woodhouse <David.Woodhouse@intel.com> and also
Ivan Nestlerode <ivan.nestlerode@sonos.com> *
*David Woodhouse <David.Woodhouse@intel.com> and also*
*Ivan Nestlerode <ivan.nestlerode@sonos.com>*
* New testing framework
The testing framework has been largely rewritten and is now using
@ -3579,7 +3579,7 @@ OpenSSL 1.1.0
*Steve Henson*
* Rename old X9.31 PRNG functions of the form FIPS_rand* to FIPS_x931*.
* Rename old X9.31 PRNG functions of the form `FIPS_rand*` to `FIPS_x931*`.
This shouldn't present any incompatibility problems because applications
shouldn't be using these directly and any that are will need to rethink
anyway as the X9.31 PRNG is now deprecated by FIPS 140-2
@ -4458,11 +4458,11 @@ OpenSSL 1.0.2
* Fix BN_hex2bn/BN_dec2bn NULL pointer deref/heap corruption
In the BN_hex2bn function the number of hex digits is calculated using an
int value |i|. Later |bn_expand| is called with a value of |i * 4|. For
large values of |i| this can result in |bn_expand| not allocating any
memory because |i * 4| is negative. This can leave the internal BIGNUM data
int value `i`. Later `bn_expand` is called with a value of `i * 4`. For
large values of `i` this can result in `bn_expand` not allocating any
memory because `i * 4` is negative. This can leave the internal BIGNUM data
field as NULL leading to a subsequent NULL ptr deref. For very large values
of |i|, the calculation |i * 4| could be a positive value smaller than |i|.
of `i`, the calculation `i * 4` could be a positive value smaller than `i`.
In this case memory is allocated to the internal BIGNUM data field, but it
is insufficiently sized leading to heap corruption. A similar issue exists
in BN_dec2bn. This could have security consequences if BN_hex2bn/BN_dec2bn
@ -4482,11 +4482,11 @@ OpenSSL 1.0.2
* Fix memory issues in `BIO_*printf` functions
The internal |fmtstr| function used in processing a "%s" format string in
The internal `fmtstr` function used in processing a "%s" format string in
the `BIO_*printf` functions could overflow while calculating the length of a
string and cause an OOB read when printing very long strings.
Additionally the internal |doapr_outch| function can attempt to write to an
Additionally the internal `doapr_outch` function can attempt to write to an
OOB memory location (at an offset from the NULL pointer) in the event of a
memory allocation failure. In 1.0.2 and below this could be caused where
the size of a buffer to be allocated is greater than INT_MAX. E.g. this
@ -5660,11 +5660,11 @@ OpenSSL 1.0.1
* Fix BN_hex2bn/BN_dec2bn NULL pointer deref/heap corruption
In the BN_hex2bn function the number of hex digits is calculated using an
int value |i|. Later |bn_expand| is called with a value of |i * 4|. For
large values of |i| this can result in |bn_expand| not allocating any
memory because |i * 4| is negative. This can leave the internal BIGNUM data
int value `i`. Later `bn_expand` is called with a value of `i * 4`. For
large values of `i` this can result in `bn_expand` not allocating any
memory because `i * 4` is negative. This can leave the internal BIGNUM data
field as NULL leading to a subsequent NULL ptr deref. For very large values
of |i|, the calculation |i * 4| could be a positive value smaller than |i|.
of `i`, the calculation `i * 4` could be a positive value smaller than `i`.
In this case memory is allocated to the internal BIGNUM data field, but it
is insufficiently sized leading to heap corruption. A similar issue exists
in BN_dec2bn. This could have security consequences if BN_hex2bn/BN_dec2bn
@ -5684,11 +5684,11 @@ OpenSSL 1.0.1
* Fix memory issues in `BIO_*printf` functions
The internal |fmtstr| function used in processing a "%s" format string in
The internal `fmtstr` function used in processing a "%s" format string in
the `BIO_*printf` functions could overflow while calculating the length of a
string and cause an OOB read when printing very long strings.
Additionally the internal |doapr_outch| function can attempt to write to an
Additionally the internal `doapr_outch` function can attempt to write to an
OOB memory location (at an offset from the NULL pointer) in the event of a
memory allocation failure. In 1.0.2 and below this could be caused where
the size of a buffer to be allocated is greater than INT_MAX. E.g. this
@ -6505,8 +6505,8 @@ OpenSSL 1.0.1
disable just protocol X, but all protocols above X *if* there are
protocols *below* X still enabled. In more practical terms it means
that if application wants to disable TLS1.0 in favor of TLS1.1 and
above, it's not sufficient to pass SSL_OP_NO_TLSv1, one has to pass
SSL_OP_NO_TLSv1|SSL_OP_NO_SSLv3|SSL_OP_NO_SSLv2. This applies to
above, it's not sufficient to pass `SSL_OP_NO_TLSv1`, one has to pass
`SSL_OP_NO_TLSv1|SSL_OP_NO_SSLv3|SSL_OP_NO_SSLv2`. This applies to
client side.
*Andy Polyakov*
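A sketch of the protocol-disabling behaviour described above (illustrative, not from the changelog): to disable TLS 1.0 and everything below it, the lower protocol options have to be passed together.

    #include <openssl/ssl.h>

    /* Disable TLS 1.0 and all protocols below it on a context. */
    static void disable_tls10_and_below(SSL_CTX *ctx)
    {
        SSL_CTX_set_options(ctx,
                            SSL_OP_NO_TLSv1 | SSL_OP_NO_SSLv3 | SSL_OP_NO_SSLv2);
    }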
@ -12328,8 +12328,8 @@ s-cbc 3624.96k 5258.21k 5530.91k 5624.30k 5628.26k
*Geoff Thorpe, Lutz Jaenicke*
* Modify mkdef.pl to recognise and parse preprocessor conditionals
of the form '#if defined(...) || defined(...) || ...' and
'#if !defined(...) && !defined(...) && ...'. This also avoids
of the form `#if defined(...) || defined(...) || ...` and
`#if !defined(...) && !defined(...) && ...`. This also avoids
the growing number of special cases it was previously handling.
*Richard Levitte*
@ -12902,9 +12902,9 @@ s-cbc 3624.96k 5258.21k 5530.91k 5624.30k 5628.26k
*Bodo Moeller*
* Move `BN_mod_...` functions into new file crypto/bn/bn_mod.c
(except for exponentiation, which stays in crypto/bn/bn_exp.c,
and BN_mod_mul_reciprocal, which stays in crypto/bn/bn_recp.c)
* Move `BN_mod_...` functions into new file `crypto/bn/bn_mod.c`
(except for exponentiation, which stays in `crypto/bn/bn_exp.c`,
and `BN_mod_mul_reciprocal`, which stays in `crypto/bn/bn_recp.c`)
and add new functions:
BN_nnmod
@ -12920,16 +12920,16 @@ s-cbc 3624.96k 5258.21k 5530.91k 5624.30k 5628.26k
These functions always generate non-negative results.
BN_nnmod otherwise is like BN_mod (if BN_mod computes a remainder r
such that |m| < r < 0, BN_nnmod will output rem + |m| instead).
`BN_nnmod` otherwise is like `BN_mod` (if `BN_mod` computes a remainder `r`
such that `|m| < r < 0`, `BN_nnmod` will output `rem + |m|` instead).
BN_mod_XXX_quick(r, a, [b,] m) generates the same result as
BN_mod_XXX(r, a, [b,] m, ctx), but requires that a [and b]
be reduced modulo m.
`BN_mod_XXX_quick(r, a, [b,] m)` generates the same result as
`BN_mod_XXX(r, a, [b,] m, ctx)`, but requires that `a` [and `b`]
be reduced modulo `m`.
*Lenka Fibikova <fibikova@exp-math.uni-essen.de>, Bodo Moeller*
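An illustrative sketch (not part of the changelog entry) of the non-negative remainder guarantee described above:

    #include <openssl/bn.h>

    /* BN_nnmod() always yields 0 <= r < |m|, unlike BN_mod(), whose
     * remainder can be negative when a is negative. */
    static int nonneg_remainder_sketch(BIGNUM *r, const BIGNUM *a,
                                       const BIGNUM *m, BN_CTX *ctx)
    {
        return BN_nnmod(r, a, m, ctx);
    }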
f 0
<!--
The following entry accidentally appeared in the CHANGES file
distributed with OpenSSL 0.9.7. The modifications described in
it do *not* apply to OpenSSL 0.9.7.
@ -12943,7 +12943,7 @@ f 0
differing sizes.
*Richard Levitte*
ndif
-->
* In 'openssl passwd', verify passwords read from the terminal
unless the '-salt' option is used (which usually means that
@ -14683,7 +14683,7 @@ ndif
* Change the handling of OID objects as follows:
- New object identifiers are inserted in objects.txt, following
the syntax given in objects.README.
the syntax given in [crypto/objects/README.md](crypto/objects/README.md).
- objects.pl is used to process obj_mac.num and create a new
obj_mac.h.
- obj_dat.pl is used to create a new obj_dat.h, using the data in
@ -17399,10 +17399,10 @@ ndif
*Steve Henson*
* Be less restrictive and allow also `perl util/perlpath.pl
/path/to/bin/perl' in addition to `perl util/perlpath.pl /path/to/bin',
because this way one can also use an interpreter named `perl5' (which is
/path/to/bin/perl` in addition to `perl util/perlpath.pl /path/to/bin`,
because this way one can also use an interpreter named `perl5` (which is
usually the name of Perl 5.xxx on platforms where an Perl 4.x is still
installed as `perl').
installed as `perl`).
*Matthias Loepfe <Matthias.Loepfe@adnovum.ch>*
@ -17435,7 +17435,7 @@ ndif
*Steve Henson*
* Make `openssl version' output lines consistent.
* Make `openssl version` output lines consistent.
*Ralf S. Engelschall*
@ -17492,7 +17492,7 @@ ndif
*Ben Laurie*
* Allow DSO flags like -fpic, -fPIC, -KPIC etc. to be specified
on the `perl Configure ...' command line. This way one can compile
on the `perl Configure ...` command line. This way one can compile
OpenSSL libraries with Position Independent Code (PIC) which is needed
for linking it into DSOs.
@ -17511,9 +17511,9 @@ ndif
*Ralf S. Engelschall*
* General source tree makefile cleanups: Made `making xxx in yyy...'
display consistent in the source tree and replaced `/bin/rm' by `rm'.
Additionally cleaned up the `make links' target: Remove unnecessary
* General source tree makefile cleanups: Made `making xxx in yyy...`
display consistent in the source tree and replaced `/bin/rm` by `rm`.
Additionally cleaned up the `make links` target: Remove unnecessary
semicolons, subsequent redundant removes, inline point.sh into mklink.sh
to speed processing and no longer clutter the display with confusing
stuff. Instead only the actually done links are displayed.
@ -17640,12 +17640,12 @@ ndif
*Ralf S. Engelschall*
* Make `openssl x509 -noout -modulus' functional also for DSA certificates
* Make `openssl x509 -noout -modulus` functional also for DSA certificates
(in addition to RSA certificates) to match the behaviour of `openssl dsa
-noout -modulus' as it's already the case for `openssl rsa -noout
-modulus'. For RSA the -modulus is the real "modulus" while for DSA
-noout -modulus` as it's already the case for `openssl rsa -noout
-modulus`. For RSA the -modulus is the real "modulus" while for DSA
currently the public key is printed (a decision which was already done by
`openssl dsa -modulus' in the past) which serves a similar purpose.
`openssl dsa -modulus` in the past) which serves a similar purpose.
Additionally the NO_RSA no longer completely removes the whole -modulus
option; it now only avoids using the RSA stuff. Same applies to NO_DSA
now, too.


@ -54,8 +54,8 @@ guidelines:
(usually by rebasing) before it will be acceptable.
4. Patches should follow our [coding style][] and compile without warnings.
Where gcc or clang is available you should use the
--strict-warnings Configure option. OpenSSL compiles on many varied
Where `gcc` or `clang` is available you should use the
`--strict-warnings` `Configure` option. OpenSSL compiles on many varied
platforms: try to ensure you only use portable features. Clean builds
via Travis and AppVeyor are required, and they are started automatically
whenever a PR is created or updated.
@ -64,7 +64,7 @@ guidelines:
5. When at all possible, patches should include tests. These can
either be added to an existing test, or completely new. Please see
test/README.md for information on the test framework.
[test/README.md](test/README.md) for information on the test framework.
6. New features or changed functionality must include
documentation. Please look at the "pod" files in doc/man[1357] for
@ -77,7 +77,7 @@ guidelines:
explain the grander details.
Have a look through existing entries for inspiration.
Please note that this is NOT simply a copy of git-log one-liners.
Also note that security fixes get an entry in CHANGES.md.
Also note that security fixes get an entry in [CHANGES.md](CHANGES.md).
This file helps users get more in depth information of what comes
with a specific release without having to sift through the higher
noise ratio in git-log.
@ -89,3 +89,6 @@ guidelines:
OpenSSL 1.1.0).
This file helps users get a very quick summary of what comes with a
specific release, to see if an upgrade is worth the effort.
9. Guidelines how to integrate error output of new crypto library modules
can be found in [crypto/err/README.md](crypto/err/README.md).


@ -4,17 +4,17 @@ Design document for the unified scheme data
How are things connected?
-------------------------
The unified scheme takes all its data from the build.info files seen
The unified scheme takes all its data from the `build.info` files seen
throughout the source tree. These files hold the minimum information
needed to build end product files from diverse sources. See the
section on build.info files below.
section on `build.info` files below.
From the information in build.info files, Configure builds up an
information database as a hash table called %unified_info, which is
From the information in `build.info` files, `Configure` builds up an
information database as a hash table called `%unified_info`, which is
stored in configdata.pm, found at the top of the build tree (which may
or may not be the same as the source tree).
Configurations/common.tmpl uses the data from %unified_info to
[`Configurations/common.tmpl`](common.tmpl) uses the data from `%unified_info` to
generate the rules for building end product files as well as
intermediary files with the help of a few functions found in the
build-file templates. See the section on build-file templates further
@ -23,36 +23,35 @@ down for more information.
build.info files
----------------
As mentioned earlier, build.info files are meant to hold the minimum
As mentioned earlier, `build.info` files are meant to hold the minimum
information needed to build output files, and therefore only (with a
few possible exceptions [1]) have information about end products (such
as scripts, library files and programs) and source files (such as C
files, C header files, assembler files, etc). Intermediate files such
as object files are rarely directly referred to in build.info files (and
when they are, it's always with the file name extension .o), they are
inferred by Configure. By the same rule of minimalism, end product
file name extensions (such as .so, .a, .exe, etc) are never mentioned
in build.info. Their file name extensions will be inferred by the
as object files are rarely directly referred to in `build.info` files (and
when they are, it's always with the file name extension `.o`), they are
inferred by `Configure`. By the same rule of minimalism, end product
file name extensions (such as `.so`, `.a`, `.exe`, etc) are never mentioned
in `build.info`. Their file name extensions will be inferred by the
build-file templates, adapted for the platform they are meant for (see
sections on %unified_info and build-file templates further down).
sections on `%unified_info` and build-file templates further down).
The variables PROGRAMS, LIBS, MODULES and SCRIPTS are used to declare
end products. There are variants for them with '_NO_INST' as suffix
(PROGRAM_NO_INST etc) to specify end products that shouldn't get
installed.
The variables `PROGRAMS`, `LIBS`, `MODULES` and `SCRIPTS` are used to declare
end products. There are variants for them with `_NO_INST` as suffix
(`PROGRAM_NO_INST` etc) to specify end products that shouldn't get installed.
The variables SOURCE, DEPEND, INCLUDE and DEFINE are indexed by a
The variables `SOURCE`, `DEPEND`, `INCLUDE` and `DEFINE` are indexed by a
produced file, and their values are the source used to produce that
particular produced file, extra dependencies, include directories
needed, or C macros to be defined.
All their values in all the build.info throughout the source tree are
All their values in all the `build.info` throughout the source tree are
collected together and form a set of programs, libraries, modules and
scripts to be produced, source files, dependencies, etc etc etc.
Let's have a pretend example, a very limited contraption of OpenSSL,
composed of the program 'apps/openssl', the libraries 'libssl' and
'libcrypto', an module 'engines/ossltest' and their sources and
composed of the program `apps/openssl`, the libraries `libssl` and
`libcrypto`, an module `engines/ossltest` and their sources and
dependencies.
# build.info
@ -61,11 +60,11 @@ dependencies.
INCLUDE[libssl]=include
DEPEND[libssl]=libcrypto
This is the top directory build.info file, and it tells us that two
libraries are to be built, the include directory 'include/' shall be
This is the top directory `build.info` file, and it tells us that two
libraries are to be built, the include directory `include/` shall be
used throughout when building anything that will end up in each
library, and that the library 'libssl' depend on the library
'libcrypto' to function properly.
library, and that the library `libssl` depend on the library
`libcrypto` to function properly.
# apps/build.info
PROGRAMS=openssl
@ -73,15 +72,15 @@ library, and that the library 'libssl' depend on the library
INCLUDE[openssl]=.. ../include
DEPEND[openssl]=../libssl
This is the build.info file in 'apps/', one may notice that all file
paths mentioned are relative to the directory the build.info file is
This is the `build.info` file in `apps/`, one may notice that all file
paths mentioned are relative to the directory the `build.info` file is
located in. This one tells us that there's a program to be built
called 'apps/openssl' (the file name extension will depend on the
platform and is therefore not mentioned in the build.info file). It's
built from one source file, 'apps/openssl.c', and building it requires
the use of '.' and 'include' include directories (both are declared
from the point of view of the 'apps/' directory), and that the program
depends on the library 'libssl' to function properly.
called `apps/openssl` (the file name extension will depend on the
platform and is therefore not mentioned in the `build.info` file). It's
built from one source file, `apps/openssl.c`, and building it requires
the use of `.` and `include/` include directories (both are declared
from the point of view of the `apps/` directory), and that the program
depends on the library `libssl` to function properly.
# crypto/build.info
LIBS=../libcrypto
@ -92,32 +91,32 @@ depends on the library 'libssl' to function properly.
DEPEND[buildinf.h]=../Makefile
DEPEND[../util/mkbuildinf.pl]=../util/Foo.pm
This is the build.info file in 'crypto', and it tells us a little more
about what's needed to produce 'libcrypto'. LIBS is used again to
declare that 'libcrypto' is to be produced. This declaration is
really unnecessary as it's already mentioned in the top build.info
This is the `build.info` file in `crypto/`, and it tells us a little more
about what's needed to produce `libcrypto`. LIBS is used again to
declare that `libcrypto` is to be produced. This declaration is
really unnecessary as it's already mentioned in the top `build.info`
file, but can make the info file easier to understand. This is to
show that duplicate information isn't an issue.
This build.info file informs us that 'libcrypto' is built from a few
source files, 'crypto/aes.c', 'crypto/evp.c' and 'crypto/cversion.c'.
This `build.info` file informs us that `libcrypto` is built from a few
source files, `crypto/aes.c`, `crypto/evp.c` and `crypto/cversion.c`.
It also shows us that building the object file inferred from
'crypto/cversion.c' depends on 'crypto/buildinf.h'. Finally, it
`crypto/cversion.c` depends on `crypto/buildinf.h`. Finally, it
also shows the possibility to declare how some files are generated
using some script, in this case a perl script, and how such scripts
can be declared to depend on other files, in this case a perl module.
Two things are worth an extra note:
'DEPEND[cversion.o]' mentions an object file. DEPEND indexes is the
`DEPEND[cversion.o]` mentions an object file. DEPEND indexes is the
only location where it's valid to mention them
# ssl/build.info
LIBS=../libssl
SOURCE[../libssl]=tls.c
This is the build.info file in 'ssl/', and it tells us that the
library 'libssl' is built from the source file 'ssl/tls.c'.
This is the build.info file in `ssl/`, and it tells us that the
library `libssl` is built from the source file `ssl/tls.c`.
# engines/build.info
MODULES=dasync
@ -130,17 +129,17 @@ library 'libssl' is built from the source file 'ssl/tls.c'.
DEPEND[ossltest]=../libcrypto.a
INCLUDE[ossltest]=../include
This is the build.info file in 'engines/', telling us that two modules
called 'engines/dasync' and 'engines/ossltest' shall be built, that
dasync's source is 'engines/e_dasync.c' and ossltest's source is
'engines/e_ossltest.c' and that the include directory 'include/' may
This is the `build.info` file in `engines/`, telling us that two modules
called `engines/dasync` and `engines/ossltest` shall be built, that
`dasync`'s source is `engines/e_dasync.c` and `ossltest`'s source is
`engines/e_ossltest.c` and that the include directory `include/` may
be used when building anything that will be part of these modules.
Also, both modules depend on the library 'libcrypto' to function
properly. ossltest is explicitly linked with the static variant of
the library 'libcrypto'. Finally, only dasync is being installed, as
ossltest is only for internal testing.
Also, both modules depend on the library `libcrypto` to function
properly. `ossltest` is explicitly linked with the static variant of
the library `libcrypto`. Finally, only `dasync` is being installed, as
`ossltest` is only for internal testing.
When Configure digests these build.info files, the accumulated
When `Configure` digests these `build.info` files, the accumulated
information comes down to this:
LIBS=libcrypto libssl
@ -170,83 +169,81 @@ information comes down to this:
DEPEND[crypto/buildinf.h]=Makefile
DEPEND[util/mkbuildinf.pl]=util/Foo.pm
A few notes worth mentioning:
LIBS may be used to declare routine libraries only.
`LIBS` may be used to declare routine libraries only.
PROGRAMS may be used to declare programs only.
`PROGRAMS` may be used to declare programs only.
MODULES may be used to declare modules only.
`MODULES` may be used to declare modules only.
The indexes for SOURCE must only be end product files, such as
libraries, programs or modules. The values of SOURCE variables must
The indexes for `SOURCE` must only be end product files, such as
libraries, programs or modules. The values of `SOURCE` variables must
only be source files (possibly generated).
INCLUDE and DEPEND shows a relationship between different files
`INCLUDE` and `DEPEND` shows a relationship between different files
(usually produced files) or between files and directories, such as a
program depending on a library, or between an object file and some
extra source file.
When Configure processes the build.info files, it will take it as
When `Configure` processes the `build.info` files, it will take it as
truth without question, and will therefore perform very few checks.
If the build tree is separate from the source tree, it will assume
that all built files end up in the build directory and that all source
files are to be found in the source tree, if they can be found there.
Configure will assume that source files that can't be found in the
source tree (such as 'crypto/bildinf.h' in the example above) are
`Configure` will assume that source files that can't be found in the
source tree (such as `crypto/bildinf.h` in the example above) are
generated and will be found in the build tree.
The `%unified_info` database
----------------------------
The %unified_info database
--------------------------
The information in all the build.info get digested by Configure and
collected into the %unified_info database, divided into the following
The information in all the `build.info` get digested by `Configure` and
collected into the `%unified_info` database, divided into the following
indexes:
depends => a hash table containing 'file' => [ 'dependency' ... ]
pairs. These are directly inferred from the DEPEND
variables in build.info files.
modules => a list of modules. These are directly inferred from
the MODULES variable in build.info files.
generate => a hash table containing 'file' => [ 'generator' ... ]
pairs. These are directly inferred from the GENERATE
variables in build.info files.
includes => a hash table containing 'file' => [ 'include' ... ]
pairs. These are directly inferred from the INCLUDE
variables in build.info files.
install => a hash table containing 'type' => [ 'file' ... ] pairs.
The types are 'programs', 'libraries', 'modules' and
'scripts', and the array of files list the files of
that type that should be installed.
libraries => a list of libraries. These are directly inferred from
the LIBS variable in build.info files.
programs => a list of programs. These are directly inferred from
the PROGRAMS variable in build.info files.
scripts => a list of scripts. There are directly inferred from
the SCRIPTS variable in build.info files.
sources => a hash table containing 'file' => [ 'sourcefile' ... ]
pairs. These are indirectly inferred from the SOURCE
variables in build.info files. Object files are
mentioned in this hash table, with source files from
SOURCE variables, and AS source files for programs and
libraries.
shared_sources =>
a hash table just like 'sources', but only as source
files (object files) for building shared libraries.
As an example, here is how the build.info files example from the
section above would be digested into a %unified_info table:
As an example, here is how the `build.info` files example from the
section above would be digested into a `%unified_info` table:
our %unified_info = (
"depends" =>
@ -399,20 +396,19 @@ section above would be digested into a %unified_info table:
},
);
As can be seen, everything in %unified_info is fairly simple suggest
As can be seen, everything in `%unified_info` is fairly simple suggest
of information. Still, it tells us that to build all programs, we
must build 'apps/openssl', and to build the latter, we will need to
build all its sources ('apps/openssl.o' in this case) and all the
other things it depends on (such as 'libssl'). All those dependencies
need to be built as well, using the same logic, so to build 'libssl',
we need to build 'ssl/tls.o' as well as 'libcrypto', and to build the
must build `apps/openssl`, and to build the latter, we will need to
build all its sources (`apps/openssl.o` in this case) and all the
other things it depends on (such as `libssl`). All those dependencies
need to be built as well, using the same logic, so to build `libssl`,
we need to build `ssl/tls.o` as well as `libcrypto`, and to build the
latter...
Build-file templates
--------------------
Build-file templates are essentially build-files (such as Makefile on
Build-file templates are essentially build-files (such as `Makefile` on
Unix) with perl code fragments mixed in. Those perl code fragment
will generate all the configuration dependent data, including all the
rules needed to build end product files and intermediary files alike.
@ -461,7 +457,7 @@ etc.
incs => [ "INCL/PATH", ... ]
intent => one of "lib", "dso", "bin" );
'obj' has the intended object file with '.o'
'obj' has the intended object file with `.o`
extension, src2obj() is expected to change it to
something more suitable for the platform.
'srcs' has the list of source files to build the
@ -557,13 +553,13 @@ etc.
resulting script from.
Along with the build-file templates is the driving template
Configurations/common.tmpl, which looks through all the information in
%unified_info and generates all the rulesets to build libraries,
[`Configurations/common.tmpl`](common.tmpl), which looks through all the
information in `%unified_info` and generates all the rulesets to build libraries,
programs and all intermediate files, using the rule generating
functions defined in the build-file template.
As an example with the smaller build.info set we've seen as an
example, producing the rules to build 'libcrypto' would result in the
As an example with the smaller `build.info` set we've seen as an
example, producing the rules to build `libcrypto` would result in the
following calls:
# Note: obj2shlib will only be called if shared libraries are


@ -14,7 +14,6 @@ configuration in diverse ways:
script. See 'Configure helper scripts for more
information.
Configurations of OpenSSL target platforms
==========================================
@ -54,12 +53,12 @@ In each table entry, the following keys are significant:
usually good enough.
cppflags => Default C preprocessor flags [4].
defines => As an alternative, macro definitions may be
given here instead of in `cppflags' [4].
given here instead of in 'cppflags' [4].
If given here, they MUST be as an array of
the string such as "MACRO=value", or just
"MACRO" for definitions without value.
includes => As an alternative, inclusion directories
may be given here instead of in `cppflags'
may be given here instead of in 'cppflags'
[4]. If given here, the MUST be an array
of strings, one directory specification
each.
@ -99,9 +98,9 @@ In each table entry, the following keys are significant:
module_cppflags
module_cflags
module_ldflags => Has the same function as the corresponding
`shared_' attributes, but for building DSOs.
'shared_' attributes, but for building DSOs.
When unset, they get the same values as the
corresponding `shared_' attributes.
corresponding 'shared_' attributes.
ar => The library archive command, the default is
"ar".
@ -237,31 +236,30 @@ In each table entry, the following keys are significant:
RC4_INT RC4 key schedule is made
up of 'unsigned int's;
[1] as part of the target configuration, one can have a key called
'inherit_from' that indicate what other configurations to inherit
data from. These are resolved recursively.
`inherit_from` that indicates what other configurations to inherit
data from. These are resolved recursively.
Inheritance works as a set of default values that can be overridden
by corresponding key values in the inheriting configuration.
Note 1: any configuration table can be used as a template.
Note 2: pure templates have the attribute 'template => 1' and
cannot be used as build targets.
Note 1: any configuration table can be used as a template.
Note 2: pure templates have the attribute `template => 1` and
cannot be used as build targets.
If several configurations are given in the 'inherit_from' array,
the values of same attribute are concatenated with space
separation. With this, it's possible to have several smaller
templates for different configuration aspects that can be combined
into a complete configuration.
If several configurations are given in the `inherit_from` array,
the values of same attribute are concatenated with space
separation. With this, it's possible to have several smaller
templates for different configuration aspects that can be combined
into a complete configuration.
instead of a scalar value or an array, a value can be a code block
of the form 'sub { /* your code here */ }'. This code block will
be called with the list of inherited values for that key as
arguments. In fact, the concatenation of strings is really done
by using 'sub { join(" ",@_) }' on the list of inherited values.
Instead of a scalar value or an array, a value can be a code block
of the form `sub { /* your code here */ }`. This code block will
be called with the list of inherited values for that key as
arguments. In fact, the concatenation of strings is really done
by using `sub { join(" ",@_) }` on the list of inherited values.
An example:
"foo" => {
template => 1,
@ -291,21 +289,21 @@ In each table entry, the following keys are significant:
}
[2] OpenSSL is built with threading capabilities unless the user
specifies 'no-threads'. The value of the key 'thread_scheme' may
be "(unknown)", in which case the user MUST give some compilation
flags to Configure.
specifies `no-threads`. The value of the key `thread_scheme` may
be `(unknown)`, in which case the user MUST give some compilation
flags to `Configure`.
[3] OpenSSL has three types of things to link from object files or
static libraries:
- shared libraries; that would be libcrypto and libssl.
- shared objects (sometimes called dynamic libraries); that would
be the modules.
- applications; those are apps/openssl and all the test apps.
Very roughly speaking, linking is done like this (words in braces
represent the configuration settings documented at the beginning
of this file):
shared libraries:
{ld} $(CFLAGS) {lflags} {shared_ldflag} -o libfoo.so \
@ -319,38 +317,43 @@ In each table entry, the following keys are significant:
{ld} $(CFLAGS) {lflags} -o app \
app1.o utils.o -lssl -lcrypto {ex_libs}
[4] There are variants of these attribute, prefixed with `lib_',
`dso_' or `bin_'. Those variants replace the unprefixed attribute
when building library, DSO or program modules specifically.
[4] There are variants of these attribute, prefixed with `lib_`,
`dso_` or `bin_`. Those variants replace the unprefixed attribute
when building library, DSO or program modules specifically.
Historically, the target configurations came in form of a string with
values separated by colons. This use is deprecated. The string form
looked like this:
"target" => "{cc}:{cflags}:{unistd}:{thread_cflag}:{sys_id}:{lflags}:{bn_ops}:{cpuid_obj}:{bn_obj}:{ec_obj}:{des_obj}:{aes_obj}:{bf_obj}:{md5_obj}:{sha1_obj}:{cast_obj}:{rc4_obj}:{rmd160_obj}:{rc5_obj}:{wp_obj}:{cmll_obj}:{modes_obj}:{padlock_obj}:{perlasm_scheme}:{dso_scheme}:{shared_target}:{shared_cflag}:{shared_ldflag}:{shared_extension}:{ranlib}:{arflags}:{multilib}"
"target" => "{cc}:{cflags}:{unistd}:{thread_cflag}:{sys_id}:{lflags}:
{bn_ops}:{cpuid_obj}:{bn_obj}:{ec_obj}:{des_obj}:{aes_obj}:
{bf_obj}:{md5_obj}:{sha1_obj}:{cast_obj}:{rc4_obj}:
{rmd160_obj}:{rc5_obj}:{wp_obj}:{cmll_obj}:{modes_obj}:
{padlock_obj}:{perlasm_scheme}:{dso_scheme}:{shared_target}:
{shared_cflag}:{shared_ldflag}:{shared_extension}:{ranlib}:
{arflags}:{multilib}"
Build info files
================
The build.info files that are spread over the source tree contain the
The `build.info` files that are spread over the source tree contain the
minimum information needed to build and distribute OpenSSL. It uses a
simple and yet fairly powerful language to determine what needs to be
built, from what sources, and other relationships between files.
For every build.info file, all file references are relative to the
directory of the build.info file for source files, and the
For every `build.info` file, all file references are relative to the
directory of the `build.info` file for source files, and the
corresponding build directory for built files if the build tree
differs from the source tree.
When processed, every line is processed with the perl module
Text::Template, using the delimiters "{-" and "-}". The hashes
%config and %target are passed to the perl fragments, along with
Text::Template, using the delimiters `{-` and `-}`. The hashes
`%config` and `%target` are passed to the perl fragments, along with
$sourcedir and $builddir, which are the locations of the source
directory for the current build.info file and the corresponding build
directory for the current `build.info` file and the corresponding build
directory, all relative to the top of the build tree.
'Configure' only knows inherently about the top build.info file. For
`Configure` only knows inherently about the top `build.info` file. For
any other directory that has one, further directories to look into
must be indicated like this:
@ -393,7 +396,7 @@ This should be rarely used, and care should be taken to make sure it's
only used when supported. For example, native Windows build doesn't
support building static libraries and DLLs at the same time, so using
static libraries on Windows can only be done when configured
'no-shared'.
`no-shared`.
In some cases, it's desirable to include some source files in the
shared form of a library only:
@ -435,7 +438,7 @@ be used in that case:
NOTE: GENERATE lines are limited to one command only per GENERATE.
Finally, you can have some simple conditional use of the build.info
Finally, you can have some simple conditional use of the `build.info`
information, looking like this:
IF[1]
@ -461,37 +464,37 @@ conditions based on something in the passed variables, for example:
SOURCE[libfoo]=...
ENDIF
Build-file programming with the "unified" build system
======================================================
"Build files" are called "Makefile" on Unix-like operating systems,
"descrip.mms" for MMS on VMS, "makefile" for nmake on Windows, etc.
"Build files" are called `Makefile` on Unix-like operating systems,
`descrip.mms` for MMS on VMS, `makefile` for `nmake` on Windows, etc.
To use the "unified" build system, the target configuration needs to
set the three items 'build_scheme', 'build_file' and 'build_command'.
In the rest of this section, we will assume that 'build_scheme' is set
set the three items `build_scheme`, `build_file` and `build_command`.
In the rest of this section, we will assume that `build_scheme` is set
to "unified" (see the configurations documentation above for the
details).
For any name given by 'build_file', the "unified" system expects a
template file in Configurations/ named like the build file, with
".tmpl" appended, or in case of possible ambiguity, a combination of
the second 'build_scheme' list item and the 'build_file' name. For
example, if 'build_file' is set to "Makefile", the template could be
Configurations/Makefile.tmpl or Configurations/unix-Makefile.tmpl.
In case both Configurations/unix-Makefile.tmpl and
Configurations/Makefile.tmpl are present, the former takes
For any name given by `build_file`, the "unified" system expects a
template file in `Configurations/` named like the build file, with
`.tmpl` appended, or in case of possible ambiguity, a combination of
the second `build_scheme` list item and the `build_file` name. For
example, if `build_file` is set to `Makefile`, the template could be
[`Configurations/Makefile.tmpl`](Makefile.tmpl) or
[`Configurations/unix-Makefile.tmpl`](unix-Makefile.tmpl).
In case both [`Configurations/unix-Makefile.tmpl`](Makefile.tmpl) and
[`Configurations/Makefile.tmpl`](Makefile.tmpl) are present, the former takes
precedence.
The build-file template is processed with the perl module
Text::Template, using "{-" and "-}" as delimiters that enclose the
Text::Template, using `{-` and `-}` as delimiters that enclose the
perl code fragments that generate configuration-dependent content.
Those perl fragments have access to all the hash variables from
configdata.pm.
The build-file template is expected to define at least the following
perl functions in a perl code fragment enclosed with "{-" and "-}".
perl functions in a perl code fragment enclosed with `{-` and `-}`.
They are all expected to return a string with the lines they produce.
generatesrc - function that produces build file lines to generate
@ -640,7 +643,6 @@ else, end it like this:
""; # Make sure no lingering values end up in the Makefile
-}
Configure helper scripts
========================
@ -651,10 +653,10 @@ Checker scripts
These scripts are per platform family, to check the integrity of the
tools used for configuration and building. The checker script used is
either {build_platform}-{build_file}-checker.pm or
{build_platform}-checker.pm, where {build_platform} is the second
'build_scheme' list element from the configuration target data, and
{build_file} is 'build_file' from the same target data.
either `{build_platform}-{build_file}-checker.pm` or
`{build_platform}-checker.pm`, where `{build_platform}` is the second
`build_scheme` list element from the configuration target data, and
`{build_file}` is `build_file` from the same target data.
If the check succeeds, the script is expected to end with a non-zero
expression. If the check fails, the script can end with a zero, or


@ -2739,7 +2739,7 @@ sub death_handler {
my @message = ( <<"_____", @_ );
Failure! $build_file wasn't produced.
Please read INSTALL.md and associated NOTES files. You may also have to
Please read INSTALL.md and associated NOTES-* files. You may also have to
look over your available compiler tool chain or change your configuration.
_____


@ -1,10 +1,13 @@
MODIFYING OPENSSL SOURCE
------------------------
This document describes the way to add custom modifications to OpenSSL sources.
MODIFYING OPENSSL SOURCE
========================
This document describes the way to add custom modifications to OpenSSL sources.
If you are adding new public functions to the custom library build, you need to
either add a prototype in one of the existing OpenSSL header files;
or provide a new header file and edit Configurations/unix-Makefile.tmpl to pick up that file.
or provide a new header file and edit
[Configurations/unix-Makefile.tmpl](Configurations/unix-Makefile.tmpl)
to pick up that file.
After that perform the following steps:
@ -13,14 +16,18 @@
make
make test
"make update" ensures that your functions declarations are added to util/libcrypto.num or util/libssl.num
If you plan to submit the changes you made to OpenSSL (see CONTRIBUTING), it's worth running:
`make update` ensures that your functions declarations are added to
`util/libcrypto.num` or `util/libssl.num`.
If you plan to submit the changes you made to OpenSSL
(see [CONTRIBUTING.md](CONTRIBUTING.md)), it's worth running:
make doc-nits
after running "make update" to ensure that documentation has correct format.
after running `make update` to ensure that documentation has correct format.
"make update" also generates files related to OIDs (in the crypto/objects/ folder) and errors.
If a merge error occurs in one of these generated files then the generated files need to be removed
and regenerated using "make update".
To aid in this process the generated files can be committed separately so they can be removed easily.
`make update` also generates files related to OIDs (in the `crypto/objects/`
folder) and errors.
If a merge error occurs in one of these generated files then the
generated files need to be removed and regenerated using `make update`.
To aid in this process the generated files can be committed separately
so they can be removed easily.


@ -48,8 +48,8 @@ Prerequisites
To install OpenSSL, you will need:
* A "make" implementation
* Perl 5 with core modules (please read [NOTES.PERL](NOTES.PERL))
* The Perl module Text::Template (please read [NOTES.PERL](NOTES.PERL))
* Perl 5 with core modules (please read [NOTES-Perl.md](NOTES-Perl.md))
* The Perl module `Text::Template` (please read [NOTES-Perl.md](NOTES-Perl.md))
* an ANSI C compiler
* a development environment in the form of development libraries and C
header files
@ -58,13 +58,13 @@ To install OpenSSL, you will need:
For additional platform specific requirements, solutions to specific
issues and other details, please read one of these:
* [NOTES.UNIX](NOTES.UNIX) - notes for Unix like systems
* [NOTES.VMS](NOTES.VMS) - notes related to OpenVMS
* [NOTES.WIN](NOTES.WIN) - notes related to the Windows platform
* [NOTES.DJGPP](NOTES.DJGPP) - building for DOS with DJGPP
* [NOTES.ANDROID](NOTES.ANDROID) - building for Android platforms (using NDK)
* [NOTES.VALGRIND](NOTES.VALGRIND) - testing with Valgrind
* [NOTES.PERL](NOTES.PERL) - some notes on Perl
* [NOTES-Unix.md](NOTES-Unix.md) - notes for Unix like systems
* [NOTES-VMS.md](NOTES-VMS.md) - notes related to OpenVMS
* [NOTES-Windows.txt](NOTES-Windows.txt) - notes related to the Windows platform
* [NOTES-DJGPP.md](NOTES-DJGPP.md) - building for DOS with DJGPP
* [NOTES-Android.md](NOTES-Android.md) - building for Android platforms (using NDK)
* [NOTES-Valgrind.md](NOTES-Valgrind.md) - testing with Valgrind
* [NOTES-Perl.md](NOTES-Perl.md) - some notes on Perl
Notational conventions
======================
@ -275,7 +275,7 @@ On OpenVMS:
$ perl Configure --prefix=PROGRAM:[INSTALLS] --openssldir=SYS$MANAGER:[OPENSSL]
Note: if you do add options to the configuration command, please make sure
you've read more than just this Quick Start, such as relevant `NOTES.*` files,
you've read more than just this Quick Start, such as relevant `NOTES-*` files,
the options outline below, as configuration options may change the outcome
in otherwise unexpected ways.
@ -285,7 +285,7 @@ Configuration Options
There are several options to `./Configure` to customize the build (note that
for Windows, the defaults for `--prefix` and `--openssldir` depend on what
configuration is used and what Windows implementation OpenSSL is built on.
More notes on this in [NOTES.WIN](NOTES.WIN)):
More notes on this in [NOTES-Windows.txt](NOTES-Windows.txt)):
API Level
---------
@ -1505,7 +1505,7 @@ cases it does not succeed. You will see a message like the following:
$ ./Configure
Operating system: x86-whatever-minix
This system (minix) is not supported. See file INSTALL for details.
This system (minix) is not supported. See file INSTALL.md for details.
Even if the automatic target selection by the `./Configure` script fails,
chances are that you still might find a suitable target in the `Configurations`


@ -1,6 +1,5 @@
NOTES FOR ANDROID PLATFORMS
===========================
Requirement details
-------------------
@ -15,27 +14,27 @@
Configuration
-------------
Android is a cross-compiled target and you can't rely on ./Configure
Android is a cross-compiled target and you can't rely on `./Configure`
to find out the configuration target for you. You have to name your
target explicitly; there are android-arm, android-arm64, android-mips,
android-mip64, android-x86 and android-x86_64 (*MIPS targets are no
target explicitly; there are `android-arm`, `android-arm64`, `android-mips`,
`android-mip64`, `android-x86` and `android-x86_64` (`*MIPS` targets are no
longer supported with NDK R20+).
Do not pass --cross-compile-prefix (as you might be tempted), as it
will be "calculated" automatically based on chosen platform. However,
you still need to know the prefix to extend your PATH, in order to
invoke $(CROSS_COMPILE)clang [*gcc on NDK 19 and lower] and company.
(Configure will fail and give you a hint if you get it wrong.)
invoke `$(CROSS_COMPILE)clang` [`*gcc` on NDK 19 and lower] and company.
(`./Configure` will fail and give you a hint if you get it wrong.)
Apart from PATH adjustment you need to set ANDROID_NDK_ROOT environment
to point at the NDK directory. If you're using a side-by-side NDK the path
will look something like /some/where/android-sdk/ndk/<ver>, and for a
standalone NDK the path will be something like /some/where/android-ndk-<ver>.
Apart from `PATH` adjustment you need to set `ANDROID_NDK_ROOT` environment
to point at the `NDK` directory. If you're using a side-by-side NDK the path
will look something like `/some/where/android-sdk/ndk/<ver>`, and for a
standalone NDK the path will be something like `/some/where/android-ndk-<ver>`.
Both variables are significant at both configuration and compilation times.
The NDK customarily supports multiple Android API levels, e.g. android-14,
android-21, etc. By default latest API level is chosen. If you need to
target an older platform pass the argument -D__ANDROID_API__=N to Configure,
with N being the numerical value of the target platform version. For example,
The NDK customarily supports multiple Android API levels, e.g. `android-14`,
`android-21`, etc. By default latest API level is chosen. If you need to target
an older platform pass the argument `-D__ANDROID_API__=N` to `Configure`,
with `N` being the numerical value of the target platform version. For example,
to compile for Android 10 arm64 with a side-by-side NDK r20.0.5594570
export ANDROID_NDK_ROOT=/home/whoever/Android/android-sdk/ndk/20.0.5594570
@ -52,13 +51,13 @@
./Configure android-arm -D__ANDROID_API__=14
make
Caveat lector! Earlier OpenSSL versions relied on additional CROSS_SYSROOT
variable set to $ANDROID_NDK_ROOT/platforms/android-<api>/arch-<arch> to
Caveat lector! Earlier OpenSSL versions relied on additional `CROSS_SYSROOT`
variable set to `$ANDROID_NDK_ROOT/platforms/android-<api>/arch-<arch>` to
appoint headers-n-libraries' location. It's still recognized in order
to facilitate migration from older projects. However, since API level
appears in CROSS_SYSROOT value, passing -D__ANDROID_API__=N can be in
appears in `CROSS_SYSROOT` value, passing `-D__ANDROID_API__=N` can be in
conflict, and mixing the two is therefore not supported. Migration to
CROSS_SYSROOT-less setup is recommended.
`CROSS_SYSROOT`-less setup is recommended.
One can engage clang by adjusting PATH to cover same NDK's clang. Just
keep in mind that if you miss it, Configure will try to use gcc...
@ -68,9 +67,9 @@
Another option is to create so called "standalone toolchain" tailored
for single specific platform including Android API level, and assign its
location to ANDROID_NDK_ROOT. In such case you have to pass matching
target name to Configure and shouldn't use -D__ANDROID_API__=N. PATH
adjustment becomes simpler, $ANDROID_NDK_ROOT/bin:$PATH suffices.
location to `ANDROID_NDK_ROOT`. In such case you have to pass matching
target name to Configure and shouldn't use `-D__ANDROID_API__=N`. `PATH`
adjustment becomes simpler, `$ANDROID_NDK_ROOT/bin:$PATH` suffices.
Running tests (on Linux)
------------------------


@ -1,7 +1,5 @@
INSTALLATION ON THE DOS PLATFORM WITH DJGPP
-------------------------------------------
INSTALLATION ON THE DOS PLATFORM WITH DJGPP
===========================================
OpenSSL has been ported to DJGPP, a Unix look-alike 32-bit run-time
environment for 16-bit DOS, but only with long filename support.
@ -11,28 +9,28 @@
You should have a full DJGPP environment installed, including the
latest versions of DJGPP, GCC, BINUTILS, BASH, etc. This package
requires that PERL and the PERL module Text::Template also be
installed (see NOTES.PERL).
requires that PERL and the PERL module `Text::Template` also be
installed (see [NOTES-Perl.md](NOTES-Perl.md)).
All of these can be obtained from the usual DJGPP mirror sites or
directly at "http://www.delorie.com/pub/djgpp". For help on which
directly at <http://www.delorie.com/pub/djgpp>. For help on which
files to download, see the DJGPP "ZIP PICKER" page at
"http://www.delorie.com/djgpp/zip-picker.html". You also need to have
<http://www.delorie.com/djgpp/zip-picker.html>. You also need to have
the WATT-32 networking package installed before you try to compile
OpenSSL. This can be obtained from "http://www.watt-32.net/".
OpenSSL. This can be obtained from <http://www.watt-32.net/>.
The Makefile assumes that the WATT-32 code is in the directory
specified by the environment variable WATT_ROOT. If you have watt-32
in directory "watt32" under your main DJGPP directory, specify
WATT_ROOT="/dev/env/DJDIR/watt32".
in directory `watt32` under your main DJGPP directory, specify
`WATT_ROOT="/dev/env/DJDIR/watt32"`.
To compile OpenSSL, start your BASH shell, then configure for DJGPP by
running "./Configure" with appropriate arguments:
running `./Configure` with appropriate arguments:
./Configure no-threads --prefix=/dev/env/DJDIR DJGPP
And finally fire up "make". You may run out of DPMI selectors when
And finally fire up `make`. You may run out of DPMI selectors when
running in a DOS box under Windows. If so, just close the BASH
shell, go back to Windows, and restart BASH. Then run "make" again.
shell, go back to Windows, and restart BASH. Then run `make` again.
RUN-TIME CAVEAT LECTOR
--------------
@ -41,8 +39,8 @@
"Cryptographic software needs a source of unpredictable data to work
correctly. Many open source operating systems provide a "randomness
device" (/dev/urandom or /dev/random) that serves this purpose."
device" (`/dev/urandom` or `/dev/random`) that serves this purpose."
As of version 0.9.7f DJGPP port checks upon /dev/urandom$ for a 3rd
party "randomness" DOS driver. One such driver, NOISE.SYS, can be
obtained from "http://www.rahul.net/dkaufman/index.html".
As of version 0.9.7f the DJGPP port checks `/dev/urandom$` for a third-party
"randomness" DOS driver. One such driver, `NOISE.SYS`, can be
obtained from <http://www.rahul.net/dkaufman/index.html>.

View File

@ -1,5 +1,5 @@
TOC
===
TOC
===
- Notes on Perl
- Notes on Perl on Windows
@ -18,10 +18,10 @@
installed properly. We do not claim to know them all, but experience
has told us the following:
- on Linux distributions based on Debian, the package 'perl' will
- on Linux distributions based on Debian, the package `perl` will
install the core Perl modules as well, so you will be fine.
- on Linux distributions based on RPMs, you will need to install
'perl-core' rather than just 'perl'.
`perl-core` rather than just `perl`.
You MUST have at least Perl version 5.10.0 installed. This minimum
requirement is due to our use of regexp backslash sequence \R among
@ -31,23 +31,23 @@
------------------------
There are a number of build targets that can be viewed as "Windows".
Indeed, there are VC-* configs targeting VisualStudio C, as well as
Indeed, there are `VC-*` configs targeting VisualStudio C, as well as
MinGW and Cygwin. The key recommendation is to use "matching" Perl,
one that matches build environment. For example, if you will build
on Cygwin be sure to use the Cygwin package manager to install Perl.
For MSYS builds use the MSYS provided Perl.
For VC-* builds we recommend Strawberry Perl, from http://strawberryperl.com.
An alternative is ActiveState Perl, from http://www.activestate.com/ActivePerl
For VC-* builds we recommend Strawberry Perl, from <http://strawberryperl.com>.
An alternative is ActiveState Perl, from <http://www.activestate.com/ActivePerl>
for which you may need to explicitly select the Perl module Win32/Console.pm
available via https://platform.activestate.com/ActiveState.
available via <https://platform.activestate.com/ActiveState>.
Notes on Perl on VMS
--------------------
You will need to install Perl separately. One way to do so is to
download the source from http://perl.org/, unpacking it, reading
README.vms and follow the instructions. Another way is to download a
.PCSI file from http://www.vmsperl.com/ and install it using the
download the source from <http://perl.org/>, unpack it, read
`README-VMS.md` and follow the instructions. Another way is to download a
`.PCSI` file from <http://www.vmsperl.com/> and install it using the
POLYCENTER install tool.
Notes on Perl modules we use
@ -57,18 +57,22 @@
ourselves to core Perl modules to keep the requirements down. There
are just a few exceptions:
Test::More We require the minimum version to be 0.96, which
appeared in Perl 5.13.4, because that version was
the first to have all the features we're using.
This module is required for testing only! If you
don't plan on running the tests, you don't need to
bother with this one.
* `Test::More`
Text::Template This module is not part of the core Perl modules.
As a matter of fact, the core Perl modules do not
include any templating module to date.
This module is absolutely needed, configuration
depends on it.
We require the minimum version to be 0.96, which
appeared in Perl 5.13.4, because that version was
the first to have all the features we're using.
This module is required for testing only!
If you don't plan on running the tests,
you don't need to bother with this one.
* `Text::Template`
This module is not part of the core Perl modules.
As a matter of fact, the core Perl modules do not
include any templating module to date.
This module is absolutely needed,
configuration depends on it.
To avoid unnecessary initial hurdles, we have bundled a copy of the
following modules in our source. They will work as fallbacks if
@ -80,7 +84,7 @@
---------------------------------
There are a number of ways to install a perl module. In all
descriptions below, Text::Template will serve as an example.
descriptions below, `Text::Template` will serve as an example.
1. for Linux users, the easiest is to install with the use of your
favorite package manager. Usually, all you need to do is search

View File

@ -1,9 +1,8 @@
NOTES FOR UNIX-LIKE PLATFORMS
=============================
NOTES FOR UNIX LIKE PLATFORMS
=============================
For Unix/POSIX runtime systems on Windows, please see NOTES.WIN.
For Unix/POSIX runtime systems on Windows,
please see [NOTES-Windows.txt](NOTES-Windows.txt).
OpenSSL uses the compiler to link programs and shared libraries
---------------------------------------------------------------
@ -13,21 +12,20 @@
objects. Because of this, any linking option that's given to the
configuration scripts MUST be in a form that the compiler can accept.
This varies between systems, where some have compilers that accept
linker flags directly, while others take them in '-Wl,' form. You need
linker flags directly, while others take them in `-Wl,` form. You need
to read your compiler documentation to figure out what is acceptable,
and ld(1) to figure out what linker options are available.
and `ld(1)` to figure out what linker options are available.
Shared libraries and installation in non-default locations
----------------------------------------------------------
Every Unix system has its own set of default locations for shared
libraries, such as /lib, /usr/lib or possibly /usr/local/lib. If
libraries, such as `/lib`, `/usr/lib` or possibly `/usr/local/lib`. If
libraries are installed in non-default locations, dynamically linked
binaries will not find them and therefore fail to run, unless they get
a bit of help from a defined runtime shared library search path.
For OpenSSL's application (the 'openssl' command), our configuration
For OpenSSL's application (the `openssl` command), our configuration
scripts do NOT generally set the runtime shared library search path for
you. It's therefore advisable to set it explicitly when configuring,
unless the libraries are to be installed in directories that you know
@ -42,15 +40,15 @@
Possible options to set the runtime shared library search path include
the following:
-Wl,-rpath,/whatever/path # Linux, *BSD, etc.
-R /whatever/path # Solaris
-Wl,-R,/whatever/path # AIX (-bsvr4 is passed internally)
-Wl,+b,/whatever/path # HP-UX
-rpath /whatever/path # Tru64, IRIX
-Wl,-rpath,/whatever/path # Linux, *BSD, etc.
-R /whatever/path # Solaris
-Wl,-R,/whatever/path # AIX (-bsvr4 is passed internally)
-Wl,+b,/whatever/path # HP-UX
-rpath /whatever/path # Tru64, IRIX
OpenSSL's configuration scripts recognise all these options and pass
them to the Makefile that they build. (In fact, all arguments starting
with '-Wl,' are recognised as linker options.)
with `-Wl,` are recognised as linker options.)
Please do not use verbatim directories in your runtime shared library
search path! Some OpenSSL config targets add an extra directory level
@ -63,28 +61,27 @@
'-Wl,-rpath,$(LIBRPATH)'
On modern ELF based systems, there are two runtime search paths tags to
consider, DT_RPATH and DT_RUNPATH. Shared objects are searched for in
consider, `DT_RPATH` and `DT_RUNPATH`. Shared objects are searched for in
this order:
1. Using directories specified in DT_RPATH, unless DT_RUNPATH is
also set.
2. Using the environment variable LD_LIBRARY_PATH
3. Using directories specified in DT_RUNPATH.
4. Using system shared object caches and default directories.
1. Using directories specified in DT_RPATH, unless DT_RUNPATH is also set.
2. Using the environment variable LD_LIBRARY_PATH
3. Using directories specified in DT_RUNPATH.
4. Using system shared object caches and default directories.
This means that the values in the environment variable LD_LIBRARY_PATH
won't matter if the library is found in the paths given by DT_RPATH
(and DT_RUNPATH isn't set).
This means that the values in the environment variable `LD_LIBRARY_PATH`
won't matter if the library is found in the paths given by `DT_RPATH`
(and `DT_RUNPATH` isn't set).
Exactly which of DT_RPATH or DT_RUNPATH is set by default appears to
Exactly which of `DT_RPATH` or `DT_RUNPATH` is set by default appears to
depend on the system. For example, according to documentation,
DT_RPATH appears to be deprecated on Solaris in favor of DT_RUNPATH,
while on Debian GNU/Linux, either can be set, and DT_RPATH is the
`DT_RPATH` appears to be deprecated on Solaris in favor of `DT_RUNPATH`,
while on Debian GNU/Linux, either can be set, and `DT_RPATH` is the
default at the time of writing.
How to choose which runtime search path tag is to be set depends on
your system, please refer to ld(1) for the exact information on your
system. As an example, the way to ensure the DT_RUNPATH is set on
system. As an example, the way to ensure the `DT_RUNPATH` is set on
Debian GNU/Linux systems rather than DT_RPATH is to tell the linker to
set new dtags, like this:
@ -93,7 +90,7 @@
It might be worth noting that some/most ELF systems implement support
for runtime search path relative to the directory containing current
executable, by interpreting $ORIGIN along with some other internal
executable, by interpreting `$ORIGIN` along with some other internal
variables. Consult your system documentation.
Linking your application
@ -104,7 +101,7 @@
The OpenSSL config options mentioned above might or might not have bearing
on linking of the target application. "Might" means that under some
circumstances it would be sufficient to link with OpenSSL shared library
"naturally", i.e. with -L/whatever/path -lssl -lcrypto. But there are
"naturally", i.e. with `-L/whatever/path -lssl -lcrypto`. But there are
also cases when you'd have to explicitly specify runtime search path
when linking your application. Consult your system documentation and use
above section as inspiration...
@ -114,4 +111,4 @@
for shared libraries first and tend to remain "blind" to static OpenSSL
libraries. Referring to system documentation would suffice, if not for
a corner case. On AIX static libraries (in shared build) are named
differently, add _a suffix to link with them, e.g. -lcrypto_a.
differently; add the `_a` suffix to link with them, e.g. `-lcrypto_a`.

View File

@ -1,17 +1,15 @@
NOTES FOR THE OPENVMS PLATFORM
==============================
NOTES FOR THE OPENVMS PLATFORM
==============================
Requirement details
-------------------
In addition to the requirements and instructions listed in INSTALL,
this are required as well:
In addition to the requirements and instructions listed
in [INSTALL.md](INSTALL.md), the following are required as well:
* At least ODS-5 disk organization for source and build.
Installation can be done on any existing disk organization.
About ANSI C compiler
---------------------
@ -22,20 +20,19 @@
version 7.1 or later. Compiling with a different ANSI C compiler may
require some work.
Please avoid using C RTL feature logical names DECC$* when building
Please avoid using C RTL feature logical names `DECC$*` when building
and testing OpenSSL. Most of all, they can be disruptive when
running the tests, as they affect the Perl interpreter.
About ODS-5 directory names and Perl
------------------------------------
It seems that the perl function canonpath() in the File::Spec module
It seems that the perl function canonpath() in the `File::Spec` module
doesn't treat file specifications where the last directory name
contains periods very well. Unfortunately, some versions of VMS tar
will keep the periods in the OpenSSL source directory instead of
converting them to underscore, thereby leaving your source in
something like [.openssl-1^.1^.0]. This will lead to issues when
something like `[.openssl-1^.1^.0]`. This will lead to issues when
configuring and building OpenSSL.
We have no replacement for Perl's canonpath(), so the best workaround
@ -44,7 +41,6 @@
$ rename openssl-1^.1^.0.DIR openssl-1_1_0.DIR
About MMS and DCL
-----------------
@ -55,7 +51,6 @@
yourself up a few logical names for the directory trees you're going
to use.
About debugging
---------------
@ -68,7 +63,7 @@
directly for debugging. Do not try to use them from a script, such
as running the test suite.
*The following is not available on Alpha*
### The following is not available on Alpha
As a compromise, we're turning off the flag that makes the debugger
start automatically. If there is a program that you need to debug,
@ -81,7 +76,6 @@
$ set image /flag=nocall_debug [.test]evp_test.exe
Checking the distribution
-------------------------
@ -92,16 +86,16 @@
The easiest way to check if everything got through as it should is to
check for one of the following files:
[.crypto]opensslconf^.h.in
[.crypto]opensslconf^.h.in
The best way to get a correct distribution is to download the gzipped
tar file from ftp://ftp.openssl.org/source/, use GZIP -d to uncompress
it and VMSTAR to unpack the resulting tar file.
tar file from ftp://ftp.openssl.org/source/, use `GZIP -d` to uncompress
it and `VMSTAR` to unpack the resulting tar file.
Gzip and VMSTAR are available here:
http://antinode.info/dec/index.html#Software
<http://antinode.info/dec/index.html#Software>
Should you need it, you can find UnZip for VMS here:
http://www.info-zip.org/UnZip.html
<http://www.info-zip.org/UnZip.html>

View File

@ -1,4 +1,3 @@
NOTES FOR VALGRIND
==================
@ -14,11 +13,11 @@ Requirements
------------
1. Platform supported by Valgrind
See: http://valgrind.org/info/platforms.html
See <http://valgrind.org/info/platforms.html>
2. Valgrind installed on the platform
See: http://valgrind.org/downloads/current.html
See <http://valgrind.org/downloads/current.html>
3. OpensSSL compiled
See: [INSTALL.md](INSTALL.md)
See [INSTALL.md](INSTALL.md)
Running Tests
-------------
@ -28,18 +27,19 @@ Test behavior can be modified by adjusting environment variables.
`EXE_SHELL`
This variable is used to specify the shell used to execute OpenSSL test
programs. The default wrapper (util/wrap.pl) initializes the environment
programs. The default wrapper (`util/wrap.pl`) initializes the environment
to allow programs to find shared libraries. The variable can be modified
to specify a different executable environment.
EXE_SHELL="`/bin/pwd`/util/wrap.pl valgrind --error-exitcode=1 --leak-check=full -q"
EXE_SHELL=\
"`/bin/pwd`/util/wrap.pl valgrind --error-exitcode=1 --leak-check=full -q"
This will start up Valgrind with the default checker (memcheck).
The --error-exitcode=1 option specifies that Valgrind should exit with an
This will start up Valgrind with the default checker (`memcheck`).
The `--error-exitcode=1` option specifies that Valgrind should exit with an
error code of 1 when memory leaks occur.
The --leak-check=full option specifies extensive memory checking.
The -q option prints only error messages.
Additional Valgrind options may be added to the EXE_SHELL variable.
The `--leak-check=full` option specifies extensive memory checking.
The `-q` option prints only error messages.
Additional Valgrind options may be added to the `EXE_SHELL` variable.
`OPENSSL_ia32cap`
@ -55,16 +55,18 @@ supported. Setting the following disables instructions beyond AVX2:
This variable may need to be set to something different based on the
processor and Valgrind version you are running tests on. More information
may be found in [docs/man3/OPENSSL_ia32cap.pod](docs/man3/OPENSSL_ia32cap.pod).
may be found in [doc/man3/OPENSSL_ia32cap.pod](doc/man3/OPENSSL_ia32cap.pod).
Additional variables (such as `VERBOSE` and `TESTS`) are described in the
file [test/README.md](test/README.md).
Example command line:
$ make test EXE_SHELL="`/bin/pwd`/util/wrap.pl valgrind --error-exitcode=1 --leak-check=full -q" OPENSSL_ia32cap=":0"
$ make test EXE_SHELL="`/bin/pwd`/util/wrap.pl valgrind --error-exitcode=1 \
--leak-check=full -q" OPENSSL_ia32cap=":0"
If an error occurs, you can then run the specific test via the `TESTS`
variable with the VERBOSE option to gather additional information.
If an error occurs, you can then run the specific test via the `TESTS` variable
with the `VERBOSE` or `VF` or `VFP` options to gather additional information.
$ make test VERBOSE=1 TESTS=test_test EXE_SHELL="`/bin/pwd`/util/wrap.pl valgrind --error-exitcode=1 --leak-check=full -q" OPENSSL_ia32cap=":0"
$ make test VERBOSE=1 TESTS=test_test EXE_SHELL="`/bin/pwd`/util/wrap.pl \
valgrind --error-exitcode=1 --leak-check=full -q" OPENSSL_ia32cap=":0"

View File

@ -1,5 +1,5 @@
ENGINE
======
ENGINES
=======
With OpenSSL 0.9.6, a new component was added to support alternative
cryptography implementations, most commonly for interfacing with external
@ -13,9 +13,9 @@
There are currently built-in ENGINE implementations for the following
crypto devices:
o Microsoft CryptoAPI
o VIA Padlock
o nCipher CHIL
* Microsoft CryptoAPI
* VIA Padlock
* nCipher CHIL
In addition, dynamic binding to external ENGINE implementations is now
provided by a special ENGINE called "dynamic". See the "DYNAMIC ENGINE"
@ -23,16 +23,22 @@
At this stage, a number of things are still needed and are being worked on:
1 Integration of EVP support.
2 Configuration support.
3 Documentation!
1. Integration of EVP support.
2. Configuration support.
3. Documentation!
1 With respect to EVP, this relates to support for ciphers and digests in
Integration of EVP support
--------------------------
With respect to EVP, this relates to support for ciphers and digests in
the ENGINE model so that alternative implementations of existing
algorithms/modes (or previously unimplemented ones) can be provided by
ENGINE implementations.
2 Configuration support currently exists in the ENGINE API itself, in the
Configuration support
---------------------
Configuration support currently exists in the ENGINE API itself, in the
form of "control commands". These allow an application to expose to the
user/admin the set of commands and parameter types a given ENGINE
implementation supports, and for an application to directly feed string
@ -47,10 +53,14 @@
Presently however, applications must use the ENGINE API itself to provide
such functionality. To see first hand the types of commands available
with the various compiled-in ENGINEs (see further down for dynamic
ENGINEs), use the "engine" openssl utility with full verbosity, ie;
ENGINEs), use the "engine" openssl utility with full verbosity, i.e.:
openssl engine -vvvv
3 Documentation? Volunteers welcome! The source code is reasonably well
Documentation
-------------
Documentation? Volunteers welcome! The source code is reasonably well
self-documenting, but some summaries and usage instructions are needed -
moreover, they are needed in the same POD format the existing OpenSSL
documentation is provided in. Any complete or incomplete contributions
@ -73,12 +83,12 @@
ENGINE API itself (ie. not necessarily specific to a particular ENGINE
implementation) then you should mail complete details to the relevant
OpenSSL mailing list. For a definition of "complete details", refer to
the OpenSSL "README" file. As for which list to send it to;
the OpenSSL "README" file. As for which list to send it to:
openssl-users: if you are *using* the ENGINE abstraction, either in an
pre-compiled application or in your own application code.
* openssl-users: if you are *using* the ENGINE abstraction, either in a
pre-compiled application or in your own application code.
openssl-dev: if you are discussing problems with OpenSSL source code.
* openssl-dev: if you are discussing problems with OpenSSL source code.
USAGE
=====
@ -132,150 +142,161 @@
How does "dynamic" work?
------------------------
The dynamic ENGINE has a special flag in its implementation such that
every time application code asks for the 'dynamic' ENGINE, it in fact
gets its own copy of it. As such, multi-threaded code (or code that
multiplexes multiple uses of 'dynamic' in a single application in any
way at all) does not get confused by 'dynamic' being used to do many
independent things. Other ENGINEs typically don't do this so there is
only ever 1 ENGINE structure of its type (and reference counts are used
to keep order). The dynamic ENGINE itself provides absolutely no
cryptographic functionality, and any attempt to "initialise" the ENGINE
automatically fails. All it does provide are a few "control commands"
that can be used to control how it will load an external ENGINE
implementation from a shared-library. To see these control commands,
use the command-line;
openssl engine -vvvv dynamic
The dynamic ENGINE has a special flag in its implementation such that
every time application code asks for the 'dynamic' ENGINE, it in fact
gets its own copy of it. As such, multi-threaded code (or code that
multiplexes multiple uses of 'dynamic' in a single application in any
way at all) does not get confused by 'dynamic' being used to do many
independent things. Other ENGINEs typically don't do this so there is
only ever 1 ENGINE structure of its type (and reference counts are used
to keep order). The dynamic ENGINE itself provides absolutely no
cryptographic functionality, and any attempt to "initialise" the ENGINE
automatically fails. All it does provide are a few "control commands"
that can be used to control how it will load an external ENGINE
implementation from a shared-library. To see these control commands,
use the command-line;
The "SO_PATH" control command should be used to identify the
shared-library that contains the ENGINE implementation, and "NO_VCHECK"
might possibly be useful if there is a minor version conflict and you
(or a vendor helpdesk) is convinced you can safely ignore it.
"ID" is probably only needed if a shared-library implements
multiple ENGINEs, but if you know the engine id you expect to be using,
it doesn't hurt to specify it (and this provides a sanity check if
nothing else). "LIST_ADD" is only required if you actually wish the
loaded ENGINE to be discoverable by application code later on using the
ENGINE's "id". For most applications, this isn't necessary - but some
application authors may have nifty reasons for using it. The "LOAD"
command is the only one that takes no parameters and is the command
that uses the settings from any previous commands to actually *load*
the shared-library ENGINE implementation. If this command succeeds, the
(copy of the) 'dynamic' ENGINE will magically morph into the ENGINE
that has been loaded from the shared-library. As such, any control
commands supported by the loaded ENGINE could then be executed as per
normal. Eg. if ENGINE "foo" is implemented in the shared-library
"libfoo.so" and it supports some special control command "CMD_FOO", the
following code would load and use it (NB: obviously this code has no
error checking);
openssl engine -vvvv dynamic
ENGINE *e = ENGINE_by_id("dynamic");
ENGINE_ctrl_cmd_string(e, "SO_PATH", "/lib/libfoo.so", 0);
ENGINE_ctrl_cmd_string(e, "ID", "foo", 0);
ENGINE_ctrl_cmd_string(e, "LOAD", NULL, 0);
ENGINE_ctrl_cmd_string(e, "CMD_FOO", "some input data", 0);
The "SO_PATH" control command should be used to identify the
shared-library that contains the ENGINE implementation, and "NO_VCHECK"
might possibly be useful if there is a minor version conflict and you
(or a vendor helpdesk) is convinced you can safely ignore it.
"ID" is probably only needed if a shared-library implements
multiple ENGINEs, but if you know the engine id you expect to be using,
it doesn't hurt to specify it (and this provides a sanity check if
nothing else). "LIST_ADD" is only required if you actually wish the
loaded ENGINE to be discoverable by application code later on using the
ENGINE's "id". For most applications, this isn't necessary - but some
application authors may have nifty reasons for using it. The "LOAD"
command is the only one that takes no parameters and is the command
that uses the settings from any previous commands to actually *load*
the shared-library ENGINE implementation. If this command succeeds, the
(copy of the) 'dynamic' ENGINE will magically morph into the ENGINE
that has been loaded from the shared-library. As such, any control
commands supported by the loaded ENGINE could then be executed as per
normal. Eg. if ENGINE "foo" is implemented in the shared-library
"libfoo.so" and it supports some special control command "CMD_FOO", the
following code would load and use it (NB: obviously this code has no
error checking);
For testing, the "openssl engine" utility can be useful for this sort
of thing. For example the above code excerpt would achieve much the
same result as;
ENGINE *e = ENGINE_by_id("dynamic");
ENGINE_ctrl_cmd_string(e, "SO_PATH", "/lib/libfoo.so", 0);
ENGINE_ctrl_cmd_string(e, "ID", "foo", 0);
ENGINE_ctrl_cmd_string(e, "LOAD", NULL, 0);
ENGINE_ctrl_cmd_string(e, "CMD_FOO", "some input data", 0);
openssl engine dynamic \
-pre SO_PATH:/lib/libfoo.so \
-pre ID:foo \
-pre LOAD \
-pre "CMD_FOO:some input data"
For testing, the "openssl engine" utility can be useful for this sort
of thing. For example the above code excerpt would achieve much the
same result as;
Or to simply see the list of commands supported by the "foo" ENGINE;
openssl engine dynamic \
-pre SO_PATH:/lib/libfoo.so \
-pre ID:foo \
-pre LOAD \
-pre "CMD_FOO:some input data"
openssl engine -vvvv dynamic \
-pre SO_PATH:/lib/libfoo.so \
-pre ID:foo \
-pre LOAD
Or to simply see the list of commands supported by the "foo" ENGINE;
Applications that support the ENGINE API and more specifically, the
"control commands" mechanism, will provide some way for you to pass
such commands through to ENGINEs. As such, you would select "dynamic"
as the ENGINE to use, and the parameters/commands you pass would
control the *actual* ENGINE used. Each command is actually a name-value
pair and the value can sometimes be omitted (eg. the "LOAD" command).
Whilst the syntax demonstrated in "openssl engine" uses a colon to
separate the command name from the value, applications may provide
their own syntax for making that separation (eg. a win32 registry
key-value pair may be used by some applications). The reason for the
"-pre" syntax in the "openssl engine" utility is that some commands
might be issued to an ENGINE *after* it has been initialised for use.
Eg. if an ENGINE implementation requires a smart-card to be inserted
during initialisation (or a PIN to be typed, or whatever), there may be
a control command you can issue afterwards to "forget" the smart-card
so that additional initialisation is no longer possible. In
applications such as web-servers, where potentially volatile code may
run on the same host system, this may provide some arguable security
value. In such a case, the command would be passed to the ENGINE after
it has been initialised for use, and so the "-post" switch would be
used instead. Applications may provide a different syntax for
supporting this distinction, and some may simply not provide it at all
("-pre" is almost always what you're after, in reality).
openssl engine -vvvv dynamic \
-pre SO_PATH:/lib/libfoo.so \
-pre ID:foo \
-pre LOAD
Applications that support the ENGINE API and more specifically, the
"control commands" mechanism, will provide some way for you to pass
such commands through to ENGINEs. As such, you would select "dynamic"
as the ENGINE to use, and the parameters/commands you pass would
control the *actual* ENGINE used. Each command is actually a name-value
pair and the value can sometimes be omitted (eg. the "LOAD" command).
Whilst the syntax demonstrated in "openssl engine" uses a colon to
separate the command name from the value, applications may provide
their own syntax for making that separation (eg. a win32 registry
key-value pair may be used by some applications). The reason for the
"-pre" syntax in the "openssl engine" utility is that some commands
might be issued to an ENGINE *after* it has been initialised for use.
Eg. if an ENGINE implementation requires a smart-card to be inserted
during initialisation (or a PIN to be typed, or whatever), there may be
a control command you can issue afterwards to "forget" the smart-card
so that additional initialisation is no longer possible. In
applications such as web-servers, where potentially volatile code may
run on the same host system, this may provide some arguable security
value. In such a case, the command would be passed to the ENGINE after
it has been initialised for use, and so the "-post" switch would be
used instead. Applications may provide a different syntax for
supporting this distinction, and some may simply not provide it at all
("-pre" is almost always what you're after, in reality).
How do I build a "dynamic" ENGINE?
----------------------------------
This question is trickier - currently OpenSSL bundles various ENGINE
implementations that are statically built in, and any application that
calls the "ENGINE_load_builtin_engines()" function will automatically
have all such ENGINEs available (and occupying memory). Applications
that don't call that function have no ENGINEs available like that and
would have to use "dynamic" to load any such ENGINE - but on the other
hand such applications would only have the memory footprint of any
ENGINEs explicitly loaded using user/admin provided control commands.
The main advantage of not statically linking ENGINEs and only using
"dynamic" for hardware support is that any installation using no
"external" ENGINE suffers no unnecessary memory footprint from unused
ENGINEs. Likewise, installations that do require an ENGINE incur the
overheads from only *that* ENGINE once it has been loaded.
Sounds good? Maybe, but currently building an ENGINE implementation as
a shared-library that can be loaded by "dynamic" isn't automated in
OpenSSL's build process. It can be done manually quite easily however.
Such a shared-library can either be built with any OpenSSL code it
needs statically linked in, or it can link dynamically against OpenSSL
if OpenSSL itself is built as a shared library. The instructions are
the same in each case, but in the former (statically linked any
dependencies on OpenSSL) you must ensure OpenSSL is built with
position-independent code ("PIC"). The default OpenSSL compilation may
already specify the relevant flags to do this, but you should consult
with your compiler documentation if you are in any doubt.
This question is trickier - currently OpenSSL bundles various ENGINE
implementations that are statically built in, and any application that
calls the "ENGINE_load_builtin_engines()" function will automatically
have all such ENGINEs available (and occupying memory). Applications
that don't call that function have no ENGINEs available like that and
would have to use "dynamic" to load any such ENGINE - but on the other
hand such applications would only have the memory footprint of any
ENGINEs explicitly loaded using user/admin provided control commands.
The main advantage of not statically linking ENGINEs and only using
"dynamic" for hardware support is that any installation using no
"external" ENGINE suffers no unnecessary memory footprint from unused
ENGINEs. Likewise, installations that do require an ENGINE incur the
overheads from only *that* ENGINE once it has been loaded.
This example will show building the "atalla" ENGINE in the
crypto/engine/ directory as a shared-library for use via the "dynamic"
ENGINE.
1) "cd" to the crypto/engine/ directory of a pre-compiled OpenSSL
source tree.
2) Recompile at least one source file so you can see all the compiler
flags (and syntax) being used to build normally. Eg;
touch hw_atalla.c ; make
will rebuild "hw_atalla.o" using all such flags.
3) Manually enter the same compilation line to compile the
"hw_atalla.c" file but with the following two changes;
(a) add "-DENGINE_DYNAMIC_SUPPORT" to the command line switches,
(b) change the output file from "hw_atalla.o" to something new,
eg. "tmp_atalla.o"
4) Link "tmp_atalla.o" into a shared-library using the top-level
OpenSSL libraries to resolve any dependencies. The syntax for doing
this depends heavily on your system/compiler and is a nightmare
known well to anyone who has worked with shared-library portability
before. 'gcc' on Linux, for example, would use the following syntax;
gcc -shared -o dyn_atalla.so tmp_atalla.o -L../.. -lcrypto
5) Test your shared library using "openssl engine" as explained in the
previous section. Eg. from the top-level directory, you might try;
apps/openssl engine -vvvv dynamic \
-pre SO_PATH:./crypto/engine/dyn_atalla.so -pre LOAD
If the shared-library loads successfully, you will see both "-pre"
commands marked as "SUCCESS" and the list of control commands
displayed (because of "-vvvv") will be the control commands for the
*atalla* ENGINE (ie. *not* the 'dynamic' ENGINE). You can also add
the "-t" switch to the utility if you want it to try and initialise
the atalla ENGINE for use to test any possible hardware/driver
issues.
Sounds good? Maybe, but currently building an ENGINE implementation as
a shared-library that can be loaded by "dynamic" isn't automated in
OpenSSL's build process. It can be done manually quite easily however.
Such a shared-library can either be built with any OpenSSL code it
needs statically linked in, or it can link dynamically against OpenSSL
if OpenSSL itself is built as a shared library. The instructions are
the same in each case, but in the former (statically linked any
dependencies on OpenSSL) you must ensure OpenSSL is built with
position-independent code ("PIC"). The default OpenSSL compilation may
already specify the relevant flags to do this, but you should consult
with your compiler documentation if you are in any doubt.
This example will show building the "atalla" ENGINE in the
crypto/engine/ directory as a shared-library for use via the "dynamic"
ENGINE.
1. "cd" to the crypto/engine/ directory of a pre-compiled OpenSSL
source tree.
2. Recompile at least one source file so you can see all the compiler
flags (and syntax) being used to build normally. Eg;
touch hw_atalla.c ; make
will rebuild "hw_atalla.o" using all such flags.
3. Manually enter the same compilation line to compile the
"hw_atalla.c" file but with the following two changes;
* add "-DENGINE_DYNAMIC_SUPPORT" to the command line switches,
* change the output file from "hw_atalla.o" to something new,
eg. "tmp_atalla.o"
4. Link "tmp_atalla.o" into a shared-library using the top-level
OpenSSL libraries to resolve any dependencies. The syntax for doing
this depends heavily on your system/compiler and is a nightmare
known well to anyone who has worked with shared-library portability
before. 'gcc' on Linux, for example, would use the following syntax;
gcc -shared -o dyn_atalla.so tmp_atalla.o -L../.. -lcrypto
5. Test your shared library using "openssl engine" as explained in the
previous section. Eg. from the top-level directory, you might try
apps/openssl engine -vvvv dynamic \
-pre SO_PATH:./crypto/engine/dyn_atalla.so -pre LOAD
If the shared-library loads successfully, you will see both "-pre"
commands marked as "SUCCESS" and the list of control commands
displayed (because of "-vvvv") will be the control commands for the
*atalla* ENGINE (ie. *not* the 'dynamic' ENGINE). You can also add
the "-t" switch to the utility if you want it to try and initialise
the atalla ENGINE for use to test any possible hardware/driver issues.
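
For reference, an ENGINE written from scratch as a dynamically loadable
shared-library normally exports the dynamic binding symbols via macros from
`<openssl/engine.h>`. The sketch below is a minimal, hypothetical "foo" engine
(it registers no actual crypto) and is separate from the atalla example above:

    #include <string.h>
    #include <openssl/engine.h>

    static const char *engine_foo_id = "foo";
    static const char *engine_foo_name = "minimal example engine";

    /* Called by the "dynamic" ENGINE once this shared-library is loaded. */
    static int bind_foo(ENGINE *e, const char *id)
    {
        if (id != NULL && strcmp(id, engine_foo_id) != 0)
            return 0;
        if (!ENGINE_set_id(e, engine_foo_id)
                || !ENGINE_set_name(e, engine_foo_name))
            return 0;
        return 1;
    }

    IMPLEMENT_DYNAMIC_BIND_FN(bind_foo)
    IMPLEMENT_DYNAMIC_CHECK_FN()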
PROBLEMS
========

View File

@ -1 +1,4 @@
OpenSSL FIPS support
====================
This release does not support a FIPS 140-2 validated module.

View File

@ -105,13 +105,13 @@ detailed instructions about building and installing OpenSSL. For some
platforms, the installation instructions are amended by a platform specific
document.
* [NOTES.ANDROID](NOTES.ANDROID)
* [NOTES.DJGPP](NOTES.DJGPP)
* [NOTES.PERL](NOTES.PERL)
* [NOTES.UNIX](NOTES.UNIX)
* [NOTES.VALGRIND](NOTES.VALGRIND)
* [NOTES.VMS](NOTES.VMS)
* [NOTES.WIN](NOTES.WIN)
* [NOTES-Android.md](NOTES-Android.md)
* [NOTES-DJGPP.md](NOTES-DJGPP.md)
* [NOTES-Unix.md](NOTES-Unix.md)
* [NOTES-VMS.md](NOTES-VMS.md)
* [NOTES-Windows.txt](NOTES-Windows.txt)
* [NOTES-Perl.md](NOTES-Perl.md)
* [NOTES-Valgrind.md](NOTES-Valgrind.md)
Specific notes on upgrading to OpenSSL 3.0 from previous versions, as well as
known issues are available on the OpenSSL

View File

@ -1,7 +0,0 @@
MAJOR=3
MINOR=0
PATCH=0
PRE_RELEASE_TAG=alpha4-dev
BUILD_METADATA=
RELEASE_DATE=""
SHLIB_VERSION=3

View File

@ -1,4 +1,7 @@
The sparse_array.c file contains an implementation of a sparse array that
Sparse Arrays
=============
The `sparse_array.c` file contains an implementation of a sparse array that
attempts to be both space and time efficient.
The sparse array is represented using a tree structure. Each node in the
@ -13,13 +16,14 @@ There are a number of parameters used to define the block size:
SA_BLOCK_MAX_LEVELS Indicates the maximum possible height of the tree
These constants are inter-related:
SA_BLOCK_MAX = 2 ^ OPENSSL_SA_BLOCK_BITS
SA_BLOCK_MASK = SA_BLOCK_MAX - 1
SA_BLOCK_MAX_LEVELS = number of bits in size_t divided by
OPENSSL_SA_BLOCK_BITS rounded up to the next multiple
of OPENSSL_SA_BLOCK_BITS
OPENSSL_SA_BLOCK_BITS can be defined at compile time and this overrides the
`OPENSSL_SA_BLOCK_BITS` can be defined at compile time and this overrides the
built-in setting.
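
For concreteness, here is a rough sketch (not the actual source) of how these
constants relate, assuming `OPENSSL_SA_BLOCK_BITS` is 4 and a 64-bit `size_t`:

    /*
     * Illustrative only: the block-size constants and how they are derived,
     * for OPENSSL_SA_BLOCK_BITS of 4 on a platform with a 64-bit size_t.
     */
    #include <stddef.h>

    #define OPENSSL_SA_BLOCK_BITS 4
    #define SA_BLOCK_MAX          (1 << OPENSSL_SA_BLOCK_BITS)  /* 16, i.e. 2^4 */
    #define SA_BLOCK_MASK         (SA_BLOCK_MAX - 1)            /* 0x0f         */
    #define SA_BLOCK_MAX_LEVELS   (((int)sizeof(size_t) * 8 + OPENSSL_SA_BLOCK_BITS - 1) \
                                   / OPENSSL_SA_BLOCK_BITS)     /* 16 levels    */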
As a space and performance optimisation, the height of the tree is usually
@ -67,7 +71,6 @@ brevity):
+----+
Index 0
Inserting at element 2N+1 creates a new root node and pushes down the old root
node. It then creates a second second-level node to hold the pointer to the
user's new data:
@ -102,7 +105,6 @@ user's new data:
+----+ +----+
Index 0 Index 2N+1
The nodes themselves are allocated in a sparse manner. Only nodes which exist
along a path from the root of the tree to an added leaf will be allocated.
The complexity is hidden and nodes are allocated on an as needed basis.
@ -144,12 +146,11 @@ result in:
+----+
Index 2N+1
Accesses to elements in the sparse array take O(log n) time where n is the
largest element. The base of the logarithm is SA_BLOCK_MAX, so for moderately
largest element. The base of the logarithm is `SA_BLOCK_MAX`, so for moderately
small indices (e.g. NIDs), single level (constant time) access is achievable.
Space usage is O(minimum(m, n log(n))) where m is the number of elements in the
array.
Note: sparse arrays only include pointers to types. Thus, SPARSE_ARRAY_OF(char)
can be used to store a string.
Note: sparse arrays only include pointers to types.
Thus, `SPARSE_ARRAY_OF(char)` can be used to store a string.

View File

@ -1,12 +1,12 @@
Notes: 2001-09-24
-----------------
Notes on engines of 2001-09-24
==============================
This "description" (if one chooses to call it that) needed some major updating
so here goes. This update addresses a change being made at the same time to
OpenSSL, and it pretty much completely restructures the underlying mechanics of
the "ENGINE" code. So it serves a double purpose of being a "ENGINE internals
for masochists" document *and* a rather extensive commit log message. (I'd get
lynched for sticking all this in CHANGES or the commit mails :-).
lynched for sticking all this in CHANGES.md or the commit mails :-).
ENGINE_TABLE underlies this restructuring, as described in the internal header
"eng_local.h", implemented in eng_table.c, and used in each of the "class" files;
@ -21,16 +21,16 @@ or can be loaded "en masse" into EVP storage so that they can be catalogued and
searched in various ways, ie. two ways of encrypting with the "des_cbc"
algorithm/mode pair are;
(i) directly;
const EVP_CIPHER *cipher = EVP_des_cbc();
EVP_EncryptInit(&ctx, cipher, key, iv);
[ ... use EVP_EncryptUpdate() and EVP_EncryptFinal() ...]
(i) directly;
const EVP_CIPHER *cipher = EVP_des_cbc();
EVP_EncryptInit(&ctx, cipher, key, iv);
[ ... use EVP_EncryptUpdate() and EVP_EncryptFinal() ...]
(ii) indirectly;
OpenSSL_add_all_ciphers();
cipher = EVP_get_cipherbyname("des_cbc");
EVP_EncryptInit(&ctx, cipher, key, iv);
[ ... etc ... ]
(ii) indirectly;
OpenSSL_add_all_ciphers();
cipher = EVP_get_cipherbyname("des_cbc");
EVP_EncryptInit(&ctx, cipher, key, iv);
[ ... etc ... ]
The latter is more generally used because it also allows ciphers/digests to be
looked up based on other identifiers which can be useful for automatic cipher
@ -177,7 +177,7 @@ is deliberately a distinct step. Moreover, registration and unregistration has
nothing to do with whether an ENGINE is *functional* or not (ie. you can even
register an ENGINE and its implementations without it being operational, you may
not even have the drivers to make it operate). What actually happens with
respect to cleanup is managed inside eng_lib.c with the "engine_cleanup_***"
respect to cleanup is managed inside eng_lib.c with the `engine_cleanup_***`
functions. These functions are internal-only and each part of ENGINE code that
could require cleanup will, upon performing its first allocation, register a
callback with the "engine_cleanup" code. The other part of this that makes it
@ -208,4 +208,3 @@ hooking of ENGINE is now automatic (and passive, it can internally use a NULL
ENGINE pointer to simply ignore ENGINE from then on).
Hell, that should be enough for now ... comments welcome.

View File

@ -1,17 +1,17 @@
Adding new libraries
--------------------
====================
When adding a new sub-library to OpenSSL, assign it a library number
ERR_LIB_XXX, define a macro XXXerr() (both in err.h), add its
name to ERR_str_libraries[] (in crypto/err/err.c), and add
ERR_load_XXX_strings() to the ERR_load_crypto_strings() function
(in crypto/err/err_all.c). Finally, add an entry:
`ERR_LIB_XXX`, define a macro `XXXerr()` (both in `err.h`), add its
name to `ERR_str_libraries[]` (in `crypto/err/err.c`), and add
`ERR_load_XXX_strings()` to the `ERR_load_crypto_strings()` function
(in `crypto/err/err_all.c`). Finally, add an entry:
L XXX xxx.h xxx_err.c
to crypto/err/openssl.ec, and add xxx_err.c to the Makefile.
Running make errors will then generate a file xxx_err.c, and
add all error codes used in the library to xxx.h.
to `crypto/err/openssl.ec`, and add `xxx_err.c` to the `Makefile`.
Running `make errors` will then generate a file `xxx_err.c`, and
add all error codes used in the library to `xxx.h`.
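
As a purely hypothetical sketch (the `XXX_F_*` and `XXX_R_*` codes below are
made-up names of the kind that `make errors` would generate into `xxx.h` and
`xxx_err.c`), library code would then report an error roughly like this:

    #include <openssl/err.h>
    #include <openssl/xxx.h>     /* hypothetical new sub-library header */

    int XXX_do_something(int arg)
    {
        if (arg <= 0) {
            /* records ERR_LIB_XXX together with function and reason codes */
            XXXerr(XXX_F_XXX_DO_SOMETHING, XXX_R_INVALID_ARGUMENT);
            return 0;
        }
        return 1;
    }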
Additionally the library include file must have a certain form.
Typically it will initially look like this:
@ -33,12 +33,12 @@ Typically it will initially look like this:
/* BEGIN ERROR CODES */
The BEGIN ERROR CODES sequence is used by the error code
The `BEGIN ERROR CODES` sequence is used by the error code
generation script as the point to place new error codes, any text
after this point will be overwritten when make errors is run.
The closing #endif etc will be automatically added by the script.
The closing `#endif` etc will be automatically added by the script.
The generated C error code file xxx_err.c will load the header
files stdio.h, openssl/err.h and openssl/xxx.h so the
The generated C error code file `xxx_err.c` will load the header
files `stdio.h`, `openssl/err.h` and `openssl/xxx.h` so the
header file must load any additional header files containing any
definitions it uses.

View File

@ -1,44 +1,43 @@
objects.txt syntax
------------------
==================
To cover all the naming hacks that were previously in objects.h needed some
kind of hacks in objects.txt.
Covering all the naming hacks that were previously in `objects.h` required some
kind of hacks in `objects.txt`.
The basic syntax for adding an object is as follows:
1 2 3 4 : shortName : Long Name
1 2 3 4 : shortName : Long Name
If Long Name contains only word characters and hyphen-minus
(0x2D) or full stop (0x2E) then Long Name is used as basis
for the base name in C. Otherwise, the shortName is used.
If Long Name contains only word characters and hyphen-minus
(0x2D) or full stop (0x2E), then Long Name is used as the basis
for the base name in C. Otherwise, the shortName is used.
The base name (let's call it 'base') will then be used to
create the C macros SN_base, LN_base, NID_base and OBJ_base.
The base name (let's call it 'base') will then be used to
create the C macros SN_base, LN_base, NID_base and OBJ_base.
Note that if the base name contains spaces, dashes or periods,
those will be converted to underscore.
Note that if the base name contains spaces, dashes or periods,
those will be converted to underscore.
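
As a hypothetical illustration (the OID, the names and the NID value below are
invented, and the exact output format is up to the generation scripts), an
entry such as `1 2 3 4 : fooAlg : foo-algorithm` would yield C macros roughly
of this shape:

    /* Hypothetical output for "1 2 3 4 : fooAlg : foo-algorithm" */
    #define SN_foo_algorithm   "fooAlg"
    #define LN_foo_algorithm   "foo-algorithm"
    #define NID_foo_algorithm  1234          /* assigned by the scripts */
    #define OBJ_foo_algorithm  1L,2L,3L,4L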
Then there are some extra commands:
!Alias foo 1 2 3 4
!Alias foo 1 2 3 4
This just makes a name foo for an OID. The C macro
OBJ_foo will be created as a result.
This just makes a name foo for an OID. The C macro
OBJ_foo will be created as a result.
!Cname foo
!Cname foo
This makes sure that the name foo will be used as base name
in C.
This makes sure that the name foo will be used as base name
in C.
!module foo
1 2 3 4 : shortName : Long Name
!global
!module foo
1 2 3 4 : shortName : Long Name
!global
The !module command was meant to define a kind of modularity.
What it does is to make sure the module name is prepended
to the base name. !global turns this off. This construction
is not recursive.
The !module command was meant to define a kind of modularity.
What it does is to make sure the module name is prepended
to the base name. !global turns this off. This construction
is not recursive.
Lines starting with # are treated as comments, as well as any line starting
Lines starting with `#` are treated as comments, as well as any line starting
with ! and not matching the commands above.

View File

@ -1,124 +1,130 @@
Perl scripts for assembler sources
==================================
The perl scripts in this directory are my 'hack' to generate
multiple different assembler formats via the one original script.
The way to use this library is to start with adding the path to this directory
and then include it.
push(@INC,"perlasm","../../perlasm");
require "x86asm.pl";
push(@INC,"perlasm","../../perlasm");
require "x86asm.pl";
The first thing we do is setup the file and type of assembler
&asm_init($ARGV[0]);
&asm_init($ARGV[0]);
The first argument is the 'type'. Currently
'cpp', 'sol', 'a.out', 'elf' or 'win32'.
Argument 2 is the file name.
`cpp`, `sol`, `a.out`, `elf` or `win32`.
The second argument is the file name.
The reciprocal function is
&asm_finish() which should be called at the end.
`&asm_finish()` which should be called at the end.
There are 2 main 'packages'. x86ms.pl, which is the Microsoft assembler,
and x86unix.pl which is the unix (gas) version.
There are two main 'packages'. `x86ms.pl`, which is the Microsoft assembler,
and `x86unix.pl` which is the unix (gas) version.
Functions of interest are:
&external_label("des_SPtrans"); declare and external variable
&LB(reg); Low byte for a register
&HB(reg); High byte for a register
&BP(off,base,index,scale) Byte pointer addressing
&DWP(off,base,index,scale) Word pointer addressing
&stack_push(num) Basically a 'sub esp, num*4' with extra
&stack_pop(num) inverse of stack_push
&function_begin(name,extra) Start a function with pushing of
edi, esi, ebx and ebp. extra is extra win32
external info that may be required.
&function_begin_B(name,extra) Same as normal function_begin but no pushing.
&function_end(name) Call at end of function.
&function_end_A(name) Standard pop and ret, for use inside functions
&function_end_B(name) Call at end but with pop or ret.
&swtmp(num) Address on stack temp word.
&wparam(num) Parameter number num, that was push
in C convention. This all works over pushes
and pops.
&comment("hello there") Put in a comment.
&label("loop") Refer to a label, normally a jmp target.
&set_label("loop") Set a label at this point.
&data_word(word) Put in a word of data.
&external_label("des_SPtrans"); declare and external variable
&LB(reg); Low byte for a register
&HB(reg); High byte for a register
&BP(off,base,index,scale) Byte pointer addressing
&DWP(off,base,index,scale) Word pointer addressing
&stack_push(num) Basically a 'sub esp, num*4' with extra
&stack_pop(num) inverse of stack_push
&function_begin(name,extra) Start a function with pushing of
edi, esi, ebx and ebp. extra is extra win32
external info that may be required.
&function_begin_B(name,extra) Same as normal function_begin but no
pushing.
&function_end(name) Call at end of function.
&function_end_A(name) Standard pop and ret, for use inside
functions.
&function_end_B(name) Call at end but with pop or ret.
&swtmp(num) Address on stack temp word.
&wparam(num) Parameter number num, that was push in
C convention. This all works over pushes
and pops.
&comment("hello there") Put in a comment.
&label("loop") Refer to a label, normally a jmp target.
&set_label("loop") Set a label at this point.
&data_word(word) Put in a word of data.
So how does this all hold together? Given
int calc(int len, int *data)
{
int i,j=0;
int calc(int len, int *data)
{
int i,j=0;
for (i=0; i<len; i++)
{
j+=other(data[i]);
}
}
for (i=0; i<len; i++)
{
j+=other(data[i]);
}
}
So a very simple version of this function could be coded as
push(@INC,"perlasm","../../perlasm");
require "x86asm.pl";
&asm_init($ARGV[0]);
push(@INC,"perlasm","../../perlasm");
require "x86asm.pl";
&external_label("other");
&asm_init($ARGV[0]);
$tmp1= "eax";
$j= "edi";
$data= "esi";
$i= "ebp";
&external_label("other");
&comment("a simple function");
&function_begin("calc");
&mov( $data, &wparam(1)); # data
&xor( $j, $j);
&xor( $i, $i);
$tmp1= "eax";
$j= "edi";
$data= "esi";
$i= "ebp";
&set_label("loop");
&cmp( $i, &wparam(0));
&jge( &label("end"));
&comment("a simple function");
&function_begin("calc");
&mov( $data, &wparam(1)); # data
&xor( $j, $j);
&xor( $i, $i);
&mov( $tmp1, &DWP(0,$data,$i,4));
&push( $tmp1);
&call( "other");
&add( $j, "eax");
&pop( $tmp1);
&inc( $i);
&jmp( &label("loop"));
&set_label("loop");
&cmp( $i, &wparam(0));
&jge( &label("end"));
&set_label("end");
&mov( "eax", $j);
&mov( $tmp1, &DWP(0,$data,$i,4));
&push( $tmp1);
&call( "other");
&add( $j, "eax");
&pop( $tmp1);
&inc( $i);
&jmp( &label("loop"));
&function_end("calc");
&set_label("end");
&mov( "eax", $j);
&asm_finish();
&function_end("calc");
&asm_finish();
The above example is very very unoptimised but gives an idea of how
things work.
There is also a cbc mode function generator in cbc.pl
&cbc( $name,
$encrypt_function_name,
$decrypt_function_name,
$true_if_byte_swap_needed,
$parameter_number_for_iv,
$parameter_number_for_encrypt_flag,
$first_parameter_to_pass,
$second_parameter_to_pass,
$third_parameter_to_pass);
&cbc($name,
$encrypt_function_name,
$decrypt_function_name,
$true_if_byte_swap_needed,
$parameter_number_for_iv,
$parameter_number_for_encrypt_flag,
$first_parameter_to_pass,
$second_parameter_to_pass,
$third_parameter_to_pass);
So for example, given
void BF_encrypt(BF_LONG *data,BF_KEY *key);
void BF_decrypt(BF_LONG *data,BF_KEY *key);
void BF_cbc_encrypt(unsigned char *in, unsigned char *out, long length,
BF_KEY *ks, unsigned char *iv, int enc);
&cbc("BF_cbc_encrypt","BF_encrypt","BF_encrypt",1,4,5,3,-1,-1);
void BF_encrypt(BF_LONG *data,BF_KEY *key);
void BF_decrypt(BF_LONG *data,BF_KEY *key);
void BF_cbc_encrypt(unsigned char *in, unsigned char *out, long length,
BF_KEY *ks, unsigned char *iv, int enc);
&cbc("des_ncbc_encrypt","des_encrypt","des_encrypt",0,4,5,3,5,-1);
&cbc("des_ede3_cbc_encrypt","des_encrypt3","des_decrypt3",0,6,7,3,4,5);
&cbc("BF_cbc_encrypt","BF_encrypt","BF_encrypt",1,4,5,3,-1,-1);
&cbc("des_ncbc_encrypt","des_encrypt","des_encrypt",0,4,5,3,5,-1);
&cbc("des_ede3_cbc_encrypt","des_encrypt3","des_decrypt3",0,6,7,3,4,5);

View File

@ -1,4 +1,8 @@
Properties are associated with algorithms and are used to select between different implementations dynamically.
Selecting algorithm implementations by properties
=================================================
Properties are associated with algorithms and are used to select between
different implementations dynamically.
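
As a user-visible sketch of the end result (this assumes the OpenSSL 3.0
`EVP_MD_fetch()` API and is not part of this directory's internals), a property
query string such as `"provider=default"` selects among the implementations of
an algorithm:

    #include <openssl/evp.h>

    int sha256_fetch_demo(void)
    {
        /* The third argument is a property query resolved by this code. */
        EVP_MD *md = EVP_MD_fetch(NULL, "SHA2-256", "provider=default");

        if (md == NULL)
            return 0;
        /* ... use md with EVP_DigestInit_ex() and friends ... */
        EVP_MD_free(md);
        return 1;
    }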
This implementation is based on a number of assumptions:
@ -23,7 +27,6 @@ This implementation is based on a number of assumptions:
* Property queries can never add new property definitions.
Some consequences of these assumptions are:
* That definition is uncommon and queries are very common, we can treat
@ -52,14 +55,15 @@ Some consequences of these assumptions are:
properties are changed as doing so removes the need to index on both the
global and requested property strings.
The implementation:
* property_lock.c contains some wrapper functions to handle the global
* [property_lock.c](property_lock.c)
contains some wrapper functions to handle the global
lock more easily. The global lock is held for short periods of time with
per algorithm locking being used for longer intervals.
* property_string.c contains the string cache which converts property
* [property_string.c](property_string.c)
contains the string cache which converts property
names and values to small integer indices. Names and values are stored in
separate hash tables. The two Boolean values, the strings "yes" and "no",
are populated as the first two members of the value table. All property
@ -67,13 +71,15 @@ The implementation:
provided to convert from an index back to the original string (this can be
done by maintaining parallel stacks of strings if required).
* property_parse.c contains the property definition and query parsers.
* [property_parse.c](property_parse.c)
contains the property definition and query parsers.
These convert ASCII strings into lists of properties. The resulting
lists are sorted by the name index. Some additional utility functions
for dealing with property lists are also included: comparison of a query
against a definition and merging two queries into a single larger query.
* property.c contains the main APIs for defining and using properties.
* [property.c](property.c)
contains the main APIs for defining and using properties.
Algorithms are discovered from their NID and a query string.
The results are cached.
@ -82,6 +88,7 @@ The implementation:
without bounds and must garbage collect under-used entries. The garbage
collection does not have to be exact.
* defn_cache.c contains a cache that maps property definition strings to
* [defn_cache.c](defn_cache.c)
contains a cache that maps property definition strings to
parsed properties. It is used by property.c to improve performance when
the same definition appears multiple times.

View File

@ -1,7 +1,7 @@
NOTE: Don't expect any of these programs to work with current
OpenSSL releases, or even with later SSLeay releases.
Original README.md:
Original README:
=============================================================================
Some demo programs sent to me by various people

View File

@ -1,27 +1,30 @@
OpenSSL Documentation
=====================
README.md This file
fingerprints.txt
[fingerprints.txt](fingerprints.txt)
PGP fingerprints of authorised release signers
standards.txt
Moved to the web, https://www.openssl.org/docs/standards.html
standards.txt
Moved to the web, <https://www.openssl.org/docs/standards.html>
HOWTO/
[HOWTO/](HOWTO/)
A few how-to documents; not necessarily up-to-date
man1/
[man1/](man1/)
The openssl command-line tools; start with openssl.pod
man3/
[man3/](man3/)
The SSL library and the crypto library
man5/
[man5/](man5/)
File formats
man7/
[man7/](man7/)
Overviews; start with crypto.pod and ssl.pod, for example
Algorithm specific EVP_PKEY documentation.
Formatted versions of the manpages (apps,ssl,crypto) can be found at
https://www.openssl.org/docs/manpages.html
<https://www.openssl.org/docs/manpages.html>

View File

@ -18,10 +18,10 @@ of libssl.
The source files map to components as follows:
dtls1_bitmap.c -> DTLS1_BITMAP component
ssl3_buffer.c -> SSL3_BUFFER component
ssl3_record.c -> SSL3_RECORD component
rec_layer_s3.c, rec_layer_d1.c -> RECORD_LAYER component
dtls1_bitmap.c -> DTLS1_BITMAP component
ssl3_buffer.c -> SSL3_BUFFER component
ssl3_record.c -> SSL3_RECORD component
rec_layer_s3.c, rec_layer_d1.c -> RECORD_LAYER component
The RECORD_LAYER component is a facade pattern, i.e. it provides a simplified
interface to the record layer for the rest of libssl. The other 3 components are
@ -38,33 +38,32 @@ RECORD_LAYER_* macros.
Conceptually it looks like this:
libssl
|
---------------------------|-----record.h--------------------------------------
|
_______V______________
| |
| RECORD_LAYER |
| |
| rec_layer_s3.c |
| ^ |
| _________|__________ |
|| ||
|| DTLS1_RECORD_LAYER ||
|| ||
|| rec_layer_d1.c ||
||____________________||
|______________________|
record_local.h ^ ^ ^
_________________| | |_________________
| | |
_____V_________ ______V________ _______V________
| | | | | |
| SSL3_BUFFER | | SSL3_RECORD | | DTLS1_BITMAP |
| |--->| | | |
| ssl3_buffer.c | | ssl3_record.c | | dtls1_bitmap.c |
|_______________| |_______________| |________________|
libssl
|
-------------------------|-----record.h------------------------------------
|
_______V______________
| |
| RECORD_LAYER |
| |
| rec_layer_s3.c |
| ^ |
| _________|__________ |
|| ||
|| DTLS1_RECORD_LAYER ||
|| ||
|| rec_layer_d1.c ||
||____________________||
|______________________|
record_local.h ^ ^ ^
_________________| | |_________________
| | |
_____V_________ ______V________ _______V________
| | | | | |
| SSL3_BUFFER | | SSL3_RECORD | | DTLS1_BITMAP |
| |--->| | | |
| ssl3_buffer.c | | ssl3_record.c | | dtls1_bitmap.c |
|_______________| |_______________| |________________|
The two RECORD_LAYER source files build on each other, i.e.
the main one is rec_layer_s3.c which provides the core SSL/TLS layer. The second
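
To make the facade idea above a little more concrete, the following is a hedged, self-contained C sketch, not OpenSSL's actual RECORD_LAYER API: the struct and function names are invented for illustration. The only point is that the buffer/record/bitmap details stay private to one component while everything else goes through a small wrapper.

    #include <string.h>

    /*
     * Hypothetical facade sketch (illustrative names, not record.h):
     * the sub-components are hidden inside the struct and only reached
     * through rl_*() wrappers, mirroring the RECORD_LAYER idea.
     */
    struct sub_buffer { unsigned char data[256]; size_t used; }; /* cf. SSL3_BUFFER */
    struct sub_record { size_t length; };                        /* cf. SSL3_RECORD */

    typedef struct {
        struct sub_buffer rbuf;   /* private to the record layer */
        struct sub_record rrec;
    } RL_SKETCH;

    /* Facade entry point: callers never touch rbuf or rrec directly. */
    static size_t rl_sketch_buffered_bytes(const RL_SKETCH *rl)
    {
        return rl->rbuf.used;
    }

    int main(void)
    {
        RL_SKETCH rl;

        memset(&rl, 0, sizeof(rl));
        return rl_sketch_buffered_bytes(&rl) == 0 ? 0 : 1;
    }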


@ -6,23 +6,24 @@ state machine code to aid future maintenance.
The state machine code replaces an older state machine present in OpenSSL
versions 1.0.2 and below. The new state machine has the following objectives:
- Remove duplication of state code between client and server
- Remove duplication of state code between TLS and DTLS
- Simplify transitions and bring the logic together in a single location
so that it is easier to validate
- Remove duplication of code between each of the message handling functions
- Receive a message first and then work out whether that is a valid
transition - not the other way around (the other way causes lots of issues
where we are expecting one type of message next but actually get something
else)
- Separate message flow state from handshake state (in order to better
understand each)
- message flow state = when to flush buffers; handling restarts in the
event of NBIO events; handling the common flow of steps for reading a
message and the common flow of steps for writing a message etc
- handshake state = what handshake message are we working on now
- Control complexity: only the state machine can change state: keep all
the state changes local to the state machine component
- Remove duplication of state code between client and server
- Remove duplication of state code between TLS and DTLS
- Simplify transitions and bring the logic together in a single location
so that it is easier to validate
- Remove duplication of code between each of the message handling functions
- Receive a message first and then work out whether that is a valid
transition - not the other way around (the other way causes lots of issues
where we are expecting one type of message next but actually get something
else)
- Separate message flow state from handshake state (in order to better
understand each)
* message flow state = when to flush buffers; handling restarts in the
event of NBIO events; handling the common flow of steps for reading a
message and the common flow of steps for writing a message etc
* handshake state = what handshake message are we working on now
- Control complexity: only the state machine can change state: keep all
the state changes local to the state machine component
The message flow state machine is divided into a reading sub-state machine and a
writing sub-state machine. See the source comments in statem.c for a more
@ -30,34 +31,33 @@ detailed description of the various states and transitions possible.
Conceptually the state machine component is designed as follows:
libssl
|
---------------------------|-----statem.h--------------------------------------
|
_______V____________________
| |
| statem.c |
| |
| Core state machine code |
|____________________________|
statem_local.h ^ ^
_________| |_______
| |
_____________|____________ _____________|____________
| | | |
| statem_clnt.c | | statem_srvr.c |
| | | |
| TLS/DTLS client specific | | TLS/DTLS server specific |
| state machine code | | state machine code |
|__________________________| |__________________________|
| |_______________|__ |
| ________________| | |
| | | |
____________V_______V________ ________V______V_______________
| | | |
| statem_both.c | | statem_dtls.c |
| | | |
| Non core functions common | | Non core functions common to |
| to both servers and clients | | both DTLS servers and clients |
|_____________________________| |_______________________________|
libssl
|
-------------------------|-----statem.h------------------------------------
|
_______V____________________
| |
| statem.c |
| |
| Core state machine code |
|____________________________|
statem_local.h ^ ^
_________| |_______
| |
_____________|____________ _____________|____________
| | | |
| statem_clnt.c | | statem_srvr.c |
| | | |
| TLS/DTLS client specific | | TLS/DTLS server specific |
| state machine code | | state machine code |
|__________________________| |__________________________|
| |_______________|__ |
| ________________| | |
| | | |
____________V_______V________ ________V______V_______________
| | | |
| statem_both.c | | statem_dtls.c |
| | | |
| Non core functions common | | Non core functions common to |
| to both servers and clients | | both DTLS servers and clients |
|_____________________________| |_______________________________|
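
As a hedged illustration of the objectives above, in particular keeping message-flow state separate from handshake state and letting only the state machine change state, here is a small self-contained C sketch; the enum values and function names are invented and are not the real statem.c internals.

    #include <stdio.h>

    /* Illustrative only: two independent kinds of state. */
    typedef enum { READ_HEADER, READ_BODY, READ_DONE } flow_state;       /* message flow */
    typedef enum { WANT_HELLO, WANT_FINISHED, HS_DONE } handshake_state; /* handshake    */

    struct sketch_statem {
        flow_state rstate;        /* how far through reading one message */
        handshake_state hstate;   /* which handshake message comes next  */
    };

    /* Only this function mutates the state ("control complexity"). */
    static void sketch_advance(struct sketch_statem *st)
    {
        if (st->rstate != READ_DONE) {
            st->rstate++;                    /* message-flow transition  */
        } else if (st->hstate != HS_DONE) {
            st->hstate++;                    /* handshake transition ... */
            st->rstate = READ_HEADER;        /* ... restarts the flow    */
        }
    }

    int main(void)
    {
        struct sketch_statem st = { READ_HEADER, WANT_HELLO };

        sketch_advance(&st);
        printf("flow=%d handshake=%d\n", st.rstate, st.hstate);
        return 0;
    }

Resetting the flow state whenever the handshake state advances mirrors the reading sub-state machine restarting for each new handshake message, while the handshake state records overall progress.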


@ -1,12 +1,10 @@
Running external test suites with OpenSSL
=========================================
It is possible to integrate external test suites into OpenSSL's "make test".
It is possible to integrate external test suites into OpenSSL's `make test`.
This capability is considered a developer option and does not work on all
platforms.
The BoringSSL test suite
========================
@ -15,31 +13,31 @@ source code into an appropriate directory. This can be done in two ways:
1) Separately from the OpenSSL checkout using:
$ git clone https://boringssl.googlesource.com/boringssl boringssl
$ git clone https://boringssl.googlesource.com/boringssl boringssl
The BoringSSL tests are only confirmed to work at a specific commit in the
BoringSSL repository. Later commits may or may not pass the test suite:
$ cd boringssl
$ git checkout 490469f850e
$ cd boringssl
$ git checkout 490469f850e
2) Using the already configured submodule settings in OpenSSL:
$ git submodule update --init
$ git submodule update --init
Configure the OpenSSL source code to enable the external tests:
$ cd ../openssl
$ ./config enable-ssl3 enable-ssl3-method enable-weak-ssl-ciphers \
enable-external-tests
$ cd ../openssl
$ ./config enable-ssl3 enable-ssl3-method enable-weak-ssl-ciphers \
enable-external-tests
Note that using other config options than those given above may cause the tests
to fail.
Run the OpenSSL tests by providing the path to the BoringSSL test runner in the
BORING_RUNNER_DIR environment variable:
`BORING_RUNNER_DIR` environment variable:
$ BORING_RUNNER_DIR=/path/to/boringssl/ssl/test/runner make test
$ BORING_RUNNER_DIR=/path/to/boringssl/ssl/test/runner make test
Note that the test suite may change directory while running so the path provided
should be absolute and not relative to the current working directory.
@ -47,9 +45,8 @@ should be absolute and not relative to the current working directory.
To see more detailed output you can run just the BoringSSL tests with the
verbose option:
$ VERBOSE=1 BORING_RUNNER_DIR=/path/to/boringssl/ssl/test/runner make \
TESTS="test_external_boringssl" test
$ VERBOSE=1 BORING_RUNNER_DIR=/path/to/boringssl/ssl/test/runner make \
TESTS="test_external_boringssl" test
Test failures and suppressions
------------------------------
@ -71,26 +68,25 @@ within the OpenSSL source code.
The community is encouraged to contribute patches which reduce the number of
suppressions that are currently present.
Python PYCA/Cryptography test suite
===================================
This python test suite runs cryptographic tests with a local OpenSSL build as
the implementation.
First checkout the PYCA/Cryptography module into ./pyca-cryptography using:
First checkout the `PYCA/Cryptography` module into `./pyca-cryptography` using:
$ git submodule update --init
$ git submodule update --init
Then configure/build OpenSSL compatible with the python module:
$ ./config shared enable-external-tests
$ make
$ ./config shared enable-external-tests
$ make
The tests will run in a python virtual environment which requires virtualenv
to be installed.
$ make test VERBOSE=1 TESTS=test_external_pyca
$ make test VERBOSE=1 TESTS=test_external_pyca
Test failures and suppressions
------------------------------
@ -98,7 +94,6 @@ Test failures and suppressions
Some tests target older (<=1.0.2) versions so will not run. Other tests target
other crypto implementations so are not relevant. Currently no tests fail.
krb5 test suite
===============
@ -107,24 +102,24 @@ tests against the local OpenSSL build.
You will need a git checkout of krb5 at the top level:
$ git clone https://github.com/krb5/krb5
$ git clone https://github.com/krb5/krb5
krb5's master has to pass this same CI, but a known-good version is
krb5-1.15.1-final if you want to be sure.
$ cd krb5
$ git checkout krb5-1.15.1-final
$ cd ..
$ cd krb5
$ git checkout krb5-1.15.1-final
$ cd ..
OpenSSL must be built with external tests enabled:
$ ./config enable-external-tests
$ make
$ ./config enable-external-tests
$ make
krb5's tests will then be run as part of the rest of the suite, or can be
explicitly run (with more debugging):
$ VERBOSE=1 make TESTS=test_external_krb5 test
$ VERBOSE=1 make TESTS=test_external_krb5 test
Test-failures suppressions
--------------------------
@ -133,7 +128,6 @@ krb5 will automatically adapt its test suite to account for the configuration
of your system. Certain tests may require more installed packages to run. No
tests are expected to fail.
GOST engine test suite
===============
@ -142,19 +136,19 @@ tests against the local OpenSSL build.
You will need a git checkout of gost-engine at the top level:
$ git submodule update --init
$ git submodule update --init
Then configure/build OpenSSL enabling external tests:
$ ./config shared enable-external-tests
$ make
$ ./config shared enable-external-tests
$ make
GOST engine requires CMake for the build process.
GOST engine tests will then be run as part of the rest of the suite, or can be
explicitly run (with more debugging):
$ make test VERBOSE=1 TESTS=test_external_gost_engine
$ make test VERBOSE=1 TESTS=test_external_gost_engine
Updating test suites
====================
@ -163,24 +157,23 @@ To update the commit for any of the above test suites:
- Make sure the submodules are cloned locally:
$ git submodule update --init --recursive
$ git submodule update --init --recursive
- Enter subdirectory and pull from the repository (use a specific branch/tag if required):
$ cd <submodule-dir>
$ git pull origin master
$ cd `<submodule-dir>`
$ git pull origin master
- Go to root directory, there should be a new git status:
$ cd ../
$ git status
...
# modified: <submodule-dir> (new commits)
...
$ cd ../
$ git status
...
# modified: `<submodule-dir>` (new commits)
...
- Add/commit/push the update
git add <submodule-dir>
git commit -m "Updated <submodule> to latest commit"
git push
$ git add `<submodule-dir>`
$ git commit -m `"Updated <submodule> to latest commit"`
$ git push