Status:
- And it works!
- We have an extensive test based on decoding a rich EK certificate (see
the decode/copy/encode round-trip sketch at the end of this Status list).
This test exercises all of:
- decoding
- encoding with and without decoded open types
- copying of decoded values with decoded open types
- freeing of decoded values with decoded open types
Valgrind finds no memory errors.
- Added a manual page for the compiler.
- rfc2459.asn1 now has the primary PKIX types that we care about
defined as in RFC 5912, with IOS constraints and parameterization:
- `Extension` (embeds open type in an `OCTET STRING`)
- `OtherName` (embeds open type in an `ANY`-like type)
- `SingleAttribute` (embeds open type in an `ANY`-like type)
- `AttributeSet` (embeds open type in a `SET OF ANY`-like type)
All of these use OIDs as the open type's type ID field, but integer
type ID fields are also supported (and needed for Kerberos).
That will cover every typed hole pattern in all our ASN.1 modules.
With this we'll be able to automatically and recursively decode
through all subject DN attributes even when the subject DN is a
directoryName SAN, and subjectDirectoryAttributes, and all
extensions, and all SANs, and all authorization-data elements, and
PA-data, and...
We're not really using `SingleAttribute` and `AttributeSet` yet
because various changes are needed in `lib/hx509` for that.
- `asn1_compile` builds and recognizes the subset of X.681/682/683 that
we need for, and now use in, rfc2459.asn1. It builds the necessary
AST, generates the correct C types, and generates templating for
object sets and open types!
- See READMEs for details.
- Codegen backend not tested; I won't make it implement automatic open
type handling, but it should at least not crash by substituting
`heim_any` for open types not embedded in `OCTET STRING`.
- We're _really_ starting to have problems with the ITU-T ASN.1
grammar and our version of it...
Type names have to start with upper-case, value names with
lower-case, but it's not enough to disambiguate.
The fact that we've allowed value and type names to violate their
respective start-with case rules is causing us trouble now that we're
adding grammar from X.681/682/683, and we're going to have to undo
that.
In preparation for that I'm capitalizing the `heim_any` and
`heim_any_set` types, and doing some additional cleanup, which
requires changes to other parts of Heimdal (all in this same commit
for now).
Problems we have because of this:
- We cannot IMPORT values into modules because the only clue to
whether an imported symbol refers to a value or a type is the
symbol's name, so we assume IMPORTed symbols are for types.
This means we can't import OIDs, for example, which is super
annoying.
One thing we might be able to do here is mark imported symbols as
being of an undetermined-but-not-undefined type, then coerce the
symbol's type the first time it's used in a context where its type
is inferred as type, value, object, object set, or class. (Though
since we don't generate C symbols for objects or classes, we won't
be able to import them, especially since we need to know them at
compile time and cannot defer their handling to link- or
run-time.)
- The `NULL` type name, and the `NULL` value name now cause two
reduce/reduce conflicts via the `FieldSetting` production.
- Various shift/reduce conflicts involving `NULL` values in
non-top-level contexts (in constraints, for example).
- Currently I have a bug: to disambiguate the grammar I have a
CLASS_IDENTIFIER token that must be all caps, while TYPE_IDENTIFIER
must start with a capital but not be all caps; this breaks Kerberos
since all of its type names are all caps -- oof!
To fix this I made it so class names have to be all caps and
start with an underscore (ick).
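
For reference, here is a minimal sketch of the decode/copy/encode/free
round-trip that the EK certificate test exercises, using the generated
interface for `Certificate` from rfc2459.asn1 (header names, buffer
handling, and error handling are abbreviated; this is an illustration,
not the test itself):

    #include <stdlib.h>
    #include <rfc2459_asn1.h>   /* generated by asn1_compile; also need the
                                 * asn1 library headers for ASN1_MALLOC_ENCODE() */

    static int
    roundtrip(const unsigned char *der, size_t der_len)
    {
        Certificate cert, cert2;
        unsigned char *buf = NULL;
        size_t buf_size, len, sz;
        int ret;

        /* Decode; with the template backend this also decodes any open
         * types whose type ID (OID) is in the relevant object set. */
        ret = decode_Certificate(der, der_len, &cert, &sz);
        if (ret)
            return ret;

        /* Deep copy, including decoded open type values. */
        ret = copy_Certificate(&cert, &cert2);
        if (ret == 0) {
            /* Re-encode (with or without the decoded open types). */
            ASN1_MALLOC_ENCODE(Certificate, buf, buf_size, &cert2, &len, ret);
            if (ret == 0)
                free(buf);
            free_Certificate(&cert2);   /* frees decoded open types too */
        }
        free_Certificate(&cert);
        return ret;
    }
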
TBD:
- Check all the XXX comments and address them
- Apply this treatment to Kerberos! Automatic handling of authz-data
sounds useful :)
- Apply this treatment to PKCS#10 (CSRs) and other ASN.1 modules too.
- Replace various bits of code in `lib/hx509/` with uses of this
feature.
- Add JER.
- Enhance `hxtool` and `asn1_print`.
Getting there!
This is a large commit that adds several features:
- Revamps and moves virtual host-based service principal functionality
from kdc/ to lib/hdb/ so that it may be automatically visible to
lib/kadm5/, as well as kadmin(1)/kadmind(8) and ktutil(1).
The changes are backwards-incompatible.
- Completes support for documenting a service principal's supported
enctypes in its HDB entry independently of its long-term keys. This
will reduce HDB bloat by not requiring that service principals have
more long-term keys than they need just to document the service's
supported enctypes.
- Adds support for storing krb5.conf content in principals' HDB
entries. This may eventually be used for causing Heimdal KDC
services to reconfigure primary/secondary roles automatically by
discovering the configured primary in an HDB entry for the realm.
For now this will be used to help reduce the amount of configuration
needed by clients of an upcoming HTTP binding of the kadmin service.
Refactor plugin framework to use a single list of loaded plugins; add a new
plugin API where DSOs export a load function that can declare dependencies and
export multiple plugins; refactor kadm5 hook API to use krb5 plugin framework.
More information in krb5-plugin(7).
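
Roughly, the new DSO model looks like the following sketch. This is
illustrative only -- the struct and function names here are made up and
are not the actual Heimdal declarations (see krb5-plugin(7) for the
real interface): a DSO exports one load function that names the modules
it depends on and returns a vector of plugin ftables.

    /* Hypothetical shapes, for illustration only. */
    typedef struct example_plugin_ftable {
        int         version;
        const char *name;
        int       (*init)(void *context, void **plugin_ctx);
        void      (*fini)(void *plugin_ctx);
        /* ... hook-specific entry points ... */
    } example_plugin_ftable;

    static const example_plugin_ftable plugin_a = { 1, "a", NULL, NULL };
    static const example_plugin_ftable plugin_b = { 1, "b", NULL, NULL };

    /* Single exported symbol: declares dependencies and exports
     * multiple plugins in one call. */
    int
    example_plugin_load(void *context,
                        const char *const **deps,
                        size_t *n_plugins,
                        const example_plugin_ftable *const **plugins)
    {
        static const char *const example_deps[] = { "krb5", NULL };
        static const example_plugin_ftable *const example_plugins[] = {
            &plugin_a, &plugin_b
        };

        *deps = example_deps;
        *n_plugins = sizeof(example_plugins) / sizeof(example_plugins[0]);
        *plugins = example_plugins;
        return 0;
    }
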
The change to the signature of hdb_generate_key_set_password() in
Heimdal 7.1 broke API/ABI compatibility with previous releases. We
fix this by renaming it hdb_generate_key_set_password_with_ks_tuple()
and creating a new hdb_generate_key_set_password() which calls our
new function with zeroes for the added arguments.
Issue #246: https://github.com/heimdal/heimdal/issues/246
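
Roughly, the compatibility shim looks like this (prototypes abbreviated
and approximate; see lib/hdb for the real declarations):

    /* Pre-7.1 entry point, restored for API/ABI compatibility: forward
     * to the renamed function with zeroes for the arguments that 7.1
     * added. */
    krb5_error_code
    hdb_generate_key_set_password(krb5_context context,
                                  krb5_principal principal,
                                  const char *password,
                                  Key **keys, size_t *num_keys)
    {
        return hdb_generate_key_set_password_with_ks_tuple(context,
                                                           principal,
                                                           password,
                                                           NULL, 0,
                                                           keys, num_keys);
    }
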
When new keys are added (typically via kadm5_setkey_principal_3),
truncate the key history to remove old keys, that is, keys older than
the newest key that took effect longer ago than the principal's
maximum ticket lifetime. This feature is controlled via the "[kadmin]"
section's "prune-key-history" boolean parameter, which defaults to false.
Currently this happens only when kadm5_setkey_principal_3()
is called directly on the server; the client API simulates
kadm5_setkey_principal_3() via a get, update, modify sequence that does
not prune the key history. The plan is to add a new kadm5 protocol RPC
and convert clients to call that instead.
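
To turn pruning on, set the boolean in the [kadmin] section of
krb5.conf:

    [kadmin]
            prune-key-history = true
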
In setkey_principal_3 seal keys after entry key update
Also, for now, don't check the return value of kadm5_log_modify() in
the new kadm5_s_setkey_principal_3(). This has to be addressed more
globally.
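
A sketch of the intended ordering (identifiers here follow the
lib/kadm5 and lib/hdb conventions but are approximate, not the exact
code):

    /* Sketch only: the point is that the entry's key set is replaced
     * with the caller-supplied keys first, and only then are the keys
     * sealed (encrypted in the master key) before the entry is stored. */
    static kadm5_ret_t
    seal_after_update(kadm5_server_context *context, hdb_entry_ex *ent)
    {
        /* ... ent->entry.keys has already been updated ... */
        return hdb_seal_keys(context->context, context->db, &ent->entry);
    }
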
Censor stale keys in kadm5_s_get_principal
We enable kadm5_chpass_principal_3() in the server side of the
library. The client kadm5 library calls will still return the
error KADM5_KS_TUPLE_NO_SUPP.
Signed-off-by: Nicolas Williams <nico@cryptonector.com>
The code was generating a char ** of string representations of the
ks_tuple array but was not using it. We modify the code (sketched
below) to:
1. extend the array returned by ks_tuple2str() to include
enough space for the trailing NULL and ensure that there
is a NULL at the end,
2. not free the array before exiting ks_tuple2str() as we
intend to use it in the caller,
3. re-organise the pointers in hdb_generate_key_set() to
make it clearer how we are to free things that have
been allocated, and
4. free the char ** given us by ks_tuple2str() if it has
been allocated.
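
A sketch of the resulting allocation pattern in (1) and (2) (the names
and the string format are approximate, not the exact helper):

    #include <stdio.h>
    #include <stdlib.h>
    #include <krb5.h>
    #include <kadm5/admin.h>    /* krb5_key_salt_tuple */

    /* Approximate shape of ks_tuple2str(): allocate one extra slot so
     * the string array is NULL-terminated, and hand the array back to
     * the caller instead of freeing it here. */
    static krb5_error_code
    ks_tuple2str(krb5_context context, int n_ks_tuple,
                 krb5_key_salt_tuple *ks_tuple, char ***ks_tuple_strs)
    {
        char **strs;
        int i;

        *ks_tuple_strs = NULL;

        /* n_ks_tuple entries plus one for the trailing NULL. */
        strs = calloc(n_ks_tuple + 1, sizeof(*strs));
        if (strs == NULL)
            return krb5_enomem(context);

        for (i = 0; i < n_ks_tuple; i++) {
            /* The real code formats enctype:salttype names; simplified
             * here to the numeric values. */
            if (asprintf(&strs[i], "%d:%d", (int)ks_tuple[i].ks_enctype,
                         (int)ks_tuple[i].ks_salttype) == -1) {
                while (i-- > 0)
                    free(strs[i]);
                free(strs);
                return krb5_enomem(context);
            }
        }
        strs[n_ks_tuple] = NULL;     /* explicit trailing NULL */
        *ks_tuple_strs = strs;       /* caller frees the array, per (4) */
        return 0;
    }
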
Signed-off-by: Nicolas Williams <nico@cryptonector.com>
Also: add support for ignoring null enctype / zero-length keys,
which *can* be found in MIT DB entries created in pre-historic
times.
Also: make the mitdb HDB backend more elegant (e.g., use the ASN.1
compiler's generated sequence/array utility functions).
Also: add a utility function needed for kadm5 kvno change
improvements and make kadmin's mod --kvno work correctly and
naturally.
Signed-off-by: Love Hornquist Astrand <lha@h5l.org>
clause and freeing key_set. Found by IBM checker.
Make sure ret == 0 after a parse error; we catch the "no entries
parsed" case later.
git-svn-id: svn://svn.h5l.se/heimdal/trunk/heimdal@16929 ec53bebd-3082-4978-b11e-865c3cabbd6b