Add support for GSS-API pre-authentication to the KDC, using a simplified
variation of draft-perez-krb-wg-gss-preauth-02 that encodes GSS-API context
tokens directly in PADATA, and uses FX-COOKIE for state management.
More information on the protocol and implementation may be found in
lib/gssapi/preauth/README.md.
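As a rough illustration only (not the actual code; see the README for
the real protocol), the client side of such an exchange could look like
the sketch below.  The send_as_req_padata() helper is hypothetical, and
the PA-DATA type number and FX-COOKIE handling are defined by the draft
and the implementation, not by this sketch.

    #include <gssapi/gssapi.h>

    /* Hypothetical transport helper: wrap the token in a PA-DATA
     * element of the AS-REQ, send it, and return the KDC's token
     * (carried back together with the FX-COOKIE) in *in. */
    extern int send_as_req_padata(const gss_buffer_desc *out,
                                  gss_buffer_desc *in);

    static int
    gss_preauth_client_loop(gss_name_t kdc_name)
    {
        OM_uint32 maj, min;
        gss_ctx_id_t gctx = GSS_C_NO_CONTEXT;
        gss_buffer_desc in = GSS_C_EMPTY_BUFFER;
        gss_buffer_desc out = GSS_C_EMPTY_BUFFER;

        do {
            /* Each call may consume a token from the KDC and produce
             * the next token, which is encoded directly in PADATA. */
            maj = gss_init_sec_context(&min, GSS_C_NO_CREDENTIAL, &gctx,
                                       kdc_name, GSS_C_NO_OID,
                                       GSS_C_MUTUAL_FLAG, GSS_C_INDEFINITE,
                                       GSS_C_NO_CHANNEL_BINDINGS, &in,
                                       NULL, &out, NULL, NULL);
            if (GSS_ERROR(maj))
                return -1;
            if (out.length > 0 && send_as_req_padata(&out, &in) != 0)
                return -1;
            gss_release_buffer(&min, &out);
        } while (maj == GSS_S_CONTINUE_NEEDED);

        return 0;   /* context established; use the AS reply */
    }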
Since c6bf100b, password quality checks have been moved out of kadmind
and into libkadm5. This means that all password changes are subject to
quality checks when enforce_on_admin_set is true (the default). In rare
instances, realm initialization could fail because the randomly
generated passwords did not pass the password quality check. Fix this
by creating principals with no password or key, rather than with a
random password.
Random *keys* continue to be set immediately after the principal is
created, and before DISALLOW_ALL_TIX is unset, so there should be no
functionality or security implications from this change. It is safe to
call a server-side API such as kadm5_s_create_principal_with_key()
because local_flag is asserted to be true.
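A hedged sketch of the resulting order of operations, using the public
kadm5 calls (whether kadm5_create_principal() accepts a NULL password
here, and the exact masks used, are assumptions of this sketch; error
handling and freeing of the returned keys are omitted):

    #include <string.h>
    #include <kadm5/admin.h>

    static kadm5_ret_t
    create_with_random_keys(void *server_handle, krb5_principal princ)
    {
        kadm5_principal_ent_rec ent;
        krb5_keyblock *keys = NULL;
        int n_keys = 0;

        memset(&ent, 0, sizeof(ent));
        ent.principal = princ;
        ent.attributes = KRB5_KDB_DISALLOW_ALL_TIX;

        /* No password: the entry is created key-less, so no password
         * quality check can reject a randomly generated string. */
        kadm5_create_principal(server_handle, &ent,
                               KADM5_PRINCIPAL | KADM5_ATTRIBUTES, NULL);

        /* Random keys are set before DISALLOW_ALL_TIX is cleared. */
        kadm5_randkey_principal(server_handle, princ, &keys, &n_keys);

        ent.attributes &= ~KRB5_KDB_DISALLOW_ALL_TIX;
        return kadm5_modify_principal(server_handle, &ent,
                                      KADM5_ATTRIBUTES);
    }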
When the master's log has all entries from version 1 to now, and no
uber entry (legacy master), then new slaves will not pull version 1,
because their uber record has version 1.
The fix is to force the uber version to 0 always, and avoid adding a
truncate nop when doing a full prop. The uber record now records the
database version even in the absence of any other log entries so that
we know what to pull going forward.
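For illustration only (the real pull logic lives in the iprop daemons;
first_version_to_pull() below is just a made-up name for it): a slave
asks for every entry with a version strictly greater than the newest
version in its local log, so pinning the uber record at version 0 is
what lets a fresh slave receive entry 1 from a legacy master.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t
    first_version_to_pull(uint32_t slave_latest_version)
    {
        return slave_latest_version + 1;
    }

    int
    main(void)
    {
        /* Uber record at version 1: the slave starts at 2 and never
         * pulls entry 1.  At version 0 it starts at 1 as intended. */
        printf("uber at 1: pull from %u\n", first_version_to_pull(1));
        printf("uber at 0: pull from %u\n", first_version_to_pull(0));
        return 0;
    }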
We used to update the iprop log and HDB in different orders depending on
the kadm5 operation, which then led to various race conditions.
The iprop log now functions as a two-phase commit (with roll forward)
log for HDB changes. The log is auto-truncated, keeping the latest
entries that fit in a configurable maximum number of bytes (defaults to
50MB). See the log-max-size parameter description in krb5.conf(5).
The iprop log format and the protocol remain backwards-compatible with
earlier versions of Heimdal. This is NOT a flag-day; there is NO need
to update all the slaves at once with the master, though it is advisable
in general. Rolling upgrades and downgrades should work.
The sequence of updates is now (with HDB and log open and locked):
a) check that the HDB operation will succeed if attempted,
b) append to iprop log and fsync() it,
c) write to HDB (which should fsync()),
d) mark last log record committed (no fsync in this case).
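A sketch of this sequence in code; the helper names below are
illustrative stand-ins for the actual internals, and the HDB and iprop
log are assumed to be open and locked by the caller:

    #include <errno.h>

    /* Illustrative stand-ins; each returns 0 on success. */
    extern int hdb_check_will_succeed(void *hdb, const void *change);
    extern int log_append_and_fsync(void *log, const void *change);
    extern int hdb_store(void *hdb, const void *change);   /* fsyncs */
    extern int log_mark_committed(void *log);              /* no fsync */

    static int
    kadm5_write_sketch(void *hdb, void *log, const void *change)
    {
        /* a) verify the HDB operation can succeed before logging it */
        if (hdb_check_will_succeed(hdb, change) != 0)
            return EINVAL;

        /* b) append the change to the iprop log and fsync() it */
        if (log_append_and_fsync(log, change) != 0)
            return EIO;

        /* c) write the change to the HDB (the HDB layer fsyncs);
         *    a crash here is recovered by roll-forward later */
        if (hdb_store(hdb, change) != 0)
            return EIO;

        /* d) mark the last log record committed (no fsync here) */
        return log_mark_committed(log);
    }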
Every kadm5 write operation recovers transactions not yet confirmed as
committed, so there can be at most one unconfirmed commit on a master
KDC.
Reads via kadm5_get_principal() also attempt to lock the log, and if
successful, recover unconfirmed transactions; readers must have write
access and must win any race to lock the iprop log.
The ipropd-master daemon also attempts to recover unconfirmed
transactions when idle.
The log now starts with a nop record whose payload records the offset of
the logical end of the log: the end of the last confirmed committed
transaction. This is known as the "uber record". Its purpose is
two-fold: it acts as the confirmation of committed transactions, and it
provides an O(1) method of finding the end of the log (i.e., without
having to traverse the entire log from front to back).
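For illustration, the uber record can be pictured roughly as the struct
below; the field names and the exact payload contents are this sketch's,
not the authoritative on-disk layout defined by the iprop log format:

    #include <stdint.h>

    struct iprop_uber_sketch {
        /* standard record header shared with all log entries */
        uint32_t version;         /* always 0 for the uber record */
        uint32_t timestamp;
        uint32_t op;              /* nop */
        uint32_t payload_len;

        /* nop payload */
        uint64_t end_offset;      /* offset of the end of the last
                                     confirmed committed transaction,
                                     giving O(1) access to the log end */
        uint32_t last_version;    /* database version, kept even when
                                     the log holds no other entries */
        uint32_t last_timestamp;

        /* standard record trailer */
        uint32_t payload_len2;    /* copy of payload_len */
        uint32_t version2;        /* copy of version */
    };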
Two-phase commit makes every kadm5 write a single-operation atomic
transaction (some kadm5 operations, such as renames of principals and
changes to principals' aliases, use multiple low-level HDB write
operations, but still all in one transaction). One can still hold a
lock on the HDB across many operations (e.g., by using the lock command
in a kadmin -l session, or by calling kadm5_lock()) in order to push
multiple transactions in sequence, but such a sequence will not be
atomic if the process or host crashes in the middle.
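For example, a caller that wants several operations applied under one
lock (but without cross-transaction atomicity) might do something like
the sketch below; the modify calls are placeholders and error handling
is simplified:

    #include <kadm5/admin.h>

    static kadm5_ret_t
    push_several_changes(void *server_handle,
                         kadm5_principal_ent_t a,
                         kadm5_principal_ent_t b)
    {
        kadm5_ret_t ret;

        ret = kadm5_lock(server_handle);
        if (ret)
            return ret;

        /* Each write is still its own committed transaction; a crash
         * in the middle leaves the earlier ones applied. */
        ret = kadm5_modify_principal(server_handle, a, KADM5_ATTRIBUTES);
        if (ret == 0)
            ret = kadm5_modify_principal(server_handle, b,
                                         KADM5_ATTRIBUTES);

        kadm5_unlock(server_handle);
        return ret;
    }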
As before, HDB writes which do not go through the kadm5 API are excluded
from all of this, but there should be no such writes.
Lastly, the iprop-log(1) command is enhanced as follows:
- The dump, last-version, truncate, and replay sub-commands now have an
option to not lock the log. This is useful for inspecting a running
system's log file, especially on slave KDCs.
- The dump, last-version, truncate, and replay sub-commands now take an
optional iprop log file positional argument, so that they may be used
to inspect log files other than the running system's
configured/default log file.
Extensive code review and some re-writing for clarity by Viktor Dukhovni.
though it's disabled. From Andrew Bartlet <abartlet@samba.org>
git-svn-id: svn://svn.h5l.se/heimdal/trunk/heimdal@16040 ec53bebd-3082-4978-b11e-865c3cabbd6b