=== OPEN TODOS (HIGH PRIORITY) ===
[ ] add test case for pending transactions
[ ] add test cases for conflicts
[ ] at initial startup -> read the auditlog more intelligently


=== OPEN TODOS (LOWER PRIORITY) ===
[ ] adjust the code for the move operation (what does the corresponding LDIF look like? does the entryUUID change?)
        -> new operation "modrdn" !!!
        -> entryUUID == "(null)" !!!
        ### MOVE into different container ###
        # modrdn 1390485249.096233 (null) dc=ucs32,dc=qa uid=Administrator,cn=users,dc=ucs32,dc=qa IP=10.200.26.11:59946 conn=1163
        dn: uid=foobar11086,dc=ucs32,dc=qa
        changetype: modrdn
        newrdn: uid=foobar11086
        deleteoldrdn: 1
        newsuperior: cn=users,dc=ucs32,dc=qa
        # end modrdn 1390485249.096233
        ### RENAME object ###
        # modrdn 1390485624.147398 (null) dc=ucs32,dc=qa uid=Administrator,cn=users,dc=ucs32,dc=qa IP=10.200.26.11:59946 conn=1163
        dn: uid=foobar11086,cn=users,dc=ucs32,dc=qa
        changetype: modrdn
        newrdn: uid=foobar11086-2
        deleteoldrdn: 1
        # end modrdn 1390485624.147398
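        -> rough sketch (function and field names are hypothetical) of how the auditlog parser could map such a modrdn block to a move/rename; a missing newsuperior means a rename within the same container:

        def handle_modrdn(entry, dn_to_uuid):
            # entry: the parsed LDIF block, e.g. {'dn': ..., 'newrdn': ..., 'newsuperior': ...}
            old_dn = entry['dn']
            parent = entry.get('newsuperior') or old_dn.split(',', 1)[1]
            new_dn = '%s,%s' % (entry['newrdn'], parent)
            # the auditlog line reports entryUUID == "(null)", so fall back to the dn -> entryUUID map
            uuid = dn_to_uuid.pop(old_dn, None)
            dn_to_uuid[new_dn] = uuid
            return ('modrdn', old_dn, new_dn, uuid)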
[ ] moving a user from A to B, back to A, and to B again results in the same md5sum for both A -> B moves
[ ] access journal state via HTTP


=== DONE ===
[x] check how conflict resolutions are handled (-> order + transaction ID)
[x] currently a MODIFY transaction might occur before the original ADD transaction
        -> added an option _previousMD5sum, i.e. a transaction is only accepted once its predecessor has been received; otherwise it is held back
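        -> minimal sketch of that hold-back logic (apply_to_ldap and the field names are hypothetical):

        def try_apply(transaction, seen_md5sums, pending):
            predecessor = transaction.get('_previousMD5sum')
            if predecessor is None or predecessor in seen_md5sums:
                apply_to_ldap(transaction)          # hypothetical commit helper
                seen_md5sums.add(transaction['md5sum'])
                return True
            pending.append(transaction)             # held back until the predecessor arrives
            return False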
[x] delete -> entryUUID == null :-(
        -> handled via a workaround: a dict mapping dn -> entryUUID
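        -> sketch of that workaround (names are hypothetical): remember the entryUUID per DN on add/modrdn so it can still be resolved when the delete only reports "(null)":

        dn_to_uuid = {}

        def remember(dn, entry_uuid):
            dn_to_uuid[dn] = entry_uuid             # filled on add and modrdn

        def resolve_delete(dn):
            # the delete entry in the auditlog carries entryUUID == "(null)", so look it up here
            return dn_to_uuid.pop(dn, None)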
[x] add conflict handling as described in the paper (with an extra data structure and attribute-based timestamps)
        [x] do not communicate XML attributes that start with '_'
        [x] do not communicate transactions with attribute _conflictResolution=1
        [x] usage of Transaction -> either all the way through or only in the journal module
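        -> sketch (the transaction structure is assumed, not taken from the code) of the filtering from the two points above, applied before a transaction is sent to other hosts:

        def outgoing(transactions):
            for transaction in transactions:
                if transaction.get('_conflictResolution') == '1':
                    continue                        # local conflict resolution, not re-distributed
                yield dict((key, value) for key, value in transaction.items()
                           if not key.startswith('_'))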
[x] enhance log output (clearly see which output comes from which part of the code/which process)
[x] change control flow for transactions: auditlog -> Journal -> HostDB + LDAP + disk dump
[x] save/load journal to/from disk
[x] add redirects from / -> /servers and from /servers/<server> -> /servers/<server>/transactions
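        -> purely illustrative sketch (the web framework actually used is not documented here); in a Flask-style application the two redirects would look roughly like this:

        from flask import Flask, redirect

        app = Flask(__name__)

        @app.route('/')
        def index():
            return redirect('/servers')

        @app.route('/servers/<server>')
        def server_transactions(server):
            return redirect('/servers/%s/transactions' % server)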
[x] check modification of LDAP objects
[x] add test cases for attribute modification etc. (maybe extend remoteLdapsearch() to parse LDAP properties, as well)
[x] add test cases for objects other than OU (creation of users fails)
[x] retry to sync pending transactions to LDAP
        -> fixed
[x] make sure that no transaction ID is left out (e.g., 1 2 3 5) - if this is the case re-query missing transactions
        -> fixed with the point above
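        -> sketch of the gap check (names are hypothetical): derive the missing IDs from the range of IDs seen so far and re-query them before applying newer transactions:

        def missing_ids(known_ids):
            if not known_ids:
                return []
            return sorted(set(range(min(known_ids), max(known_ids) + 1)) - set(known_ids))

        # e.g. missing_ids([1, 2, 3, 5]) == [4]; these IDs are then re-queried from the peer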
[x] sometimes slapd seems to die -> ignore all internal properties that we do not need; there might be some race conditions occurring
        -> ignore the internal properties entryCSN + modifyTimestamp for synchronization
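        -> sketch of the filter: strip the operational attributes that slapd maintains itself before comparing or synchronizing entries:

        IGNORED_ATTRIBUTES = ('entryCSN', 'modifyTimestamp')

        def strip_internal(attributes):
            return dict((key, value) for key, value in attributes.items()
                        if key not in IGNORED_ATTRIBUTES)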
[x] catch LDAP error SERVER_DOWN instead of reconnecting after timeout
        -> done, works fine now
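        -> sketch with python-ldap (the reconnect callback is hypothetical): react to SERVER_DOWN immediately instead of waiting for a timeout:

        import ldap

        def modify_with_reconnect(connection, dn, modlist, reconnect):
            try:
                connection.modify_s(dn, modlist)
            except ldap.SERVER_DOWN:
                connection = reconnect()            # re-open the connection and retry once
                connection.modify_s(dn, modlist)
            return connection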
[x] sometimes the dumped XML file contains more transactions than a REST GET query
        -> seems to be fine now
[x] query all servers from each replication hosts (currently only 'localhost' is queried)
[x] binary data is not written into LDAP as binary data ('::' at origin and ':' at receiver)
        -> e.g., creating a new user: udm users/user create --set username=user$RANDOM --set lastname=foobar --set password=univention
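        -> sketch of the decoding step: a '::' separator in LDIF marks a base64-encoded value, which has to be decoded before it is written to the receiving LDAP server:

        import base64

        def parse_ldif_line(line):
            if ':: ' in line:
                attribute, value = line.split(':: ', 1)
                return attribute, base64.b64decode(value)   # binary value, e.g. a password hash
            attribute, value = line.split(': ', 1)
            return attribute, value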
[x] in case a transaction cannot be committed, add it to the queue of pending transactions
[x] transactions are not necessarily atomic: several quick changes may end up in a single transaction in the listener, which in turn may overwrite new values with old ones
        -> done, replaced the listener module with a modified auditlog parser
[x] ignore already known transactions (to avoid endless cycles)
        -> OK now
[#] adding the same user on two different hosts may result in the same md5sum
    -> invalid, the entryUUID is hashed as well
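    -> illustrative sketch only (the exact fields that go into the hash are not listed here): since the entryUUID differs per host, the resulting md5sums differ as well:

    import hashlib

    def transaction_md5(dn, entry_uuid, changes):
        data = '%s|%s|%s' % (dn, entry_uuid, changes)
        return hashlib.md5(data.encode('utf-8')).hexdigest()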


=== NOTES ===
* When two users with the same uid are created, the group memberships of both will be merged
        * on the LDAP level, we cannot resolve this conflict for now
        * resolution is possible by adding a unique-attribute overlay module or by storing the group memberships at the user object