ZODB 3.3.1c1 Release Notes

Release date: 01-Apr-2005
BTrees
Collector #1734: BTrees conflict resolution leads to index inconsistencies.
Silent data loss could occur due to BTree conflict resolution when one transaction T1 added a new key to a BTree containing at least three buckets, and a concurrent transaction T2 deleted all keys in the bucket to which the new key was added. Conflict resolution then created a bucket containing the newly added key, but the bucket remained isolated, disconnected from the BTree. In other words, the committed BTree didn't contain the new key added by T1. Conflict resolution doesn't have enough information to repair this, so ``ConflictError`` is now raised in such cases.

ZEO
Repaired subtle race conditions in establishing ZEO connections, both client- and server-side. These account for intermittent cases where ZEO failed to make a connection (or reconnection), accompanied by a log message showing an error caught in ``asyncore`` and having a traceback ending with:

``UnpicklingError: invalid load key, 'Z'.``

or:

``ZRPCError: bad handshake '(K\x00K\x00U\x0fgetAuthProtocol)t.'``

or:

``error: (9, 'Bad file descriptor')``

or an ``AttributeError``.

These were exacerbated when running the test suite, because of an unintended busy loop in the test scaffolding, which could starve the thread trying to make a connection. The ZEO reconnection tests may run much faster now, depending on platform, and should suffer far fewer (if any) intermittent "timed out waiting for storage to connect" failures.
ZEO protocol and compatibility
ZODB 3.3 introduced multiversion concurrency control (MVCC), which required changes to the ZEO protocol. The first 3.3 release should have increased the internal ZEO protocol version number (used by ZEO protocol negotiation when a client connects), but neglected to. This has been repaired.
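To illustrate why the version number matters, here is a minimal sketch of version-tagged protocol negotiation. The version strings, set contents, and function name below are hypothetical stand-ins, not ZEO's actual handshake code:

```python
# Minimal sketch of version-tagged protocol negotiation. The version
# strings and names here are hypothetical, not ZEO's actual protocol.

SERVER_PROTOCOL = b"Z303"          # hypothetical post-MVCC protocol tag
COMPATIBLE = {b"Z303"}             # versions this server can speak

def handshake(client_protocol):
    """Accept the connection only if the client speaks a known protocol.

    If the version tag is *not* bumped when the wire format changes (the
    bug described above), an incompatible client passes this check and
    then fails later with confusing unpickling errors, instead of being
    cleanly refused up front.
    """
    if client_protocol not in COMPATIBLE:
        raise ValueError("bad handshake %r" % client_protocol)
    return SERVER_PROTOCOL

handshake(b"Z303")       # compatible client: accepted
try:
    handshake(b"Z201")   # incompatible client: cleanly refused
except ValueError as e:
    print(e)
```

A clean rejection at connect time is what the repaired version bump buys: the failure happens once, with a clear message, rather than later inside ``asyncore``.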
Compatibility between pre-3.3 and post-3.3 ZEO clients and servers remains very limited. See the newly updated Compatibility section in README.txt for details.

FileStorage
The ``.store()`` and ``.restore()`` methods didn't update the storage's belief about the largest oid in use when passed an oid larger than the largest oid the storage already knew about. Because ``.restore()`` in particular is used by ``copyTransactionsFrom()``, and by the first stage of ZRS recovery, a large database could be created that believed the only oid in use was oid 0 (the special oid reserved for the root object). In rare cases, it could go on from there assigning duplicate oids to new objects, starting over from oid 1 again. This has been repaired. A new ``set_max_oid()`` method was added to the ``BaseStorage`` class so that derived storages can update the largest oid in use in a threadsafe way.

A FileStorage's index file tried to maintain the index's largest oid as a separate piece of data, incrementally updated over the storage's lifetime. This scheme was more complicated than necessary, so was also more brittle and slower than necessary. It indirectly participated in a rare but critical bug: when a FileStorage was created via ``copyTransactionsFrom()``, the "maximum oid" saved in the index file was always 0. Use that FileStorage, and it could then create "new" oids starting over at 0 again, despite that those oids were already in use by old objects in the database. Packing a FileStorage has no reason to try to update the maximum oid in the index file either, so this kind of damage could (and did) persist even across packing.
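The thread-safe high-water-mark tracking described above can be sketched as follows. This is an illustration of the idea, not ``BaseStorage``'s actual implementation; the class name is invented and oids are shown as integers rather than ZODB's 8-byte strings:

```python
import threading

# Sketch of thread-safe maximum-oid tracking, in the spirit of the
# set_max_oid() method described above. Illustrative only: not
# BaseStorage's real code, and oids are plain integers here.

class OidTracker:
    def __init__(self):
        self._lock = threading.Lock()
        self._max_oid = 0

    def set_max_oid(self, possible_new_max_oid):
        # Only ever move the high-water mark up, under the lock, so
        # concurrent store()/restore() calls can't lose an update.
        with self._lock:
            if possible_new_max_oid > self._max_oid:
                self._max_oid = possible_new_max_oid

    def new_oid(self):
        # Allocate the next unused oid. Without set_max_oid() being
        # called from restore(), this could hand out oids that are
        # already in use -- the duplicate-oid bug described above.
        with self._lock:
            self._max_oid += 1
            return self._max_oid

tracker = OidTracker()
tracker.set_max_oid(7)       # e.g. restore() saw oid 7
print(tracker.new_oid())     # 8, not a duplicate of 1..7
```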
The index file's maximum-oid data is ignored now, but is still written out so that ``.index`` files can be read by older versions of ZODB. Finding the true maximum oid is done now by exploiting that the main index is really a kind of BTree (long ago, this wasn't true), and finding the largest key in a BTree is inexpensive.

A FileStorage's index file could be updated on disk even if the storage was opened in read-only mode. That bug has been repaired.
An efficient ``maxKey()`` implementation was added to class ``fsIndex``.
Pickle (in-memory Connection) Cache
You probably never saw this exception:
``ValueError: Can not re-register object under a different oid``
It's been changed to say what it meant:
``ValueError: A different object already has the same oid``
This happens if an attempt is made to add distinct objects to the cache that have the same oid (object identifier). ZODB should never do this, but it's possible for application code to force such an attempt.
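The check behind the improved message can be sketched as follows. The class and method names are illustrative stand-ins, not ZODB's actual cache implementation:

```python
# Sketch of an oid-to-object cache that rejects registering a
# *different* object under an oid already present. Illustrative only;
# names are invented, not ZODB's real pickle cache.

class PickleCache:
    def __init__(self):
        self._data = {}

    def register(self, oid, obj):
        existing = self._data.get(oid)
        if existing is not None and existing is not obj:
            raise ValueError("A different object already has the same oid")
        self._data[oid] = obj

cache = PickleCache()
a, b = object(), object()
cache.register(1, a)
cache.register(1, a)       # re-registering the same object is fine
try:
    cache.register(1, b)   # distinct object, same oid: rejected
except ValueError as e:
    print(e)
```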
PersistentMapping and PersistentList
Backward compatibility code has been added so that the sanest of the ZODB 3.2 dotted paths for ``PersistentMapping`` and ``PersistentList`` resolve. These are still preferred:

    from persistent.list import PersistentList
    from persistent.mapping import PersistentMapping

but these work again too:

    from ZODB.PersistentList import PersistentList
    from ZODB.PersistentMapping import PersistentMapping
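One common way to make a legacy dotted path resolve is to install an alias module in ``sys.modules`` that re-exports the class from its new home. The sketch below shows that general technique under invented module names; it is not ZODB's actual compatibility code:

```python
import sys
import types

# Sketch of legacy-dotted-path aliasing via sys.modules. The module
# names here are stand-ins, and this is the general technique, not
# ZODB's actual backward-compatibility code.

# The "new" home of the class.
new_home = types.ModuleType("persistent_list_demo")
class PersistentList(list):
    pass
new_home.PersistentList = PersistentList

# Install a legacy alias module so old import statements (and old
# pickles referencing the old dotted path) still find the same class.
legacy = types.ModuleType("ZODB_PersistentList_demo")
legacy.PersistentList = new_home.PersistentList
sys.modules["ZODB_PersistentList_demo"] = legacy

from ZODB_PersistentList_demo import PersistentList as OldPath
print(OldPath is PersistentList)   # True: both paths name one class
```

Aliasing, rather than copying the class definition, matters for persistence: both import paths must yield the identical class object so that isinstance checks and unpickling behave consistently.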
BTrees
The BTrees interface file neglected to document the optional ``excludemin`` and ``excludemax`` arguments to the ``keys()``, ``values()`` and ``items()`` methods. Appropriate changes were merged in from the ZODB4 BTrees interface file.

Tools
- ``mkzeoinst.py``'s default port number changed from 9999 to 8100, to match the example in Zope's ``zope.conf``.
fsIndex
An efficient ``maxKey()`` method was implemented for the ``fsIndex`` class. This makes it possible to determine the largest oid in a ``FileStorage`` index efficiently, directly, and reliably, replacing a more delicate scheme that tried to keep track of this by saving an oid high water mark in the index file and incrementally updating it.
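The reason this lookup is cheap can be sketched with a toy BTree-like structure: descend the rightmost child at each level, then take the last key in the final bucket, so the cost is proportional to the tree's height rather than its size. The classes below are an invented illustration; ``fsIndex``'s real layout differs:

```python
# Toy sketch of why maxKey() on a BTree-like index is inexpensive.
# Illustrative only; fsIndex's actual data structure differs.

class Bucket:
    def __init__(self, keys):
        self.keys = sorted(keys)   # leaf: keys kept in sorted order

class Node:
    def __init__(self, children):
        self.children = children   # buckets or nodes, in key order

def max_key(tree):
    # Walk down the rightmost edge of the tree...
    while isinstance(tree, Node):
        tree = tree.children[-1]   # rightmost subtree holds the max
    # ...and the largest key sits at the end of the rightmost bucket.
    return tree.keys[-1]

tree = Node([Node([Bucket([1, 2]), Bucket([5, 8])]),
             Node([Bucket([10, 12]), Bucket([20, 42])])])
print(max_key(tree))   # 42
```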