Details
- Type: Bug
- Status: Closed
- Priority: Normal
- Resolution: Fixed
- None
- None
- None
- Sprint: Platform sprint 124
Description
Jackrabbit contains code in org.apache.jackrabbit.core.query.lucene.SearchIndex#updateNodes that ensures certain nodes are re-indexed as a result of a change in another node. The relevant code is:
// remove any aggregateRoot nodes that are new
// and therefore already up-to-date
aggregateRoots.keySet().removeAll(addedIds);
// based on removed ids get affected aggregate root nodes
retrieveAggregateRoot(removedIds, aggregateRoots);
Because of our specific Hippo Document indexing, however, 'retrieveAggregateRoot' can re-add an id to 'aggregateRoots' that was already removed as newly added (and therefore already up-to-date). Therefore we need to change the code above to:
// remove any aggregateRoot nodes that are new
// and therefore already up-to-date
aggregateRoots.keySet().removeAll(addedIds);
// based on removed ids get affected aggregate root nodes
retrieveAggregateRoot(removedIds, aggregateRoots);
// again remove any aggregateRoot nodes that are new
// and therefore already up-to-date
aggregateRoots.keySet().removeAll(addedIds);
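The effect of the missing second removeAll can be sketched with plain collections. This is a minimal, self-contained illustration, not Jackrabbit's actual implementation: the stand-in retrieveAggregateRoot and all ids here are hypothetical, it only mimics the scenario where resolving a removed id maps back to an aggregate root that was itself just added.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AggregateRootFix {

    // Hypothetical stand-in for SearchIndex#retrieveAggregateRoot:
    // resolving a removed child id can point back to a document root
    // that is itself among the newly added (already up-to-date) nodes.
    static void retrieveAggregateRoot(Set<String> removedIds,
                                      Map<String, String> aggregateRoots) {
        for (String id : removedIds) {
            aggregateRoots.put("doc-" + id, "root-for-" + id);
        }
    }

    public static void main(String[] args) {
        Map<String, String> aggregateRoots = new HashMap<>();
        aggregateRoots.put("doc-1", "stale-state");
        Set<String> addedIds = new HashSet<>(Set.of("doc-1"));
        Set<String> removedIds = new HashSet<>(Set.of("1"));

        // original code: remove new (already up-to-date) roots once
        aggregateRoots.keySet().removeAll(addedIds);
        retrieveAggregateRoot(removedIds, aggregateRoots);
        // without the fix, "doc-1" has been re-added here
        System.out.println(aggregateRoots.containsKey("doc-1")); // true

        // the fix: remove the added ids a second time,
        // after the removed ids have been resolved
        aggregateRoots.keySet().removeAll(addedIds);
        System.out.println(aggregateRoots.containsKey("doc-1")); // false
    }
}
```

Note that removeAll only reads from addedIds, so repeating it is safe and idempotent; the second call simply discards any roots that retrieveAggregateRoot reintroduced.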
Attachments
Issue Links
- causes: REPO-1439 "Create and use in repo jackrabbit-h10 patched version" (Closed)