Regarding the array-based DataObject concept, wouldn't this mean for name-based attribute lookups that you still need a map somewhere that translates names to indexes? That map would only be needed once per entity, however.
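To make that concrete, here is a minimal sketch of what I mean (all names are hypothetical, not actual Cayenne API): the name-to-index map lives in a per-entity layout object built once, and each object instance only carries a plain Object[].

```java
import java.util.HashMap;
import java.util.Map;

// Per-entity layout, built once and shared by all objects of that entity.
class EntityLayout {
    private final Map<String, Integer> indexByName = new HashMap<>();

    EntityLayout(String... attributeNames) {
        for (int i = 0; i < attributeNames.length; i++) {
            indexByName.put(attributeNames[i], i);
        }
    }

    int indexOf(String name) {
        Integer i = indexByName.get(name);
        if (i == null) {
            throw new IllegalArgumentException("No attribute: " + name);
        }
        return i;
    }

    int size() {
        return indexByName.size();
    }
}

// Each object instance is just an array of values plus a reference
// to the shared layout; name-based lookups go through the layout.
class ArrayDataObject {
    private final EntityLayout layout;
    private final Object[] values;

    ArrayDataObject(EntityLayout layout) {
        this.layout = layout;
        this.values = new Object[layout.size()];
    }

    Object readProperty(String name) {
        return values[layout.indexOf(name)];
    }

    void writeProperty(String name, Object value) {
        values[layout.indexOf(name)] = value;
    }
}
```

So the map cost is paid once per entity, not once per object, and the per-object footprint stays a flat array.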
Instead of the array-based approach, did you also consider ConcurrentHashMap and similar classes in java.util.concurrent? It would not have all the other advantages besides concurrency, but could perhaps serve as an easy intermediate step to get rid of the locking, and be implemented even in 4.0 already.
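As a sketch of that intermediate step (again hypothetical names, not Cayenne API): a map-backed object whose values live in a ConcurrentHashMap, so readers never block on a monitor. One wrinkle is that ConcurrentHashMap rejects null values, so null attributes need a sentinel.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Map-backed object storage using ConcurrentHashMap instead of a
// synchronized map, so concurrent reads need no locking.
class ConcurrentMapDataObject {
    // ConcurrentHashMap does not allow null values, so a real
    // implementation needs a sentinel for null attribute values.
    private static final Object NULL = new Object();

    private final Map<String, Object> values = new ConcurrentHashMap<>();

    Object readProperty(String name) {
        Object v = values.get(name);
        return v == NULL ? null : v;
    }

    void writeProperty(String name, Object value) {
        values.put(name, value == null ? NULL : value);
    }
}
```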
And on this discussion, I'd like to mention my use case again: big queries with lots of prefetches to suck in gigabytes of data for aggregate computations using DataObject business logic. During those fetches, other users expect to be able to continue their regular workload concurrently (which they mostly cannot using EOF: my main reason to switch). So however this concept turns out, I'd also like to be able to parallelize the fetches themselves. A useful first step would be to execute disjoint prefetches in separate threads.
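That first step could look roughly like this (a sketch, not a proposal for actual API; the Callable bodies stand in for real prefetch queries, each run against its own connection):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Run disjoint prefetch queries on separate threads and collect
// the results. Each Callable would wrap one prefetch query.
class ParallelPrefetch {
    static List<List<String>> fetchAll(List<Callable<List<String>>> prefetches)
            throws Exception {
        int threads = Math.max(1,
                Math.min(prefetches.size(),
                        Runtime.getRuntime().availableProcessors()));
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            // invokeAll blocks until every prefetch has completed
            List<Future<List<String>>> futures = pool.invokeAll(prefetches);
            List<List<String>> results = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                results.add(f.get()); // rethrows any fetch failure
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

The hard part, of course, is not the threading but merging the fetched rows back into one context without serializing on the Graph Manager, which is exactly the locking issue under discussion.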
A second step could be to parallelize even a single big table scan query by partitioning. Databases have long been able to organize large tables into partitions that can be scanned independently of each other. Back in the day, with Oracle and slower spinning disks, you would spread partitions across independent disks; today, with SSDs and zero seek time, partitioning can still increase throughput when CPU is the limiting factor (databases also tend to generate high CPU load during full table scans, but only on one core per scan). An idea could be to include a partitioning criterion in the model that matches the database's criterion for the table in question.
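For a simple numeric partitioning criterion, the splitting itself is trivial; here is a sketch that derives disjoint id ranges, each of which could become its own qualifier and be scanned by its own thread (the modeling step would be matching these ranges to the database's own partitions):

```java
import java.util.ArrayList;
import java.util.List;

// Split a primary-key range [minId, maxId] into disjoint subranges,
// one per partition, covering the whole range with no overlap.
class RangePartitioner {
    static List<long[]> split(long minId, long maxId, int partitions) {
        List<long[]> ranges = new ArrayList<>();
        long span = maxId - minId + 1;
        for (int p = 0; p < partitions; p++) {
            long lo = minId + span * p / partitions;
            long hi = minId + span * (p + 1) / partitions - 1;
            if (lo <= hi) {
                ranges.add(new long[] {lo, hi}); // becomes "id BETWEEN lo AND hi"
            }
        }
        return ranges;
    }
}
```

For hash- or list-partitioned tables the split would have to mirror the database's partitioning function instead of a plain range, which is why I think the criterion belongs in the model.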
In the meantime I could try partitioning the queries on the application level, which can also work, but I'm back at the Graph Manager locking problem when merging them into one context for processing.
Today's hardware, with databases on SSDs that can deliver 3 GByte/s or more and 16+ cores for processing, calls for parallelization at every level.