Yves Reynhout writes about what to look for in an O/R mapping tool.
Now, Persient is more of a generalized object persistence layer than a relational-database-only mapping tool, but make no mistake: O/R mapping is of course the biggest and (as yet) most important part of it.
So I will briefly reflect on Yves' comments and compare and contrast how Persient would fit in, were you to use his heuristics and stumble across my product in a few months' time, when it will be ready for prime time.
(Note: I quote Yves in the intended block quotes.)
Going to different datasources vs. going to one, or switching datasources over time
Supported---the API is completely independent of the underlying permanent storage mechanism. You will not be able to use vendor-specific peculiarities, but that is the price of decoupling and portability.
Read Patterns of Enterprise Application Architecture for more insight on these (and other) patterns and if they are of any importance to you. The book is very insightful (but not complete, mind you).
I have not read Martin Fowler's book, but yes, a "registry" of object instances (including their type information and parent-child relationships) is kept at the permanent storage level.
"Lazy loading" is often presented as some kind of magic that loads a related object automatically on property access. Because Persient is not based on code generation or code injection, but on an object-oriented CRUD API, this does not apply in the same way. Automatic lazy loading on property access is therefore not supported, but the important part, delayed loading, is supported---you just do it manually (on property access if you like, although you can optimize things much further this way).
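Purely for illustration, a hand-written delay-loading property might look like the sketch below. The Store.Select call mirrors the query example later in this post, but the Order type, the query string, and the field names are all made up:

```csharp
using System.Collections;

class Customer
{
    public int Id;
    private IList _orders;  // deliberately not loaded together with the customer

    public IList Orders
    {
        get
        {
            // load on first access, by hand -- no proxy magic involved
            if (_orders == null)
                _orders = Store.Select (typeof (Order), "Customer.Id = " + Id);
            return _orders;
        }
    }
}
```

Because you write the loading code yourself, you can also batch or reorder these loads in ways a property-access proxy could not.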
This means no (potentially expensive) object caching mechanism is necessary (other than the raw object identity tracking), which means a smaller memory footprint for your application.
How do you want your objects to get created? Using a base class, reflection, serialization, obligatory interface, pre-compilation code injection, post-compilation code-injection via emission.
Objects are materialized through reflection; no base class inheritance is required. It is "pluggable" in that you can mark any constructor as the "materializer". If no materializer is specified, a parameter-less constructor (which doesn't have to be public) is assumed. Persient is deliberately not based on code injection or generation.
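The underlying BCL mechanism can be sketched in plain .NET terms. This is not Persient's actual code, just the standard reflection facility such materialization builds on: Activator.CreateInstance can invoke a non-public parameter-less constructor when asked to.

```csharp
using System;

class Customer
{
    // non-public parameter-less constructor -- enough for materialization
    private Customer () {}
    public string Name;
}

class Demo
{
    static void Main ()
    {
        // the second argument (nonPublic: true) lets reflection
        // invoke the private constructor
        Customer c = (Customer) Activator.CreateInstance (typeof (Customer), true);
        c.Name = "Test";
        Console.WriteLine (c.Name);  // prints "Test"
    }
}
```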
Determine how you want to work with tiers and layers (drawing helps here) and compare how the tool/lib does it.
Persient is built on top of ADO.NET and can be seen as a persistence replacement, so it fits into existing architectures in just the same way. Put your data access code wherever you would normally put it---it is just much easier, with less code to write, and more intuitive and maintainable (because now it is purely object-oriented).
To COM+ or to roll "their/your own". Mainly a choice of technology.
Again, because Persient is a higher-level replacement for ADO.NET, transactions work in a similar way: call BeginTransaction(), do your object CRUD operations, and call Commit() or Rollback(). No declarative stuff, and no COM+ integration is required.
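As a hedged sketch only: BeginTransaction(), Commit(), and Rollback() come from the description above, but the Insert/Update method names and the variables are assumptions, not Persient's documented API.

```csharp
Store.BeginTransaction ();
try
{
    Store.Insert (newCustomer);    // hypothetical CRUD call
    Store.Update (existingOrder);  // hypothetical CRUD call
    Store.Commit ();
}
catch
{
    Store.Rollback ();
    throw;
}
```

The shape is deliberately the same try/commit/rollback pattern you would write against a raw ADO.NET IDbTransaction.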
Write SQL or some OQL like incarnation?
Persient uses OQuery, an object-oriented query language that started out as an OPath clone but has taken on a life of its own. I will describe it in the near future. See the feature matrix for the core language elements and functions.
Note that CRUD operations are performed through the Persient API, so they are not part of the query language, which, unlike SQL, is really only about querying, not about data manipulation (DML) or data definition (DDL). So you will not even have to write the ugly, SQLish (and, without the other CRUD commands, fairly pointless) SELECT command. Querying objects is as easy as:
Store.Select (typeof (Customer), "Name.Trim().Like ('p*ttern') or Age > 18");
The compilation to SQL is "neato" (and cached)... =)
Well ... lots of rather smart optimizations have been made, and I would dare to say that only as few statements as necessary are ever sent to the data source. In particular, not only can you delay loads (that is standard), you can also "delay updates": if you only need to update a single property (or two) of an object (or two), you can specify exactly that, granularly and expressively (and object-oriented), and the resulting DML statement will be as compact as hand-coded SQL would be.
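A hypothetical call shape for such a granular update might look as follows; the method signature and parameter are invented, and only the idea of naming the affected property comes from the paragraph above:

```csharp
// would issue roughly: UPDATE Customer SET Name = ... WHERE <identity>
Store.Update (customer, "Name");  // hypothetical: touch only the Name column
```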
As much as ADO.NET would demand, as far as CRUD is concerned ... :) Regarding the mappings, you can currently use attributes, which is nicely declarative but binds your business layer to Persient. If you don't like that, you can do it all imperatively, which means you can keep everything in one place, and that place can be outside your business layer. A lot of people seem to be fond of manually messing with XML configuration files, so I may decide to include XML support (built on top of the existing imperative facility), maybe even write a plug-in for ObjectMapper (if it supports plug-ins). Otherwise there are no demands: no base class or interface requirements. There are optional interfaces for notifications, though.
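Just to picture the declarative flavor, an attribute-based mapping could look like the sketch below; the attribute names are invented for illustration, not Persient's actual ones:

```csharp
// Invented attribute names, for illustration only.
[Persistent]
class Customer
{
    [PersistentField] private string name;
    [PersistentField] private int age;
}
```

The imperative alternative would register the same class-to-storage mapping in code, outside the business layer, leaving Customer itself attribute-free.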