The programming model is a collections-based model that will also be referred to as the Managed Object Framework (MOFW).
Figure 1: Goal and value of the Bojangles programming model
The overriding concern of each role is independence. Developers need to be free to build applications in their favorite languages. They need complete control over the development resources so they can build applications incrementally and iteratively over time with one team, as well as sequentially or in parallel with teams across departments, across the country, or around the world. Finally, developers need to be able to focus independently on the business logic and platform specifics of applications in such a way that the two can quickly be configured together to meet the specific needs of individual customers.
Customers, on the other hand, often have only limited programming skills available to actually install and configure the applications they need to help them carry out their job responsibilities in a unique operational environment. However, they cannot be pigeonholed: many customer shops can bring to bear sophisticated IS departments and even expert consultants. Similarly, the actual runtime distribution needs of customers vary widely as well, ranging from standalone systems to widely distributed peer-to-peer and client-server networks. Some customers need applications to run in disconnected and nomadic operational environments. Finally, customers often need to reuse legacy components, from existing databases and applications to proprietary customized middleware.
In the midst of all this seemingly pie-in-the-sky flexibility, the application must remain the "currency" that binds these two roles together. Too many failed projects simply provide a toolbox of components or technology, treating reuse as an end in itself. What we have learned is that the first step to success is to maintain the focus on developing real end-to-end applications, even if separating the concerns causes us to think about them as sums of their parts. In this case, reuse is merely a side effect of simplifying the programming model for each role. In other words, reuse comes naturally by letting the "right expert" do the work.
Object Providers (OPs), who capture the business logic of an application in business objects using a well-defined set of object service interfaces.
Service Providers (SPs), who implement the object service interfaces that enable business objects on a specific platform (or family of platforms).
Application Assemblers (AAs), who install and configure business object implementations together with object service implementations into customized application views.
Of course, the End Users are the most important role here (maintaining the application-centric view of development), since they provide the reason for the system in the first place. However, they only use the application views configured by an AA and are not involved directly in development (except for requirements gathering and various levels of system test).
Figure 2: Separate the concerns using object technology
The encapsulation afforded by objects through their separation of interface from implementation is what makes this separation possible. As Figure 2 shows, each development role results in a specific kind of object that may delegate part of its work to other objects in a cooperating framework that makes up the application's operational environment.
There may be some argument about whether there could be a further separation of concerns, giving rise to more roles than those shown in Figure 2. For example, is the role that installs and configures the software in the customer shop different from the one that administers it? This may very well be, but it is our contention that this "administrator" role would simply be a "subclass" of an AA and would not diminish the need to develop a well-defined set of standard object service interfaces to enable the separation of concerns. In other words, the information an administrator requires to "reconfigure" the system is very much like the information that would help an AA "assemble" the application components in the first place.
All of this does point out the need for customized tools for each role, which will be discussed later. The next section will delve into the idea of developing a standard set of object services interfaces.
Answering the last question first, the Bojangles programming model has settled on the Object Management Group's (OMG) Common Object Request Broker Architecture (CORBA) because it represents open, standard, distributed, object-oriented interfaces to a set of object services required by most applications.
In answering the question about the smallest set of services, we turn to the "Object Services Iceberg" analogy (see Figure 3).
Figure 3: The Object Services Iceberg
The Object Services Iceberg classifies the myriad of possible object services into three categories:
Services "above the waterline", the open-ended set of domain- and application-specific services built on top of the standard interfaces.
Services "at the waterline", the small, standard set of object service interfaces that this programming model defines.
Services "below the waterline", the platform-specific implementations (persistence, transactions, recovery, and the like) that sit beneath the standard interfaces.
It is also likely that over time, the number of above-the-waterline services will grow. For example, OMG has now chartered a number of Domain Task Forces (DTFs), each representing a given industry domain, like telecommunications, finance, and health (among others). What we envision is that these DTFs will add specific object services (interfaces) for things like "video channels", "bank accounts", and "medical histories" to enable applications in these domains to be quickly assembled. Therefore, the "above the waterline" services will be open-ended.
What we want to accomplish here is to define the "tip" of the iceberg that can be used to implement everything below.
Figure 4 shows three kinds of OOA/OOD models, the major components of each, and the skill level of practitioners that can be expected to successfully use the modeling technique.
Figure 4: OO Analysis and Design models
The important message of Figure 4 is that there is a systematic decomposition of each model into components of a lower level model, and that each level requires more skill than the one above because the models become more abstract.
For example, a customer in an XYZ catalog ordering system may sport concrete methods that allow a "shopping experience" to be started (i.e., an order to be entered). This analysis may lead to a static model that shows a customer component with an order that represents a contract with XYZ. It may be that a given order has a relatively complex lifecycle (dynamic model), from the initial selection of items in the "shopping cart", to a confirmation of the final order, to filling the order and shipping it to the customer. Additional analysis may drive out the need to handle returns and exchanges and associate these events with the original order for quality-control purposes. Finally, it is probable that as orders change states, the data in the "shopping cart" is transformed into line items in an order entry system, and subsequently into a bill of lading for a shipping system. These transforms can be precisely described in a message model, one of which may be associated with each transition in the dynamic model.
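As a concrete sketch, a hypothetical Java fragment along these lines could capture the dynamic model as explicit order states (the class, state, and method names here are ours for illustration, not part of MOFW):

```java
// Hypothetical sketch: the XYZ order lifecycle (dynamic model)
// captured as explicit states.
enum OrderState {
    SHOPPING,   // items being selected in the "shopping cart"
    CONFIRMED,  // final order confirmed by the customer
    FILLED,     // order filled
    SHIPPED,    // bill of lading produced, order in transit
    RETURNED,   // item(s) returned, associated with the original order
    EXCHANGED   // item(s) exchanged, associated with the original order
}

class Order {
    private OrderState state = OrderState.SHOPPING;

    OrderState getState() { return state; }

    // Each transition is legal only from certain states; the message
    // model describes the data transform each transition performs.
    void confirm() {
        if (state != OrderState.SHOPPING)
            throw new IllegalStateException("cannot confirm from " + state);
        // ...transform shopping-cart contents into order line items...
        state = OrderState.CONFIRMED;
    }
}
```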
The analysis becomes more abstract as we decompose the order into its dynamic and finally message models. For example, there may be a transition object for the making and confirmation of an order, another for the filling and shipping, and maybe one for the return or exchange of item(s) on the order. One benefit of making these transitions first-class objects is that they can be further decomposed into static, dynamic, or message models without changing the basic business logic captured in the high-level order and its states. For example, given a "shop" transition, we can add different workflows triggered by the total dollar amount of the order.
Another benefit is that transition objects can be handled generically by various services, assuming methods exist to cause the transitions to "flow" forward and backward, and maybe even forward again. This would allow various "beneath the waterline" services like transactions to enable recovery without having "hard coded" knowledge of the XYZ domain.
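A minimal sketch of what such a first-class transition might look like in Java (the interface and method names are our assumptions, chosen to match the "flow" vocabulary above):

```java
// Hypothetical first-class transition object. A generic "beneath the
// waterline" service such as a transaction manager can drive any
// transition forward, roll it backward on failure, and drive it
// forward again on retry, with no hard-coded knowledge of XYZ.
interface Transition {
    void flowForward();   // apply the transition
    void flowBackward();  // compensate for / undo the transition
}

class ConfirmTransition implements Transition {
    private final Order order; // the Order sketched earlier

    ConfirmTransition(Order order) { this.order = order; }

    public void flowForward()  { order.confirm(); }
    public void flowBackward() { /* compensate: reopen the cart */ }
}
```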
The level of abstraction rises further in the message model, where dataflow objects are used to abstractly describe the data flowing out of one transforming process into another. In most cases the "processes" are merely states in the lifecycle model, as described above. Thinking about them in this abstract manner allows them to be efficiently handled by even more generic services. For example, a UI service can be thought of as a "transforming process" that takes data from panel inputs and transforms it into the essential data needed to execute a "confirm" transition (e.g., a credit card number string parameter).
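A tiny sketch of such a transforming process, again with hypothetical names:

```java
import java.util.Map;

// A UI service viewed as a "transforming process": raw panel input
// in, the essential data for a "confirm" transition out.
final class ConfirmPanelTransform {
    // Extracts the credit card number string parameter the confirm
    // transition needs from the panel's field values.
    static String toConfirmInput(Map<String, String> panelFields) {
        return panelFields.get("creditCardNumber");
    }
}
```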
The most important message that should be derived from Figure 4 is that all three types of models must be enabled in the programming model.
For example, Rumbaugh's notation provides for at least four kinds of relationships (inheritance, by value, by reference and qualified reference). Java as a language provides for both implementation and interface inheritance, and some notations (like Booch's) have extensions to signify the difference. Yourdon and DeMarco's Dataflow Diagram (DFD) notation draws a distinction between source, sink and transforming processes.
The point is that the programming model should be based on a concrete set of classes representing abstract interfaces that directly map to these kinds of notations. They must be concrete so that programmers know exactly what to do in their favorite programming language once the analysis and design is complete. They must be abstract in that no one design or implementation is implied for the services represented by the interfaces.
Figure 5 shows the Managed Object Framework, a strawman set of concrete classes that we believe completely covers the OOA/OOD space described in the previous section.
Figure 5: MOFW - The Managed Object Framework
Each class will be described in detail, mapping it into the concepts discussed previously. An example from the XYZ scenario will be applied to each one as well.
To see concretely why we believe self-identifiable objects are a contradiction in terms, imagine how one gets activated. What object is doing the activating? The answer to this question could be that the constructor of the object (i.e., its class) takes an identity as one of the parameters (and may manufacture one by default). This limits the object in question to a single name space (or context) per address space, and forces a major implementation decision into the business logic. For example, the OP for Customer may choose a "long" as a customerID number. In certain contexts, a long may be too "long" (i.e., there may only be a few hundred thousand customers). In others, it may be that the name of the customer is the desired identity. The recommended approach is to assume the framework provides a mechanism for adding context-specific identity to objects after the fact, and to otherwise ignore identity in the business logic.
This said, we recognize that programmers have been known to ignore the advice they are given and provide the interface anyway. Another reason we include the Identifiable interface is for CORBA compliance, since the CORBA specifications call out a separate interface. Its primary "valid" use is in the "Managed" interface to be described below. That is, any object provider that insists on hard-coding identity into an object should provide a full "Managed" object instead.
In any event, the primary operations here allow for identity checking. A constantRandomId attribute can be used to quickly determine non-identity, and a heavier-weight isIdentical() operation checks whether two objects are in fact the same.
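A minimal sketch of these operations in Java (the actual MOFW signatures may differ):

```java
// Identity checking as described above: a cheap negative test plus a
// heavier-weight definitive check.
interface Identifiable {
    // Two objects with different constantRandomIds are certainly not
    // identical; equal ids prove nothing by themselves.
    long constantRandomId();

    // Heavier-weight check that two references denote the same object.
    boolean isIdentical(Identifiable other);
}
```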
The release operation will be called whenever the object is passivated, while the remove operation is called at the end of the object's logical lifecycle. For example, a Customer implementation may wrap a call to a foreign Credit Check system and open a connection to this system when a Customer is activated. When the object is passivated, release() is called so that the connection can be closed. If the Customer is deleted (say, for non-payment or inactivity), remove() is called, which may remove the foreign Credit Check link.
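A sketch of this pair of operations, with the Customer body and the CreditCheckConnection wrapper as purely illustrative stand-ins:

```java
// Lifecycle operations as described above.
interface Lifecycle {
    void release(); // called whenever the object is passivated
    void remove();  // called at the end of the logical lifecycle
}

class CreditCheckConnection { void close() { /* ... */ } } // stub

class LifecycleCustomer implements Lifecycle {
    private CreditCheckConnection creditCheck; // opened on activation

    public void release() {
        // Passivation: close the connection to the foreign system.
        if (creditCheck != null) { creditCheck.close(); creditCheck = null; }
    }

    public void remove() {
        // Logical deletion (non-payment, inactivity): also remove the
        // foreign Credit Check link itself.
        release();
        // ...tear down the Credit Check account link...
    }
}
```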
In general, the internalizeFromStream() operation is invoked when the object is activated, while the externalizeToStream() operation is invoked when the object is moved, copied, or passed by value for such services as caching, recovery, or persistence. The "internalize" operation assumes that the stream passed in contains the essential data of the object before the call, while the "externalize" operation assumes that the stream will contain the data after the call.
See the overview.html in the MOFW documentation for details on how to write these two routines and provide the associated "schema" (the Essential Data Schema, or EDS). In the Customer example, we would expect fields like Name, Address, and Preferences to become part of the EDS.
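A sketch of the streaming pair, using java.io streams as stand-ins for whatever stream type MOFW actually defines; the point is the ordering discipline, not the stream class:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// The EDS fixes the field order: whatever externalizeToStream()
// writes, internalizeFromStream() must read back in the same order.
class StreamableCustomer {
    private String name;
    private String address;
    private String preferences;

    // Invoked on activation: the stream already holds the EDS data.
    void internalizeFromStream(DataInputStream in) throws IOException {
        name = in.readUTF();
        address = in.readUTF();
        preferences = in.readUTF();
    }

    // Invoked on move/copy/pass-by-value (caching, recovery,
    // persistence): after the call the stream holds the EDS data.
    void externalizeToStream(DataOutputStream out) throws IOException {
        out.writeUTF(name);
        out.writeUTF(address);
        out.writeUTF(preferences);
    }
}
```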
This scheme does not prevent a "non-delegating" implementation (where the identity and business object are combined); it just means that the default value for "mySelf" can be "this" (and be set in the default constructor for an IdentityDelegating business object). Good-citizen clients that do not implement using delegation might still call setSelf() on an IdentityDelegator object with its own reference, in case the implementer did not supply a default.
Many objects never hand out references to "this" (a Date, for example). For this reason, the IdentityDelegator interface is optional. The MOFW philosophy is that interfaces should only be inherited to enable implementation choices naturally made by the OP, not to enable the object to be used in various contexts. This feature is one that separates MOFW from other programming models.
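A sketch of the delegation contract (only setSelf() is named in the text; getSelf() and asReference() are our hypothetical additions for illustration):

```java
interface IdentityDelegator {
    void setSelf(Object self);
    Object getSelf();
}

class DelegatingCustomer implements IdentityDelegator {
    // Non-delegating default: the business object is its own identity.
    private Object mySelf = this;

    public void setSelf(Object self) { this.mySelf = self; }
    public Object getSelf() { return mySelf; }

    // Hand out mySelf rather than "this", so that a framework-supplied
    // identity (if one has been set) is the reference that escapes.
    Object asReference() { return mySelf; }
}
```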
Like the EDS for a Manageable object, a Dependent object will need a dependency map that associates a name with an expected interface type. Just as "internalize" expects the stream to hold the essential data called out in the EDS prior to the call, a Dependent object expects that the KeyCollection passed in has each name bound to an object of the interface type listed in the dependency map. For example, the Order object as described above might have a dependency map with the entry ("customers", "CustomerFactory").
Also like the IdentityDelegator interface, simple objects that do not create other objects using factories as part of their business logic need not subclass from this one.
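A sketch of the Dependent contract under those assumptions (KeyCollection stands in for the MOFW collection type; lookup-by-name is the only operation assumed, and the method names are ours):

```java
import java.util.Map;

interface KeyCollection {
    Object lookup(String name); // assumed lookup-by-name operation
}

interface Dependent {
    // Name -> expected interface type: the "dependency map".
    Map<String, String> dependencyMap();

    // The caller guarantees each name in the map is bound to an
    // object of the listed interface type before this call.
    void resolveDependencies(KeyCollection deps);
}

class DependentOrder implements Dependent {
    private Object customerFactory;

    public Map<String, String> dependencyMap() {
        return Map.of("customers", "CustomerFactory");
    }

    public void resolveDependencies(KeyCollection deps) {
        customerFactory = deps.lookup("customers");
    }
}
```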
The only additional operation provided here is getIdentifier(), which returns a string representation of the identity of the object relative to some BaseCollection (see the next section). This identity can later be passed back to that BaseCollection to reactivate the object.
In implementing business logic for an object, the idea is that a Managed object represents a "by reference" contract (a "by value" relationship is represented by a Manageable object). Therefore this interface represents a shift in focus from those services at the waterline to those above it. That is, the object provider may be using the services provided by a Managed object (and thus Identifiable, Lifecycle, and Manageable, via the inheritance shown). Since a Managed object represents a "contract", we will see (in the next section) how it plays a major role in the Dynamic Model components.
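Pulling the earlier sketches together, the Managed contract might read as follows (whether the BaseCollection is a parameter or implicit is an implementation detail; we show it as a parameter):

```java
interface BaseCollection { /* described in the next section */ }

interface Manageable { /* the streaming/EDS contract sketched earlier */ }

// A Managed object is the "by reference" contract: identity checking,
// lifecycle, and streaming via inheritance, plus a relative identifier.
interface Managed extends Identifiable, Lifecycle, Manageable {
    // String form of this object's identity relative to the given
    // BaseCollection; passing it back reactivates the object.
    String getIdentifier(BaseCollection relativeTo);
}
```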
By definition, a BaseCollection has to be a KeyedCollection. We have broken the link that requires it to be Queryable (and have thus added a subclass, QueryableBaseCollection).
Also of note, a "factory" is usually a subclass of a BaseCollection.
The QueryableCollection and KeyedCollection have special methods (dispatchAll() and dispatch(), respectively) that take a CommandOn object. These operations are generally used by SPs in implementing various services, but they can be used directly by clients to add efficiency in certain situations.
For example, a Customer may have a number of orders that need to be canceled. If a "cancel" CommandOn object exists, then one can be created and passed to the entire orders collection associated with a Customer. This eliminates the need to activate each associated Order in the client just to invoke the cancel() operation.
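A sketch of that idiom (the CommandOn signature and the cancel() operation on Order are assumptions drawn from the description above):

```java
interface CommandOn {
    void execute(Object target); // applied to each element in turn
}

class CancelCommand implements CommandOn {
    public void execute(Object target) {
        // Runs where the collection lives; no client-side activation.
        ((Order) target).cancel(); // assumes Order sports cancel()
    }
}

// Client side: a single dispatchAll() call cancels every order, e.g.
//   customerOrders.dispatchAll(new CancelCommand());
```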
Figure 8: The recipe for success