MDA: Nice idea, shame about the ...


May 2004



MDA in a Nutshell

The OMG's aim with MDA is to allow businesses developing bespoke applications to concentrate on determining their business requirements, without being concerned about which particular technology platform will realize those requirements. In other words, the application is specified and implemented in a platform-independent fashion. The OMG's claim is that systems built this way should have a much longer lifespan than any given technology platform; 20 years is suggested by the OMG as typical.

The centerpiece of MDA is the Platform Independent Model (PIM), modelled in UML. Many of the changes introduced in UML 2.0 are there directly to support MDA; notable among them is the Precise Action Semantics, which can be used to describe behaviour.

While the PIM is independent of technology, the Platform Specific Model (PSM) is where the "rubber hits the road". Vendors are expected to use mappings and transformations to convert any given PIM into a PSM, or indeed several PSMs. MDA does not prescribe how this should be performed, but indicates that typically the transformation will be model-to-model, or meta-model-to-meta-model. In concrete terms, many vendors use tags or other metadata to annotate the PIM so that the different PSMs may be generated.
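By way of illustration only: the tag syntax below is made up (each vendor has its own), but it shows the sort of annotated PIM class from which a J2EE transformation might emit an entity bean and its interfaces, while a .Net transformation would emit something quite different.

/**
 * A PIM-level domain class. The @psm tags are hypothetical markers
 * telling each transformation how to map the class onto its PSM.
 *
 * @psm.j2ee    entity-bean
 * @psm.dotnet  serviced-component
 */
public class Customer {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}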

A PIM may give rise to multiple PSMs for two distinct reasons. First, there may be multiple PSMs if the application in its entirety (e.g. a word processor) needs to run on different operating system platforms. More to the point of PIMs and PSMs: if a new platform emerges during the lifetime of the application, then a mapping can be defined to that platform and the application moved across. Second, and as a consequence of supporting the first, MDA is suitable for developing distributed systems made up of components running on different platforms/tiers. For example, there may be a web tier implemented in ASP.NET which talks to a middle tier using EJBs, with the back end being a Sybase RDBMS. MDA can generate the software for the tiers, plus the code to glue it all together.

The OMG vision also has its Meta Object Facility (MOF) playing a central part in MDA. Because the PIM and PSM are expected to be written in UML, itself defined in terms of MOF, it should be possible to define the transformations between them in a MOF-based language. Currently the OMG is working on the QVT (Query/Views/Transformations) specification to support this idea. In other words, QVT would allow the transformations performed by vendors between PIM, PSM and code to be standardized.


Two Flavours of MDA

The various tool vendors endorsing MDA have been looking to reposition and enhance their existing tools in the context of MDA terminology. However, this has led to two different interpretations of the MDA vision.

The first flavour, termed elaborationist, is very much in keeping with the viewpoint the OMG normally describes. There are separate PIMs, PSMs and code, and the PSM is usually annotated in some fashion. Here the claim is for 50-80% code generation, with behaviour typically specified in a 3GL. Vendors in this category include Compuware (OptimalJ) and io-Software (ArcStyler).

The second flavour, termed translationist, says that the complete system must be specified entirely within the PIM. This approach, sometimes also called Executable UML, builds on Shlaer-Mellor's work. From the PIM, both the PSM and the code are generated completely; indeed, in practice the PSM can be substantially ignored. Behaviour is specified using an action semantic language compliant with the Precise Action Semantics. Vendors in this category include Kennedy Carter and BridgePoint.


So What's Right About MDA?

MDA has at its heart some laudable objectives. Most of us would agree with the fundamental idea of MDA that business requirements ought to be considered more important than any particular technology platform. This is not to ignore the fact that sometimes technology can create a business need, as evidenced by the web, 3G or Wi-Fi. But once a business requirement has been established, it could well continue to be realized by a different technology in the future. As already noted, the OMG suggests that an MDA-developed system would have a lifespan of something like 20 years.

The MDA's use of UML, PIMs and PSMs means that the OMG can also claim a number of other benefits for MDA in addition to this primary goal. Notable among these is its support for rapid development and delivery, primarily through its use of transformations and code generation. Depending on the flavour they advocate, MDA vendors claim code generation of 50%, 80% or even 100% of the system, using code generation templates that encapsulate design patterns, automatically capturing best practice.

Another benefit claimed of MDA is that, because of its use of UML as the basis for PIMs and PSMs, it puts a proper emphasis back on modelling and on design.

This is my analysis of MDA's claimed benefits, of course; yours might be different. Compuware, for example, state that the advantages of MDA are productivity, interoperability and quality: these emphasise the second benefit I listed. Still, I need something to structure the rest of this article by, so that's my list.

But let me now ask you:

- Are models important to you?
- Are your platforms that unstable?
- Would your business buy it?
- Isn't having two different flavours of MDA somewhat worrying?
- Could your development process cope?
- Down at the coal-face, could you - honestly - make it work?

How you answer these questions determines whether MDA is achievable for you and your organization. Let me tackle each of these points in turn.


Are Models Important?

What a bizarre question: "of course models are important to me!" you say. Well, perhaps, but does your business care about models too? The best analogy to PIMs and PSMs in current use that I know of is logical data models (LDMs; you might also call them entity/relationship diagrams) and physical data models (PDMs). If you value models then I'm sure you have plenty of LDMs and PDMs around, all nicely kept in sync with a tool such as ERwin or Sybase PowerDesigner. So you have models, but do your business stakeholders take any interest in them? An LDM describing the data that an organization owns is one of the most valuable technical documents an organization can create. The information in a PIM must at least include the information held in an LDM, and that information will be among the most valuable within that PIM. If your business has never taken an interest in LDMs, then it seems unlikely it will take an interest in PIMs either.

Let's tackle this another way. We can make the representatives of the business interested in models if we get the incentives right. So, are your business managers rewarded and incentivised for efficiencies that we know will accrue only in the longer term? Because that is how they must be rewarded if they are to commission a system that costs more to build when one considers only short-term costs. And even if those incentives are there today, are you certain they will remain in place in the future? When a new platform comes along in 10 years, will the appropriate investments be made to create a new set of transformations from the PIM to its new PSM? To be blunt: is your organization fickle, or is it good at staying the course?

But even if you have incentivised business managers who understand and nurture the PIMs developed for them: is your business stable? Are you delivering the same set of customer propositions as you did 20 years ago? A rule of layered architecture is that the less stable depends upon the more stable. If the way you do business with the majority of your customers has radically changed, then your business may be moving as fast as or faster than the technology platforms that support it. Certainly many organizations I know use technology churn as an opportunity to revitalize business processes that are tiring. If the business changes as rapidly as technology, then there's no point in it being independent of its supporting technology.


Are your platforms that unstable?

We all know that the IT industry is forever innovating. But the point is not how often new platforms are defined, but how long they last before they are no longer supported. In a response to an earlier article of mine [1.], the OMG asserted that it was a "succession of short-lived middleware platforms ... keeping companies from realising the long-term return ... needed from their investment in software and integration". But it's worth considering what short-lived might mean:

- The Java platform, still regarded as quite new, has been around since 1995; today's JVMs can still execute code written for JDK 1.0.2, nine years on.
- COM still sits at the heart of Microsoft's Office applications, and it's getting on for 20 years old.
- Microsoft has extended (yet again) the lifetime of the Win16 platform, and provides five years' notice after an end-of-life announcement before discontinuing support for a product.
- The TCP/IP network protocols are some 30 years old, but it'll be a while before the Internet is switched off in favour of Internet2 running IPv6 (which is in any case designed to interoperate with IPv4).
- Even CORBA, which never became the de-facto distributed platform for network interoperability, sits at the heart of EJBs; a CORBA ORB lurks in every Java 1.4 desktop.
- Admittedly, ICL's venerable VME operating system is to be phased out ... in 2012.

Of course I have cherry-picked my examples. There are many other platforms which have been introduced by vendors and then discontinued shortly afterwards. But for enterprise systems at least, the dominance of .Net, Java and web services means that the underlying technology platform landscape looks remarkably stable for the time being. Java is nine years old and in that time it has grown up and attracted widespread acclaim. Though Sun still owns the platform, it is hard to imagine IBM and Oracle letting Sun lose interest in it. And in any case, Java is much bigger than Sun: there are several alternative implementations of the JVM (such as Blackdown's), while at GNU the Classpath project is a clean-room implementation of the Java APIs up to JDK 1.4.

Microsoft, meanwhile, has been playing catch-up (and arguably overtake) with .Net. Clearly derivative of the Java architecture, it represents at least five years' development effort on Microsoft's part. Just as Sun is likely to remain wedded to Java for the foreseeable future, it is hard to imagine Microsoft abandoning .Net for a couple of decades either. Even if it wanted to, it would take at least five years to develop a replacement, and another five to end-of-life the .Net architecture. In the meantime, the Mono project is porting .Net to Linux, and Microsoft itself has ported .Net to BSD. If Microsoft were to quit .Net in the future then, just like the Java platform, there would most likely be a substantial body of open sourcerers to look after it.

In terms of networking these platforms together, it boils down to a single "platform": web services. Everyone but everyone is committed to this: Microsoft, Sun, IBM, Oracle, BEA; the list goes on. Coupled with XML Schema, we even have a neutral standard for representing semantic content. Whether initiatives such as ebXML are able to standardize these semantics to enable seamless B2B and B2C doesn't actually matter; the point is that we already have a platform-independent way for systems to communicate with each other, so we are already at least as far along as MDA intends to go. Even while web services mature, vendors such as Borland are offering products such as Janeva that allow interoperability between the native network protocols of .Net and J2EE.

So in Java and .Net we have two mainstream business platforms that are going to be around for a very long while. Their virtual machine architecture extends that lifetime further still. And in web services we have a common way for them to interact.

In any case, the growing popularity of open source rather undermines the platform-risk argument. If I choose to adopt a proprietary web framework to build my web apps, then obviously I have created a dependency. But if I choose an open source framework (e.g. Struts, WebWork, Tapestry, Spring, Cocoon), then I make my choice based upon the liveliness of that framework's community, and at worst I can support the software myself.

It's not as if MDA removes platform risk. Consider this fragment of a generated application as produced by Compuware's OptimalJ:

//GEN-GUARD:REMOTEVIEW$pidfb3196086219373fb
package breakfast.application.ejb.breakfastorder;
...
import com.compuware.alturadev.application.*;
import com.compuware.log4j.LoggerCategory;

These dependencies on the OptimalJ libraries mean that the generated code must run in some sort of container that provides those services. Moving a PIM over to a competing product would require at least a definition of those services, never mind the difficulty of exporting and importing the information in the PIM.

Consider also what it means to model in an environment where there is no platform. For example, how would you send an email? The programmer must first define an abstraction of this email-sending service, then write a set of transformers to generate a realization of that service in the PSM and code. This seems to me a somewhat pointless reinvention.
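To make that concrete, here is a minimal sketch of my own (not taken from any MDA tool) of what the reinvention looks like on the Java platform: a platform-independent abstraction, followed by the realization a transformer would have to be taught to generate, which does no more than delegate to the JavaMail API that already exists.

import java.util.Properties;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

// PIM-level abstraction: with no platform assumed, even something as
// mundane as sending an email must first be modelled as a service.
interface EmailService {
    void send(String to, String subject, String body);
}

// A PSM-level realization for the Java platform, simply delegating to
// JavaMail. A transformer would need to be written to generate this.
class JavaMailEmailService implements EmailService {
    private final Session session;

    JavaMailEmailService(String smtpHost) {
        Properties props = new Properties();
        props.put("mail.smtp.host", smtpHost);
        this.session = Session.getInstance(props);
    }

    public void send(String to, String subject, String body) {
        try {
            Message msg = new MimeMessage(session);
            msg.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
            msg.setSubject(subject);
            msg.setText(body);
            Transport.send(msg);
        } catch (MessagingException e) {
            throw new RuntimeException("Could not send email", e);
        }
    }
}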


Would your business buy it?

Adopting MDA has some major implications that the business needs to understand. This is assuming you can get the business to understand MDA well enough to want to adopt it in the first place. Any technology that deals in meta-meta-models is going to be a hard sell, no matter how many analysts' reports have been written.

But the business should realize that using MDA requires it to invest early in the project lifecycle, to fund the development and debugging of transformations. Since the cost of development is front-loaded, exposure increases if the project is cancelled.

While you are at it, you ought also to come clean that adopting MDA (in its fullest expression) dramatically reduces the need for Java, C# and VB programmers. Instead, the majority become application modellers, while a smaller number become transformation specialists (in the same way that one might meet a compiler specialist today). That is a massive change of emphasis, representing a major write-off of current training investment.


Isn't having two different flavours of MDA somewhat worrying?

As was described in the introduction, the various tool vendors endorsing MDA have been looking to reposition their existing tools in the context of MDA terminology. This has led to the translationist (or executable UML) flavour and the elaborationist flavour.

In the translationist approach, behaviour is specified using state charts coupled with an implementation of the Precise Action Semantics, that is, with an action semantic language. In effect, this is an extension of the Shlaer-Mellor method. While the translationists can point to many successes, these have predominantly been for real-time or embedded systems, almost always using C++ as the target language. Moreover, the syntax of any given action semantic language is vendor-specific; it would be possible to port models between vendors, but the language nevertheless represents a dependency on a particular vendor.

In the elaborationist approach, behaviour is typically added as a refinement in the PSM or in the code, meaning that there is no need for an action semantic language. On the other hand, since the PSM and code are refined, there is a need to keep the different representations in sync.
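The OptimalJ fragment shown earlier hints at how this is typically done: generated files are bracketed by guard comments, with hand-written refinements confined to marked regions so that regeneration does not overwrite them. Here is a minimal sketch of the idiom; the marker syntax is illustrative, not any particular vendor's.

// Generated from the PSM; regenerated whenever the model changes.
public class BreakfastOrder {

    // GEN-BEGIN: structure -- regenerated, do not edit
    private int quantity;
    public int getQuantity() { return quantity; }
    public void setQuantity(int quantity) { this.quantity = quantity; }
    // GEN-END: structure

    public int calculateCost() {
        // USER-BEGIN: behaviour -- preserved across regeneration
        final int unitCost = 3;        // hand-written refinement
        return quantity * unitCost;
        // USER-END: behaviour
    }
}

Every such region is a place where the code can drift out of step with the model, which is precisely the synchronization burden in question.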

There is a good degree of antipathy between the two camps. The translationists do not allow elaboration of the PSM or generated code "because elaboration is stupid". Leon Starr writes [2.]: "Elaboration destroys the original specification. In fact, the process of mucking up the specification with implementation details usually starts long before the specification itself is complete. So the boundary [between the two] is always fuzzy."

Meanwhile, the elaborationists point to the limited applicability of the translationist approach to date. Warmer, Kleppe & Bast state [3.]: "[it is] only useful in ... embedded software development". They also state: "[the] action semantic language is not high level ... [and] has no standardized syntax".

At a minimum what this means to MDA adopters is that they should not expect PIMs developed with a tool from one camp to be usable in any form in a tool of the other camp. And the fact that two camps exist at all is disconcerting, to say the least. There ain't ever gonna be such a thing as a standard MDA tool.


Would your development process cope?

Adopting MDA has some major implications for the development process too. Whichever flavour of MDA you choose, you will need to put robust procedures and processes in place to make sure that changes (features, enhancements or bug fixes) are made in the right place. In the elaborationist view of MDA, there are four places where a change could be made: the PIM, the PSM, the code, or the transformations. In the translationist view, there are still two: the PIM and the transformations. And whenever a transformation is modified, the issue of versioning arises: a new version of a transformation must be able to preserve a PSM or code originally created by a previous version.

If these robust procedures and processes are missing, changes will start being applied in the wrong place; in other words, maintainers will just start to hack the generated code. And remember: these procedures must also remain good for the next 20 years, so they will need to be deeply ingrained in the culture ... a single advocate for MDA in the company would not be enough for them to survive.

Adopting MDA also requires a move towards agile development practices. While, like many, I'm an advocate of agile processes, they are still foreign to many organizations. The need for agile development follows from the fact that MDA requires models to be treated as software; they are the "as-is" view. Those organizations that used UML only for blueprints or sketches (Fowler's analysis [4.]) will find that MDA does not permit the use of UML in that way.


Would it - honestly - work?

At the coal-face there remain some pretty substantial issues. One of the most significant is the difficulty of expressing behaviour in UML. Interaction diagrams are not suitable for this, since each shows a single interaction for a single scenario between object instances. The trouble with interaction diagrams is that they are instance- rather than type-based.

Using a declarative approach for defining behaviour means using OCL to specify pre- and post-conditions for class operations. While OCL is great for formally defining the semantics of published interfaces between components, it's overkill to attempt to do this for the unpublished interfaces of objects within components. It just adds too much rigour at the wrong points in the development process, and in turn impedes agility. Put it this way: I can't see your average Perl programmer taking kindly to defining the behaviour of their programs in this way. In any case, the approach of translating OCL into a platform-specific language such as Java is largely unproven: toolkits such as Dresden OCL by necessity use reflection extensively, making the generated code hard to debug.
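For the record, here is the sort of declarative specification in question: a minimal OCL example of my own, for a hypothetical Account class, stating what withdraw must achieve without saying how.

context Account::withdraw(amount : Real)
pre:  amount > 0 and amount <= self.balance
post: self.balance = self.balance@pre - amount

This is fine for the published interface of a component; it is rather heavyweight if demanded for every internal operation.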

The translationists solve the behaviour problem using the Precise Action Semantics. However, as I've already indicated, in practice this means using a particular tool vendor's own action semantic language. That means abandoning investments in languages such as Java and C# in favour of a tool vendor's proprietary syntax. The platform comes back to haunt us.

For the elaborationists, such behaviour is added as a "refinement" (their term) to the generated code. In ArcStyler, one adds the behaviour in between commented placeholders: "insert your code here...". But this isn't refinement at all: behaviour goes to the very heart of responsibility assignment, and thus to the very heart of modelling. Omitting it from the PIM means that the PIM is missing significant information. All that such tools are doing is automating the generation of the scaffolding for the static structure of the domain objects.

Another issue is the continuing immaturity of tools, even three years on from MDA's launch. Tool support for generic MOF capabilities, as required by a strict interpretation of the elaborationist approach, is still very patchy. While many UML tools allow PIMs and PSMs to be defined, they are not in themselves MOF-aware: they can import/export MOF XMI format because they understand the UML namespaces, but they cannot be used for arbitrary MOF models. None of the leading UML tools (including Rational Rose, Borland Together and Embarcadero Describe) yet has generic MOF support.

For the OMG's view of elaborationist MDA to succeed, vendors must not only make their UML tools MOF-aware, they must also support QVT (once it is defined). Although I should be careful not to prejudge this, these are the same vendors who have not implemented the OMG's earlier CWM (Common Warehouse Metamodel) in their various data modelling tools, a much easier proposition.


All things considered

Let me recap on the objectives of MDA as I see them. The primary objective is to separate business requirements and analysis from technology. A secondary objective is to provide rapid development; a third is to put a focus back on modelling.

If your business doesn't care about models, and/or you adopt a Java or .Net platform using open source technologies, and/or your business changes as rapidly as the technologies, then that first objective would seem to be moot.

Let's look at the second point: rapid development. Clearly code generation can produce benefits and productivity gains. However, the initial cost of creating the model transformers/code generators will always outweigh the cost of just writing an application. The transformers provided out of the box by the vendor mitigate this cost, but if the business wants a highly custom application (and they probably do, otherwise they would have bought a third-party COTS product), then those transformers will act only as a starter for ten.

The third objective I listed for MDA was its emphasis on modelling. However, that emphasis is at best superficial, at least for the elaborationists, mostly because of the problem I identified of expressing the behaviour of objects, and hence their responsibilities. If a designer cannot easily experiment with assigning different responsibilities to objects, then it's hard for them to create a decent design. Putting all the above together, I really can't see MDA working as the OMG defined it. But let's not be too gloomy; it's not a total write-off:

For a start, MDA seems to have galvanized an interest in code generation. Tools like AndroMDA are powerful code generators, and happen to use an XMI import as their input. That's a long way from what the OMG was trying to accomplish, but it's not to say it isn't useful.

More substantially, the use of metadata annotations in a PIM is remarkably similar to a couple of technologies that will become much more important, namely Java 1.5 metadata coupled with aspect-oriented programming (AOP). When AspectJ is enhanced to define pointcuts based on Java 1.5 metadata tags, such aspects will start to act very much like PSMs.

Aspects may also solve the behaviour problem. The OMG didn't want to use Java as a syntax for the Precise Action Semantics because, as a language, it is not constrained enough: it allows things to be said that are not rightly part of the PIM. However, in much the same way that one might define a UML profile as a set of constraints on a UML model, so aspects can be used to define (what I suppose one might call) a code profile. This could constrain the behaviour expressed in such code to be a strict subset of what is possible in Java itself. (Indeed, one sees this to a limited extent in the EJB specification, where multi-threading and calls to java.awt are not allowed.)
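To give a flavour of what I mean, here is a sketch of my own, assuming the annotation-based pointcuts that AspectJ is expected to gain alongside Java 1.5 metadata; the @Transactional annotation is hypothetical, standing in for a PIM-level marker.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical PIM-level marker: the modeller states *what* an
// operation is (transactional), not *how* that is realized.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Transactional {}

// The aspect plays the role of a PSM, weaving the platform-specific
// realization around any operation carrying the PIM-level marker.
aspect TransactionAspect {

    pointcut transactionalOp() : execution(@Transactional * *(..));

    Object around() : transactionalOp() {
        beginTransaction();              // platform detail lives here
        try {
            Object result = proceed();
            commitTransaction();
            return result;
        } catch (RuntimeException e) {
            rollbackTransaction();
            throw e;
        }
    }

    // Stubs standing in for a real transaction API.
    private void beginTransaction()    { /* ... */ }
    private void commitTransaction()   { /* ... */ }
    private void rollbackTransaction() { /* ... */ }
}

An aspect like this could equally enforce a code profile, using AspectJ's declare error on forbidden constructs (calls to java.awt, say) to constrain what the domain code is allowed to express.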

What MDA has contributed, then, is the definition of a problem which AOP is ideally placed to solve. I can envisage writing software in a year or two where we use Java (or .Net) to write domain objects, and use annotations and aspects to represent the various system-level concerns. The prevalence of open source frameworks that use POJOs (plain old Java objects) is just the beginning of this trend.

And as a Naked Objects advocate (see my articles on TheServerSide [5.]), I can see a future version of the Naked Objects framework that is really just a view aspect over some POJOs. When we have that, then we truly will have a technology that puts the emphasis back on modelling.


Join the doubters

I hope I've at least triggered some scepticism about whether MDA can work as described by the OMG. But I'm not the only sceptic. The OMG are very vocal about vendor buy-in from the likes of IBM, HP, Sun and Borland, claiming at the same time that, yes, MDA is the end of programming as we know it. But those vendors are clearly hedging their bets. Witness, for example, IBM's huge investment in "traditional" IDEs like Eclipse, not to mention its investments in aspect-oriented programming; indeed, in March 2004 at the AOSD conference IBM's CTO Daniel Sabbah stated that "AOP is vital for our survival". That's a very bold statement. And consider the millions of VB programmers that Sun is after with its Project Rave: Sun isn't trying to get them to move to Java by offering them an MDA solution. As for Microsoft? Noticeable by its absence.

But even though I'm sceptical of MDA as described, I do think the problem that the OMG set itself when it came up with the MDA ideas is one worth solving. And that may be its biggest contribution, providing a focus for the use of powerful new technologies such as AOP. But the OMG's own solution to that problem: not for me, thanks.


About the Author

Dan Haywood has worked on large and small software development projects for 15 years, previously at Accenture and Sybase UK, and for the last six years as an independent consultant, trainer and technical writer. Dan is an expert on Together Control Center, having co-authored "Better Software Faster", a book addressing the effective use and customization of Together Control Center. He is also an active member of the Naked Objects community, http://www.nakedobjects.org, and has developed a set of customisations for Naked Objects and Together, downloadable from his website, http://www.haywood-associates.co.uk.


References

[1.] Dan Haywood, "Evaluating the Model Driven Architecture", www.appdevadvisor.co.uk, January 2003.

[2.] Leon Starr, Executable UML: How to Build Class Models, Prentice Hall PTR, ASIN 0130674796.

[3.] Warmer, Kleppe & Bast, MDA Explained: Model Driven Architecture - Practice and Promise, Addison-Wesley, ISBN 032119442X.

[4.] Martin Fowler, "UML as Sketch" and "UML as Blueprint", Martin Fowler's bliki, www.martinfowler.com.

[5.] Richard Pawson, Robert Matthews & Dan Haywood, Naked Objects Architecture series, www.theserverside.com, March-April 2004.

