Wednesday, November 19, 2008

Problems with WebDAV on Vista 64bit

Does anyone have any suggestions for a good, reliable WebDAV product for Vista 64-bit? I cannot get the one that is bundled with Vista to work. Every time I try to create a remote connection I get the following error message:

"The folder you entered does not appear to be valid"

I have run the Vista WebDAV patches:


I have also tried the following Hotfix:

Windows Vista WebDAV Hotfix

I have also tried to get various 3rd-party packages working, such as Independent DAV, BitKinex and WebDrive. All of the WebDAV products seem to have problems with Windows Vista 64-bit systems. Is there any way to get Vista to use the Windows XP WebDAV "drivers"? I am using both the eXist and MarkLogic native XML databases. Thanks - Dan
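One way to narrow down whether the fault is in the Vista WebDAV redirector or in the server is to probe the server directly. Here is a minimal sketch in Python; the URL in the test is hypothetical, and any WebDAV-capable server that answers PROPFIND with 207 Multi-Status is working:

```python
# Sketch: a minimal "does this URL speak WebDAV?" probe using only the
# standard library. The server URL used with it is a placeholder.
from urllib.parse import urlsplit

PROPFIND_BODY = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<D:propfind xmlns:D="DAV:"><D:allprop/></D:propfind>'
)

def build_propfind(url, depth="0"):
    """Return (host, path, headers, body) for a Depth-limited PROPFIND."""
    parts = urlsplit(url)
    headers = {
        "Depth": depth,                      # "0" = just this resource
        "Content-Type": 'text/xml; charset="utf-8"',
        "Content-Length": str(len(PROPFIND_BODY)),
    }
    return parts.netloc, parts.path or "/", headers, PROPFIND_BODY

def probe(url):
    """Actually issue the request; 207 Multi-Status means WebDAV is working."""
    import http.client
    host, path, headers, body = build_propfind(url)
    conn = http.client.HTTPConnection(host)
    conn.request("PROPFIND", path, body=body, headers=headers)
    return conn.getresponse().status   # a 4xx/5xx here explains the failure
```

If the probe returns 207, the server side is fine, which points the finger back at the Vista WebClient service rather than at eXist or MarkLogic.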

Wednesday, November 12, 2008

How to compile the eXist 1.3 build

I have been learning how to use higher-order functions in XQuery. I am using the eXist system to test this. To do this you will need the 1.3 build, for which there is no download (yet). Here are the steps I used:
  1. Download a fresh copy of eXist from svn via TortoiseSVN or the Eclipse Subversion (Subclipse) plugin from the eXist SVN SourceForge repository:
  2. (Confirm that the environment variables JAVA_HOME and EXIST_HOME are set correctly.) To do this I use Windows/Computer/Properties/Advanced Settings and check it with the SET command at the CMD prompt. The result looks like this:
    JAVA_HOME=C:\Program Files\Java\jdk1.6.0_10
    My Eclipse workspace is just "C:\ws" and I created a project called eXist-1.3 so the path name I used for EXIST_HOME is:
  3. Open the DOS cmd prompt and cd to the eXist directory
  4. Type in "build.bat"
  5. Type in "build.bat -f build\scripts\jarsigner.xml"
It took just 1 minute and 6 seconds for the main build to run on my 4CPU system with 8GB of RAM even though it is running Vista. Then I just ran the start.bat in the bin directory. eXist was then running on http://localhost:8080/exist. I had to use the new WebStart admin tool on the admin section of the eXist web page to change the admin password. Thanks to Joe Wicentowski at the US Dept of State for helping out!
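Step 2 is the one that most often goes wrong, so a quick sanity check before running build.bat can save a failed build. A small sketch (the function names are my own invention; the path in the test is just the example from my machine):

```python
# Sketch: verify the environment variables the eXist build expects before
# running build.bat. Paths passed in are examples, not requirements.
import os

REQUIRED = ("JAVA_HOME", "EXIST_HOME")

def missing_vars(env):
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

def check_build_env(env=None):
    """Raise with a readable message if the build environment is incomplete."""
    env = os.environ if env is None else env
    missing = missing_vars(env)
    if missing:
        raise RuntimeError("Set these before building: " + ", ".join(missing))
    return True
```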

Thursday, June 05, 2008

The Finch and the Raccoon

The blogs are down so I have reposted this here. Have you ever wondered if the laws of evolution apply to computer languages? When you walk down the aisle at your favorite bookstore, does it seem like there are actually more computer languages than last year? What forces are driving each of these new languages to evolve?

In 1835 Charles Darwin visited the Galapagos Islands. There he collected what he thought were about a dozen distinct species of birds. Upon returning to England he discovered that each of these species had evolved from a single species of finch. On the various Galapagos Islands the requirements for food gathering were different, but consistent over hundreds of thousands of years: enough time for a single species to adapt to meet consistent requirements. Consider the raccoon: an omnivore that has proved to be one of the most adaptable mammals on Earth. The raccoon's range has rapidly expanded into urban areas due to its ability to quickly adapt to new requirements before other animals have had time for the wheels of evolution to turn.

So it goes with computer languages. Some procedural languages can be quickly adapted to fill the needs of a new niche. When the web was young, procedural languages like Java and JavaScript quickly filled the need for a variety of tasks. As the requirements for building web applications stabilized, declarative systems like CSS, XForms and XQuery started to push procedural languages back into niche areas. As these declarative languages stabilize and become worldwide standards, graphical tools are being created to allow non-programmers to create, manipulate and extend these systems. This is why many of us believe there will always be some need for procedural programming, but certainly not for building standard web applications that are controlled by style sheets and user interaction forms. Like the finch, declarative languages need a little longer to evolve.
It sometimes takes years for a small vocabulary of functional specification patterns to emerge and be given labels. Additionally, it can take years for the standards bodies to agree on the best way to deliver these new languages in a set of semantically precise data elements that have unambiguous interpretations. Finally, it may take another few years for IT managers to realize that they really do lower costs if they avoid vendor-specific implementations and adopt worldwide standards. When CSS first came out you may have been a little reluctant to let web designers play with a rules engine. As XForms becomes ubiquitous you may be resisting change because you have invested so much time and energy learning how to debug JavaScript (without a debugger). You cannot hold back the forces of evolution…and now we all need to adapt to the declarative world or risk our own extinction. If you are interested in more on this topic, see my presentation from the 2007 Semantic Technology Conference, The Semantics of Declarative Systems.

Tuesday, May 27, 2008

XRX: Simple, Elegant, Disruptive

I recently started writing for O’Reilly Media. I posted an article on XRX. Here is the link:
We are just getting started moving to a new Movable Type (MT) system, so not all of the features (like keywords and feedback) are working.
If you have problems commenting on the site, please feel free to post your comments here.
Here is a comment from Arun Batchu:
An elegant introduction to an elegant architecture, Dan. Thank you. The essential takeaway from what you describe is the exploitation of XML from one end to the other end - especially from the Developer's perspective, for, how XRX actually manifests in runtime could be left to implementation technologies. Thus the logical architecture of XRX could be realized by a few variations of concrete technology - which is great. The XForms could be realized by XForms server technology (such as the excellent Orbeon stack), the ReST could be realized by any middle tier and the XQuery could be realized by an XQuery engine (such as Data Direct) that may actually be driving any one or a combination of datastores (XML, SQL, file system or ...). The symmetrical and consistent leverage of XML as a data model from creation to transport to rest and back eliminates a whole lot of wasteful work. Like you point out, XPath is one of the most powerful query systems I have encountered; you can pack so much in so little and reuse it across the board with little if any change from the drawing board to production. In such a system as you describe, a business rule expressed once can be reused anywhere - from one end-to-end, however long the travel, as long as the architecture is XRX, like you have described. Thanks for expressing it so well. A few of your readers will not get it - it is one of those things that once you experience it, you are left wondering why this did not happen before. Oh, well!

You are welcome! Thanks for your feedback, Arun!

Saturday, March 22, 2008

XForms, Dyslexia and the Right Brain

Those of you that have worked with me know that I am a little bit of an odd duck. I have dyslexia. If you ever see me attempt to write on a whiteboard, my spelling is at the ninth-grade level. My left brain, used for phoneme recognition, never really developed like it did for the rest of you. My right brain had to be co-opted to help out. But one of the skills I seem to have picked up due to my over-exercised right brain is the ability to visualize multiple complex application architectures and quickly understand architectural tradeoffs. I am one of the few people that seem to be interested in discussing how XForms, metadata registries, ontologies, the semantic web, OWL, RDF, graphs, business rules, BPM and Kimball conformed dimensions can all work together to deliver elegant and cost-effective enterprise-scale solutions. It seems easy for me to simultaneously visualize two or more architectures, and it constantly challenges my patience when I have to explain over and over why architecture alternatives will not meet a business requirement.

It turns out that many people that have dyslexia also have the gift of being able to visualize complex systems. Albert Einstein, Thomas Edison, Jackie Stewart and Charles Schwab are good examples of dyslexic people who have used the strengths of the right brain to do things that left-brain thinkers could not.

As a right-brain-centric person I need to also tell you that I really love the XForms architecture. The magic of a declarative language, MVC, bindings and a dependency graph makes XForms development 10 years more advanced than anything else I have worked with. I think it is beautiful and elegant. It is everything that AJAX and JavaScript applications are not: clean, simple and easy to visualize (for me at least). When someone asks me if I can create an XForms application to do something, I create a mental image in my mind of the model, the view and how events will update instance data in the model using inserts or external submission results. I can easily visualize the bindings of view controls to data elements in the model. Once I can visualize the application clearly, writing the application is just a matter of typing in the code.

I think my right brain is also the reason I detest JavaScript and AJAX. There is far too much code to read when trying to visualize how 300 lines of JavaScript enable a drag-and-drop. I want to just add an attribute to an element like "drag-source" or "drop-target" and have it just work.

What triggered this posting is that I have been reading Proust and the Squid by Maryanne Wolf. This is a book about how the brain's circuits are used in the reading process. She has a wonderful explanation of how the dyslexic brain co-opts the right brain for reading and enhances its functionality. Before reading it, I didn't really understand the relationship between my defects and my gifts.

So how about you and your development team? Do you have a dyslexic right-brained person on your team? Can they quickly visualize architectural tradeoffs? Have they tried XForms? And if they have, will you be willing to tolerate their disgust with AJAX and JavaScript after they have built their first XForms application?

For more information about dyslexia check out these two Wikipedia entries:

Friday, March 21, 2008

Great Example of Multi-dimensional Bubble Chart

Here is a beautiful example of a bubble chart display using global population statistics. The example on carbon emissions is very interesting. Note the dimensions:
  1. Population of Country (size of bubble)
  2. Continent (color of bubble)
  3. Life Expectancy (vertical axis)
  4. Income (horizontal axis)
  5. Time (the play button)
It is interesting to see the huge impact AIDS has had on life expectancy in African countries. Imagine if you could see your organization's product sales using this type of graph. This application was done with a software system called Trendalyzer. It was initially developed by Hans Rosling's Gapminder Foundation in Sweden and acquired by Google Inc. in March 2007. This version is a Flash application. Does anyone know of any open-source software that could do this?
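For the curious, the mapping from one data record to the five visual channels above can be sketched in a few lines. The continent-to-color table and the sample numbers below are invented for illustration:

```python
# Sketch: how one country's record maps onto the five visual channels.
# The color table and the sample record are made up for illustration.
import math

CONTINENT_COLORS = {"Africa": "blue", "Asia": "red", "Europe": "orange"}

def to_bubble(record, year):
    """Turn one country's stats for one year into bubble-chart attributes."""
    stats = record["by_year"][year]
    return {
        # radius scales with sqrt(population) so bubble AREA tracks population
        "size": math.sqrt(stats["population"]),
        "color": CONTINENT_COLORS[record["continent"]],
        "y": stats["life_expectancy"],
        "x": stats["income"],
    }

sample = {
    "continent": "Africa",
    "by_year": {2007: {"population": 1_000_000,
                       "life_expectancy": 52.0,
                       "income": 1800}},
}
```

The "play button" dimension is then just calling to_bubble for each year in sequence.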

Metadata Repositories vs. Metadata Registries

For several years people have been using the terms metadata Registry and Repository inconsistently, imprecisely and almost interchangeably, and I would like to weigh in on how these terms could be used more precisely to allow organizations to effectively manage metadata processes.

First, let's take the definition of a Repository. Webster defines a repository as "a place, room, or container where something is deposited or stored." Note that there is nothing in this definition about the quality of the things being stored or about a process to check whether new incoming items are duplicates of things already in the repository. If I have 100 users, they could each define "Customer" as they see fit and put their own definition into the metadata repository. No problem.

On the other hand, let's take the word Registry. A Registry has the connotation of more than just a shared dumping ground. Registries have the additional capability to create workflow processes that check that new metadata is not a duplicate (for a given namespace). One of the definitions from Webster is "an official record book." Note the word official.

A Repository is similar to the front porch of a house. No locks prevent new things from landing there. But a Registry is a protected back room where human-centric workflow processes are used to ensure that metadata items are non-duplicate, precise, consistent, concise, distinct, approved and unencumbered by business rules that would prevent reuse across an enterprise. These registries have become the central foundation through which agility can be baked into many enterprise processes. The latest version of Kimball's Data Warehouse Lifecycle Toolkit (which is actually a very good read) even goes as far as to call their process "metadata-driven". Not unlike the model-driven development world.
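The front-porch/back-room distinction can be reduced to a toy sketch. The class and method names below are invented; a real registry would add a human review workflow, versioning and audit trails on top of the duplicate check:

```python
# Sketch: the gatekeeping difference between a repository and a registry.
# A repository accepts anything; a registry rejects duplicates per namespace.
class Repository:
    def __init__(self):
        self.items = []          # the front porch: anything can land here

    def deposit(self, namespace, name, definition):
        self.items.append((namespace, name, definition))

class Registry(Repository):
    def deposit(self, namespace, name, definition):
        # The "official record book": one definition per name per namespace.
        if any(ns == namespace and n == name for ns, n, _ in self.items):
            raise ValueError(name + " is already registered in " + namespace)
        super().deposit(namespace, name, definition)
```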

Registries have the implicit connotation of trust behind them. They now serve as a central process for the creation of shared meaning across the enterprise. Definitions in a registry have been vetted by an enterprise-level organization that has the responsibility of enterprise data stewardship. They have a high probability of being consistent with industry best practices and vertical industry standards. Registries are the go-to source for creating canonical XML schemas, enterprise ontologies or conformed dimensions in an OLAP cube. Repositories are personal or small departmental collections of definitions reflecting an isolated view of the world.

None of these ideas are really new. They are at the core of the ISO/IEC 11179 metadata registry standard. Note that they don't call it a repository standard! People are just now starting to understand how important Registries are in most enterprise-wide systems. The growth of Business Intelligence and Enterprise Data Warehouse terminology and Service Oriented Architectures is a good place to see the rise of repositories and registries. We now see service registries, portlet registries, model registries...the list goes on-and-on.

Much of the background on the differences between the use of repositories and registries can be traced back to the early days of object-oriented systems and the 1995 book Succeeding with Objects by Adele Goldberg and Kenneth Rubin. This was one of the first books on enterprise reuse strategies, and they defined the concept of enterprise asset reuse and the need for a trust-driven repository as a basis for reusing assets. They identified a multi-step process for reviewing new submissions to determine if the submission duplicated existing assets. They showed how critical it was to classify items in a registry and search an existing registry for duplicates before new items are added. If you can get a copy of the book I would suggest you read the section on "Set Up a Process for Maintaining Reusable Assets" on page 245.

The book then goes on to show how organizations can and should be structured to reuse these assets and gives the pros and cons of the differing organization structures and their impact on reuse. This is the basis for the data governance and data stewardship movement in many organizations today.

So the next time someone uses the word registry or repository in a conversation, ask them if they are using the definition of the word that is consistent with the corporate business term registry, or their own private definition from their own repository of imprecisely used buzzwords.

Sunday, March 09, 2008

XForms Tutorial and Cookbook Voted Featured Wikibook!

I am happy to announce that the XForms Tutorial and Cookbook that I have been working on for over a year has been voted a "Featured Book" on the wikibooks web site. A quote from the award:

XForms is a featured book on Wikibooks because it contains substantial content, it is well-formatted, and the Wikibooks community has decided to feature it on the main page or in other places. Please continue to improve it and thanks for the great work so far!

Here are some stats on the book so far:
90 sample programs
116 chapters
626 edits
28,811 words
29 registered authors

One of the big tasks is to start to move some of the advanced XForms examples that require significant server-side logic to a separate server-specific book. As a pilot I have started an XRX cookbook for people using REST interfaces and the eXist server.

I would like to thank the over 30 others that have contributed ideas and content to this wikibook. We still have lots of work to do to clean up the example programs, make them more consistent and add additional examples for new XForms students. But I believe it is one of the best examples of collaborative training that I have worked on in the last few years.

Sunday, January 06, 2008



This posting describes how, using three technologies (XForms, REST and XQuery, or XRX), you can dramatically transform your organization's development methodology. You can move to true model-driven architecture (MDA) that dramatically reduces the temptation to duplicate code.

In January of 2007 I was working on a complex real-estate forms project for a state agency in Minnesota. Our job was to get 87 counties across the state of Minnesota to agree on the over 250 data elements that describe real estate transactions. We had spent months developing a complex XML Schema and web forms with over 50 one-to-many relationships. And our next task was to be able to save the form data into a structure that could be quickly queried by any of these 250 data elements. But the prospect of "shredding" the documents up into 50 distinct SQL INSERT statements and then reconstituting the documents with 50 distinct SQL SELECT statements was going to require doubling our small team of four to at least eight developers. We didn't have the budget or schedule to do this. After chatting with Kurt Cagle, he suggested saving the data to a database that supported XQuery. Both the open source eXist native XML database and our corporate standard, DB2, supported XQuery. It turned out that we could indeed save the entire document with a single command and still perform complex queries on any element in any document. Our project could proceed with a small team and stay on schedule.
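To make that "single command" concrete: eXist exposes every document under its REST interface, so saving the whole form is one HTTP PUT. A minimal sketch, where the host, port and collection path are examples rather than the project's actual values:

```python
# Sketch: saving a whole XML document with one REST call instead of shredding
# it into dozens of SQL INSERTs. Host, port and collection path are examples.
import http.client

def rest_path(collection, name):
    """eXist exposes documents under /exist/rest followed by the db path."""
    return "/exist/rest" + collection + "/" + name

def put_document(host, collection, name, xml, port=8080):
    """PUT one document into an eXist collection; returns the HTTP status."""
    conn = http.client.HTTPConnection(host, port)
    conn.request("PUT", rest_path(collection, name), body=xml,
                 headers={"Content-Type": "application/xml"})
    return conn.getresponse().status   # eXist answers 201 Created on success
```

Retrieving the document again is the matching GET on the same path, which is what makes the whole store feel like one symmetric web service.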

And then an interesting thing happened. Our team started to understand that if we saved additional metadata in these native XML data stores we could accelerate our project even further. Every pick-list in every form could dynamically call REST-enabled web services to get their values. XML Schemas could be stored in the databases and be queried. We had found a simple and elegant solution to the pervasive problem of where to store the model in a model-driven development project.

In the past I had been on large teams of developers that attempted to use model-driven development. But although we started out each project with idealistic goals, each time we were faced with a classic choice: copy, paste and edit the model, or write a transform. The copy-and-paste solution was quick, but it would have to be redone each time the model changed. The transform took longer to write but could then be rerun each time the model changed.

Like most developers, I was always over-optimistic that I had done my homework and done accurate and detailed modeling of the problem. I thought the models were stable and would not change very often. And I was almost always wrong. Models do change, and sometimes in totally unexpected ways.

But by storing our models in a native XML database, an interesting thing started to happen. We could quickly transform the models (stored in XML) into other artifacts with simple XQueries. It became far less tempting to commit the sins of copy/paste/edit when it only took five minutes to create another transform. And most remarkably, each of these transforms was another REST-enabled web service that could be reused by many web clients or other development tools.

So how do you get started? You might try the following steps.

  1. Download the eXist database from
  2. Use a WebDAV interface and copy some XML Schema documents into a collection
  3. Write a few small XQueries that pull metadata out of your XML Schemas. For example, pull all the enumeration values out of an element.
  4. Use those XQueries to drive some aspects of your development such as an XForms selection list.
  5. Compare the amount of code you wrote with any other MDA system. If you don't have at least a 10 to 1 savings I would be very surprised.
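In eXist, step 3 is a short XQuery over //xs:enumeration/@value. The same idea can be sketched with only the Python standard library; the sample schema fragment below is invented for illustration:

```python
# Sketch: step 3 above - pull the enumeration values for one element out of
# an XML Schema. In eXist this is a two-line XQuery; here is the same idea
# in standard-library Python, run against an invented sample schema.
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def enumeration_values(xsd_text, element_name):
    """Return the enumeration values declared under one named element."""
    root = ET.fromstring(xsd_text)
    for el in root.iter(XS + "element"):
        if el.get("name") == element_name:
            return [e.get("value") for e in el.iter(XS + "enumeration")]
    return []

SAMPLE_XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="DeedType">
    <xs:simpleType>
      <xs:restriction base="xs:string">
        <xs:enumeration value="Warranty"/>
        <xs:enumeration value="Quitclaim"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
</xs:schema>"""
```

The returned list is exactly what an XForms select1 needs for its item values, which is how the schema ends up driving the pick-lists.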

What do you think? Do you have a more efficient way to query your model? Let me know!

- Dan