Archive for November, 2009


Adobe Learns XML, Slowly

I noticed that the draft FXG 2.0 Specification is finally online. It appears that this will be the form of FXG implemented in CS5.

I have been interested in, and somewhat connected to, Adobe’s approach to XML for quite some time. In the mid-1990s, FrameMaker supported SGML, prior to the birth of XML. In 2000, Silicon Publishing worked with Adobe to publicize FrameMaker 6.5 as an XML-capable tool, though FrameMaker+SGML only worked with XML in a cumbersome, awkward way.

I will never forget our first project for Adobe, which was one of the very first Silicon Publishing projects. One Friday in 2000 I went to meet Doug Yagaloff, the publishing genius who led Caxton, and he gave me a copy of Frame+SGML and said I just had to do one simple thing: import an XML document, and export it back out. That was a long weekend! I felt like I must be very stupid; it took me forever to get anywhere at all. Thankfully, on Sunday night I found a “quick guide” online that exemplified the great patience generally characteristic of those working with SGML and document-centric XML. I was able to show Doug an example the following Monday – “it’s hard, isn’t it?” he smiled.

We worked with Adobe on the FrameMaker 7.0 release, which dramatically improved the XML support. Later we put DITA support into FrameMaker for Adobe, which now gives the product a real head start when working with document-centric XML. I am a strong believer in DITA.

That is the core, semantic XML that SGML was oriented around from its foundation. InDesign got some bare-bones support for semantic XML with 2.0, but it goes nowhere near as deep as the support in real XML authoring tools. Probably more interesting in terms of Adobe technology (they bought FrameMaker, but it stands outside their main product offerings) is rendition XML, and this was the area more exciting to us at Silicon Publishing.

Rendition XML

In 1998, when I was still at Bertelsmann, one of our former employees who had moved on to Adobe told me about a very exciting new XML specification: PGML. This made great sense to me, and I was an early enthusiast. It was not long before the PGML effort was subsumed under SVG, and Adobe was a major participant in the SVG spec development effort, with their representative, Jon Ferraiolo, serving as lead editor of the spec itself. The Adobe SVG Viewer became the primary way SVG was viewed on the web, while tools like Batik evolved steadily and browsers (with one huge, notable exception) gradually added support for it. Adobe Illustrator supported and still supports SVG round trip, while InDesign offered SVG export but has since deprecated it.

On another front, document rendition, Adobe also participated in the most significant standard: XSL-FO. Here was a document description language highly similar to FrameMaker’s MIF, and again an Adobe expert, Stephen Deach, led the definition of the specification. FrameMaker never directly supported XSL-FO, but a short-lived server application, Adobe Document Server, offered XSL-FO support via its underlying FrameMaker engine. It was actually a great XSL-FO implementation, but it was never well supported by Adobe and is now extinct.

On the surface you could consider Adobe a leader in standards-based XML for graphic and document formats. However, as I discussed earlier, there is an interesting mix of motives in the involvement of such companies in web and XML standards. When Macromedia Flash was a competitor, an “open standard” like SVG made sense, but after the Macromedia acquisition, it made less sense.

Adobe has gone down the path of proprietary XML namespaces, not unlike their competitor Microsoft. And like Microsoft, whose XAML is highly derivative of SVG, they have not found a reason to re-invent the wheel.

Three XML Namespaces

There are three XML namespaces that appear critical to the future of document and graphic description at Adobe: IDML (InDesign Markup Language), FLA (formerly XFL, the XML description of Flash), and FXG (the graphic model supported by Flex 4 and central to the designer/developer workflow of Flash Catalyst). FLA handles the complete, interactive Flash model (literally replacing the binary .fla format), while FXG is more about static graphic representation. Theoretically, FXG is open source, as is the Flex SDK, but these remain extremely Adobe-centric efforts.

FXG and FLA have some strong similarities to SVG; in fact, Adobe acknowledges the partially derivative nature in the specs. Of course there are differences between what was specified in SVG and what is natural to the graphic model underlying Flash: it appears that SVG would have been difficult to implement across the board, given how Flash was built and the goals of Flex, yet they used SVG syntax directly where it did fit the Flash model.
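As a rough illustration of the family resemblance, here is the same red triangle sketched first as SVG and then as FXG. Both fragments are invented for this post, and the FXG one is written from my reading of the draft 2.0 spec, so treat the details as approximate:

<!-- SVG: path data and fill expressed as attributes -->
<svg xmlns="http://www.w3.org/2000/svg">
  <path d="M 0 0 L 100 0 L 100 100 Z" fill="#FF0000"/>
</svg>

<!-- FXG: nearly the same path syntax, with fills as child elements -->
<Graphic version="2.0" xmlns="http://ns.adobe.com/fxg/2008">
  <Path data="M 0 0 L 100 0 L 100 100 Z">
    <fill>
      <SolidColor color="#FF0000"/>
    </fill>
  </Path>
</Graphic>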

FXG is becoming a very powerful specification now that the Text Layout Framework is built into it. Flash is able to render FXG, and Illustrator is able to import and export it. With CS5, the designer/developer workflows and the general interaction between print-centric and web-centric work should improve considerably.

IDML is not derived from XSL-FO, but it does bear a general similarity to it, especially compared to the earlier INX XML format for InDesign: IDML is at least a complete document object model, whereas INX was merely a set of instructions telling the scripting DOM how to create the document. It is too bad that Adobe has not managed to reconcile the text engine of InDesign with that of Flash: it appears that IDML will, for the near term, stay quite separate from the other Adobe namespaces.

To me, FLA is the most exciting new XML namespace coming from Adobe, but it won’t really be exciting until we have an FLA server that can compile FLA to SWF quickly. Dynamic content is possible with Flash in many ways already, but the possibility of making the entire SWF dynamic and manipulating that content in arbitrary ways with XML tools should bring the form of publishing power we envisioned with SVG to life once and for all.

As they have tended to miss the boat on server applications of their technology, Adobe appears slow to perceive the value of such a thing (I once asked Kevin Lynch for a Photoshop server – he questioned whether anyone would want it, citing experience with Macromedia Generator). It is an interesting question which group an FLA server might come out of; such a product could be conceived as natural to InDesign Server, to Flash Media Server (or some other work of the Flash Platform group), or to Scene7 (which has very powerful SaaS rendition capabilities, some of which are based on FXG). We are lobbying…

Adobe has finally built some XML foundation under their rendition models, and we are able to attain many of the things we dreamed of back in the SVG/XSL-FO days, via XML if not via open XML standards. I don’t have big hopes for Adobe integrating semantic XML into their core products (FrameMaker being the black-sheep exception) beyond simple metadata (XMP is good enough there, but document-level metadata is trivial compared to true semantic XML). Hopefully the power of their rendition technology, with its new XML underpinnings and consequent greater extensibility, will provide a foundation that enables other companies and open source efforts to build tools that bring the deeper vision of XML publishing to life.


Scripting Hyperlinks in InDesign

I was asked how to script hyperlinks in InDesign by someone who dug up an old article I wrote many years back, when I still used VBA for InDesign. Nowadays I only use JavaScript for scripting, but hopefully this basic example will help:

// Assumes the first page of the frontmost document contains a text frame with text.
var myDocument = app.documents.item(0);
var myPage = myDocument.pages.item(0);

// Get a reference to the text frame and the paragraph that will carry the link.
var myTextFrame = myPage.textFrames.item(0);
var myParagraph = myTextFrame.paragraphs.item(0);

// Create the URL destination, the text source, and the hyperlink that ties them together.
var myHyperlinkURL = myDocument.hyperlinkURLDestinations.add("http://www.publishingsilicon.com");
var myHyperlinkSource = myDocument.hyperlinkTextSources.add(myParagraph);
var myHyperlink = myDocument.hyperlinks.add(myHyperlinkSource, myHyperlinkURL);

// Don't draw a visible border around the hyperlink in the layout.
myHyperlink.visible = false;

This assumes a text frame with some text in it already exists, and that when you export the PDF you either check Include Hyperlinks in the PDF Export dialog or set the equivalent preference in code if exporting programmatically, as in the sketch below.
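For the programmatic route, something along these lines should work. It is only a minimal sketch continuing from the script above: the output path is a placeholder, and the exact exportFile arguments vary a little between InDesign versions.

// Scripting counterpart of the "Include Hyperlinks" checkbox in the PDF Export dialog.
app.pdfExportPreferences.includeHyperlinks = true;

// Export the document to PDF without showing the dialog; the path is just a placeholder.
myDocument.exportFile(ExportFormat.pdfType, new File("/tmp/hyperlink-test.pdf"), false);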

I need to get around to updating that data-generated example some day. Amazing how many people around the world seem to stumble upon it.


The Two Perspectives on XML

by Max Dunn


I have been working with XML since it was a glimmer in the eye of Jon Bosak. Before XML was conceived, there was SGML; going from SGML to XML was a streamlining for the web, but at the core there was not much functional difference: XML is, in fact, a subset of SGML. The concept of semantic markup is central to the value of SGML/XML.

The two main perspectives I have seen are Document-centric XML and Data-centric XML. SGML initially appeared in support of document-centric work: managing all the technical documents or contracts of IBM or Boeing, for example. Charles Goldfarb has maintained that “SGML literally makes the infrastructure of modern society possible” and I think he’s right – hmm, should we blame him for the lengths to which humans have gone to destroy the earth?

The document-centric XML world is really a direct continuation of SGML. When XML came out as a standard in 1998, those of us working with document-centric XML became giddy with excitement, anticipating that the standards being proposed at the time (notably XML itself, XLink, XML Schema, RDF, XSL, and precursors to SVG) would finally facilitate tools that made publishing work for organizations that weren’t quite as big as IBM or the Department of Defense. The vision of a semantic web and ubiquitous XML multi-channel publishing seemed to be gaining a foundation, with theories reaching critical mass and apparent support from software companies. It appeared these vendors might actually adopt the standards of the committees they were sitting on. “Throw away Xyvision!” I told my boss at Bertelsmann. “This XSL-FO will completely revolutionize database publishing!”

We were sorely disappointed over the next five years. In the years before 1998, W3C standards seemed magical; concepts from the standards were implemented relatively quickly, without perfection but with steady progress: browser updates would reflect CSS and HTML advances, and even Microsoft was shamed into some level of compliance. But the monopolistic tendencies of those sitting on the standards committees, coupled with the academic approach of some of those committees, made it less and less likely that a given standard would find a functional implementation.

And there was that other perspective – the data-centric side of things. For many reasons, XML was at the right place at the right time in terms of data management and information exchange. In fact, the very year that XML became a standard, it also became the dominant way that machines (servers) talked to each other around the world. It was highly convenient for exchanging information: firewalls tended to block anything but text over HTTP, XML markup could describe any sort of data structure, and validation tools ensured no information was lost.

In 1998, if you asked a programming candidate “what do you know about XML?”, only the document-centric people would know anything. By 2000, everyone doing any serious programming “knew” about XML. Trouble was, they typically knew about “XML” only in the much easier-to-use, irrelevant-to-publishing sense.
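To make the distinction concrete, here is a rough sketch of the two flavors; both fragments are invented for illustration. Document-centric XML is mostly prose with semantic markup woven into it (order and context matter), while data-centric XML is essentially a serialized record:

<!-- Document-centric: mixed content, markup embedded in narrative text -->
<warning>
  <para>Do not disconnect the <uicontrol>power cord</uicontrol> while the
  firmware update described in <xref href="update.xml"/> is in progress.</para>
</warning>

<!-- Data-centric: a serialized record, essentially fields and values -->
<order id="10023">
  <customer>ACME Corp</customer>
  <item sku="A-1137" qty="12"/>
  <total currency="USD">431.88</total>
</order>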

And the standards now had to accommodate two crowds. The work of the W3C XML Schema Working Group, in particular, showed the disconnect. Should a schema be easily human readable? What was the primary purpose of a schema? Goals were not shared by the document- and data-centric sides, and the data-centric side won out, as it has tended to dominate the XML space ever since. RELAX NG came about as an alternative, and if you contrast RELAX NG with W3C Schema, you will see the contrast between the power of a few brilliant individuals aligned in purity of purpose and the impotence of a committee with questionable motives and conflicting goals. Concurrent with a decline in the altruism of committee participants was the huge advance of data-centric XML and the disproportionate representation of that perspective.
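To give a feel for that contrast, here is a tiny, invented content model (a book with a title and one or more authors) expressed first in RELAX NG compact syntax and then as the equivalent fragment of a W3C XML Schema; the element names are placeholders, but the difference in readability is typical:

element book {
  element title { text },
  element author { text }+
}

<xs:element name="book">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="title" type="xs:string"/>
      <xs:element name="author" type="xs:string" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>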

Ten years later, we find in the document-centric world that toolsets related to XML in a data sense – parsing, transforming, exchanging info – have made great leaps forward, but in many ways we are still stuck in the 1990s in terms of core authoring and publishing technologies. It is telling that descendants of the three great SGML authoring tools of 1995 – FrameMaker+SGML, Arbortext Epic, and SoftQuad’s Author/Editor – are, lo and behold, the three leading XML authoring tools in 2009.

There have been some slow-paced advances in document-centric XML standards and tool chains as well, especially the single bright light out there for us, the Darwin Information Typing Architecture (DITA), which came out of IBM as SGML itself did. Yet the standards for rendition, XSL-FO and SVG especially, have not advanced along with core proprietary rendition technologies such as InDesign, Flash, or Silverlight, though all of these enjoy nicely copied underpinnings pillaged from the standards. More important, nothing has stepped in to replace the three core authoring tools: the “XML support” of Microsoft Word and Adobe InDesign, for example, does not approach the capabilities of a true XML authoring application. There is a proliferation of XML “editors,” but most of the new ones are appropriate for editing a WSDL file or an XML message (the data-centric forms of XML), not a full-fledged document.

Meanwhile, on the data-centric front, XML has simply permeated every aspect of computing. There are XML data types in database systems, XML features in most programming languages, XML configuration files at the heart of most applications, and XML-based Web Services available in countless flavors.

Document-centric XML is simply a deep challenge that will take more time (and probably more of a commercial incentive) to tackle. For the time being, structured authoring managed the XML way is still implemented mainly by very large organizations: the approach has “trickled down” from organizations the size of IBM to organizations the size of Adobe (which does, in fact, use DITA now), but there are not yet tool chains that will bring it down much further. The failure of the W3C XML Schema Working Group to provide a functional specification supporting document-centric XML can hardly be overstated.

As long as content is not easily authored in a semantically rich, structured fashion, the vision of the semantic web will remain an illusion. When and if document-centric XML gets more attention from standards bodies and software vendors, human communications will become far more efficient and effective.


Welcome

Welcome to the first post of my new blog. I am Max L. Dunn. While there are plenty of other Max Dunns out there (I am often mistaken for Max S. Dunn, for example), I’m the one who co-founded Silicon Publishing, a company devoted to publishing solutions, back in 2000. We automate data-driven publishing, build graphic and layout software, and increasingly connect web and print publishing workflows. We’re immersed in Adobe technology (Adobe is both a partner and a client), focused most on Adobe InDesign Server, the connection of that composition engine to data, and the reconciliation of that technology with HTML5 and Adobe Flash.

My deep long-term interest is XML from a document-centric perspective. We put DITA into FrameMaker for Adobe back in Frame 7.2, after helping make 7.0 work with XML in the first place, and continued to help Adobe with DITA in Frame 8 and 9 as well. We also developed our own Frame/DITA plug-in with Leximation for those who are really serious about such things. At this point our semantic XML work doesn’t connect very directly to our Web/InDesign Server work; I expect one day it will. I co-wrote a chapter of the XML Handbook with Charles Goldfarb on WYSIWYG XML authoring, and realizing that vision in the InDesign/web world looks more attainable each year; it is slowly working its way onto the roadmap for our Silicon Designer product.

I am big on standards, in theory: I am owner of the SVG Developers’ Group, for example, and I have tons of experience with XSLT and XSL-FO. Yet the sad reality is that as of 2009 such standards are rarely used directly. Rather, we find such standards copied into proprietary “standards” by the large software companies that we still depend on for software that actually works.

In this blog, I’m going to be sharing opinions, information, and stories of my life in publishing technology. Posts will range from opinionated rants to factual explanations of how to tackle the challenges we face in our day-to-day work. I hope to be joined by some in Silicon Publishing in these writing efforts.

I’d like to hear from you. Help guide this blog by posting your own comments, resources, knowledge, and opinions. If you have questions about web-to-print technologies, Adobe tools, or XML standards, let me know. I’ll do my best to answer them here.