Posts Tagged With ‘XML’


Silicon Designer Test Strategy Overview

by Lanette Creamer


Starting Assumptions

When I began testing at Silicon Publishing in the spring of 2011, my initial focus was to understand the complete (round-trip) web-to-print workflow: in other words, every step from the creation of a document through final delivery of the end product to the user. Having been part of the InDesign team at Adobe for the previous 10 years, including working with InDesign Server, I had expected this to be the domain I’d be working in. Just as we had many different suite configurations at Adobe, I knew that not every Silicon Designer would be the same, so that was not a new testing idea for me. I quickly learned that my perspective would need to change: since we customize Silicon Designer implementations for each partner, we do not give each of our clients an identical, shrink-wrapped product, or a different subset or grouping of products, the way I had been used to testing and delivering the Adobe Creative Suite.

I quickly realized that what we do offer is something much more: Silicon Designer is a great product that excels at enabling granular editing of complex InDesign documents on the web, and then uses the power of InDesign Server to produce full, high-resolution output for professional print, all within the context of a raft of customizable features.

Shifting Gears

In my prior experience, product delivery consisted of a digital download or a set of DVDs. In contrast, our customized products, although they share core code and many features, are distinguished by nuanced requirements, with unique ways of working for entirely different types of end users. There are even unique, custom branding aspects to consider. The net result is that at Silicon Publishing, I have changed from grouping my testing by customer type to a test strategy that embraces testing multiple, custom-branded Silicon Designer deliverables in the context of unique and concurrent engagements.

Core Product Quality

Of course, the quality and maintenance of the core product code is at the center of all QA initiatives. But customer feedback also plays a crucial role in assuring the quality of our deliverables, providing key data points that help us to better meet customer requirements and expectations, both of which have a tendency to evolve. During iterative testing, we use customer feedback as part of the data stream informing our development and bug fix schedules. To more efficiently accommodate customer input, we needed to increase testability of both our core and customized product features, so that it would be possible to test more frequently, more accurately and with better results.

[Image: a smiling cat that can’t be trusted. Caption: “Only slightly less well known: never trust an untested build.”]

Silicon Designer core feature testing is broken down by functionality. Below is a quick overview of the main ways I think about Silicon Designer testing; feel free to ask or comment if I’ve left something out.

  1. Conversion, Packaging, Custom Scripting
  2. Services
  3. Silicon Designer Editing in Browser Actions
  4. Data Throughput
  5. Performance
  6. Error Handling
  7. Help and Documentation
  8. Integration
  9. Preview (Both 3D and Print Previews)
  10. Output and Printing (Imposition, Bleeds, Folds, Die Cuts, etc.)
  11. Internationalization/Globalization
  12. Fonts, Configuration Files, and Setups
  13. Template Editor Settings
  14. Roundtrip Testing (End to End Testing)
  15. Exploratory Testing (Session Based Testing)

Cloud Testing

You may notice that I haven’t called out our cloud testing. The reason is that it’s rare that we do anything other than cloud-based testing. We live and breathe cloud-based testing: everything has lived in the cloud on Amazon AWS for quite some time. We also use ExtendScript and InDesign Server to test locally, but that is done for isolation and troubleshooting rather than as standard practice. We test using InDesign Server installed on cloud instances or on a business-strength server because that best reflects our clients’ environments.

[Image: a cloudy sky by the Space Needle. Caption: “Cloudy? We call this normal.”]

Quality as Defined by Clients

To achieve the goal of incorporating client feedback faster, our developers changed the way they checked in code, and our team adopted both Git and Jenkins to help us manage builds and releases. To better manage our Agile tickets and client priorities, we added Pivotal Tracker to our toolset.

But perhaps most importantly, we worked with our clients to ensure that we developed and tested code against their actual production documents and resources, rather than our generic test data. We did this so that our acceptance test results would stand or fall in the context of their users and their workflow, as opposed to evaluating performance based primarily on results achieved in our own server environment, testing a feature set we happened to be particularly eager to evaluate, or using digital assets not perfectly reflective of the client’s environment.

This means that when a build passes our acceptance test, real client content has successfully made the end-to-end journey: it begins in InDesign, passes through the Silicon Designer web interface for editing, and the resulting documents are then managed by InDesign Server and output to a custom PDF or whatever other delivery channel the client has specified.

[Image: a broken mug with the word “Quality” written on it. Caption: “If the client can’t use it, it isn’t quality to them.”]

In our agile methodology, we use stories and tickets to prioritize our work with clients. This helps our developers and testers coordinate our efforts externally with client schedules, and manage our team internally via a series of interrelated sprints. We always try to schedule feature delivery to align with client priorities within the business context, which of course also entails honoring budgetary constraints. With this approach, our testers and developers work only on the tasks and tickets applicable to the current sprint.

Testers, Developers, and Project Managers (Oh My!)

Silicon Designer projects are bolstered by our best leadership, including dedicated project managers who function as an integral part of the delivery team. They help us with everything from getting the right UI design in place, to managing features & client feedback, as well as scheduling (and meeting!) our sprint and overall delivery milestones.

The entire team continually strives to broaden its skill sets, and in this spirit, our project managers have even been known to help with testing on occasion! It is a joy to see the team’s understanding of requirements and goals clarified and deepened by the project managers, whose role includes maintaining an intimate understanding of the project context as well as achieving fluency with the unique demands of the customer’s business space.

In the context of testing, I find the main advantage of agile methodology is how it allows us to close the loop and use feedback from our customers for closer coordination. As much as this has improved our development, it has entirely revolutionized our testing approach.


DITA to take over the world…

DITA will take over the world… or maybe more like lie under it, as XML does currently.

From my perspective, DITA (or a good part of DITA; there is also the tech doc focus) is the next step in core SGML/XML. IBM started SGML itself, and later had a fair amount to do with XML: now the same sort of people are working on DITA, making XML safe for the world.

DITA extends SGML constructs such as entities with new mechanisms such as the conref. Everyone loves the idea of content re-use, but XML 1.0 is a bit too flexible in this regard: it doesn’t say much about *how* you re-use, associate, and aggregate content, so tools either do the same thing in different ways or don’t support re-use well at all. DITA fixes this, and concurrently applies the fix to tech doc.
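
For the uninitiated, here is a minimal sketch of a conref (the file name and element IDs are invented for illustration). A paragraph is maintained once in a shared topic, and any other topic pulls it in by reference rather than by copy:

    <!-- shared.dita: the single source of a legal notice -->
    <topic id="shared">
      <title>Shared content</title>
      <body>
        <p id="legal-notice">Copyright and warranty text, maintained once.</p>
      </body>
    </topic>

    <!-- elsewhere, any topic re-uses that paragraph by reference: -->
    <p conref="shared.dita#shared/legal-notice"/>

Because the re-use mechanism is defined by the architecture rather than left to each vendor, every conforming DITA processor resolves the reference the same way.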

DITA is based on the practical experience of some IBM tech doc teams and while their goals and requirements were specific to tech doc, many of the core constructs are not.

Similar to XML itself, which is a meta-language (a language for creating languages), DITA has a powerful specialization methodology that allows for completely custom document structures while retaining backwards compatibility with the core DITA constructs. If your <eBookPara> tag is read by a DITA rendition tool that only knows DITA’s <p>, you will at least get things rendered, though perhaps not in the special “eBook” way that you prefer. At least the tools don’t break.
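
The mechanism behind that graceful fallback is DITA’s class attribute, which records each element’s ancestry. A sketch using the hypothetical <eBookPara> above, declared as a domain specialization of <p> (the “ebook-d” module name is invented):

    <!-- the class attribute (normally fixed in the DTD, not typed by
         authors) traces the element back to topic/p -->
    <eBookPara class="+ topic/p ebook-d/eBookPara ">
      A processor that knows nothing of eBooks still sees topic/p here
      and renders this as an ordinary paragraph.
    </eBookPara>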

It is somewhat confusing that the drivers for DITA remain squarely in the Tech Doc space, yet the solution it provides is often fairly universal. Maybe what DITA needs to do is split into the tech-doc specific DITA and the generic DITA, the way XSL split into XSLT and XSL-FO.


Adobe Learns XML, Slowly

I noticed that the draft FXG 2.0 Specification is finally online. It appears that this will be the form of FXG implemented in CS5.

I have been interested in, and somewhat connected to, Adobe’s approach to XML for quite some time. In the mid-1990s, FrameMaker supported SGML prior to the birth of XML. In 2000, Silicon Publishing worked with Adobe to publicize FrameMaker 6.5 as an XML-capable tool, though FrameMaker+SGML only worked with XML in a very cumbersome, awkward way.

I will never forget our first project for Adobe, which was one of the very first Silicon Publishing projects. One Friday in 2000 I went to meet Doug Yagaloff, the publishing genius who led Caxton, and he gave me a copy of Frame+SGML and said I just had to do one simple thing: import an XML document, and export it back out. That was a long weekend! I felt I must be very stupid; it took me forever to get anywhere at all. Thankfully, on Sunday night I found a “quick guide” online that exemplified the great patience generally characteristic of those working with SGML and document-centric XML. I was able to show Doug an example the following Monday. “It’s hard, isn’t it?” he smiled.

We worked with Adobe on the FrameMaker 7.0 release, which dramatically improved the XML support. Later we put DITA support into FrameMaker for Adobe, which now gives it a real head start when working with document-centric XML. I am a strong believer in DITA.

That is the core, semantic XML that SGML was oriented around from its foundation. InDesign got some bare-bones support for semantic XML with 2.0, but it goes nowhere near as deep as the support in real XML authoring tools. Probably more interesting in terms of Adobe technology (they bought FrameMaker, but it stands outside their main product offerings) is rendition XML, and this was the area more exciting to us at Silicon Publishing.

Rendition XML

In 1998, when I was still at Bertelsmann, one of our former employees who had moved on to Adobe told me about a very exciting new XML specification: PGML. This made great sense to me, and I was an early enthusiast. It was not long before the PGML effort was subsumed under SVG, and Adobe was a major participant in the SVG spec development effort, with their representative, Jon Ferraiolo, serving as lead editor of the spec itself. The Adobe SVG Viewer became the primary way SVG was viewed on the web, while tools like Batik evolved steadily and browsers (with one huge, notable exception) gradually added support for it. Adobe Illustrator supported and still supports SVG round trip, while InDesign offered SVG export but has since deprecated it.

On another front, document rendition, Adobe also participated in the most significant standard: XSL-FO. Here was a document description language highly similar to FrameMaker’s MIF, and again an Adobe expert, Stephen Deach, led the specification definition. FrameMaker never directly supported XSL-FO, but a short-lived server application, Adobe Document Server, offered XSL-FO support via its underlying FrameMaker engine. It was actually a great XSL-FO implementation, but it was not well supported by Adobe and is now extinct.
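
For those who never met XSL-FO: it describes pages and flows directly in XML, much as MIF describes FrameMaker documents. A minimal sketch of a complete document, which should run through any conforming formatter such as Apache FOP:

    <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
      <fo:layout-master-set>
        <!-- define one page geometry -->
        <fo:simple-page-master master-name="letter"
            page-width="8.5in" page-height="11in" margin="1in">
          <fo:region-body/>
        </fo:simple-page-master>
      </fo:layout-master-set>
      <!-- pour content onto pages using that geometry -->
      <fo:page-sequence master-reference="letter">
        <fo:flow flow-name="xsl-region-body">
          <fo:block font-size="12pt">Hello, rendition XML.</fo:block>
        </fo:flow>
      </fo:page-sequence>
    </fo:root>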

On the surface you could consider Adobe a leader in standards-based XML for graphic and document formats. However, as I discussed earlier, there is an interesting mix of motives in the involvement of such companies in web and XML standards. When Macromedia Flash was a competitor, an “open standard” like SVG made sense, but after the Macromedia acquisition, it made less sense.

Adobe has gone down the path of proprietary XML namespaces, not unlike their competitor Microsoft. And like Microsoft, whose XAML is highly derivative of SVG, they have not found a reason to re-invent the wheel.

Three XML Namespaces

There are three XML namespaces that appear critical to the future of document and graphic description at Adobe. These are IDML (InDesign Markup Language), FLA (formerly XFL, the XML description of Flash), and FXG (the graphic model supported by Flex 4 and central to the designer/developer workflow of Flash Catalyst): FLA handles the complete, interactive Flash model (literally replacing the binary .fla format) while FXG is more about static graphic representation. Theoretically, FXG is open source, as is the Flex SDK, but these remain extremely Adobe-centric efforts.

FXG and FLA have some strong similarities to SVG; in fact, Adobe acknowledges their partially derivative nature in the specs. Of course there are differences between what was specified in SVG and what is natural to the graphic model underlying Flash: it appears that SVG would have been difficult to implement across the board, given how Flash was built and the goals of Flex, yet they used SVG tags directly where SVG did fit the Flash model.
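
To make the family resemblance concrete, here is roughly the same rectangle in both namespaces (the FXG version follows my reading of the draft 2.0 spec; treat the details as illustrative):

    <!-- SVG: fill expressed as an attribute -->
    <svg xmlns="http://www.w3.org/2000/svg" width="100" height="50">
      <rect x="0" y="0" width="100" height="50" fill="#FFAA00"/>
    </svg>

    <!-- FXG: same shape, but the fill is a child element,
         following the Flash/MXML property model -->
    <Graphic version="2.0" xmlns="http://ns.adobe.com/fxg/2008">
      <Rect x="0" y="0" width="100" height="50">
        <fill>
          <SolidColor color="#FFAA00"/>
        </fill>
      </Rect>
    </Graphic>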

FXG is becoming a very powerful specification, now that the Text Layout Framework is built into it. Flash is able to render FXG and Illustrator is able to import/export FXG. With CS5 the designer/developer workflows and the general interaction between print-centric and web-centric work should become much better.

IDML is not derivative of XSL-FO; it bears only a very general similarity. Especially compared to the earlier INX XML format for InDesign, it is at least a complete document object model, whereas INX was merely instructions to the scripting DOM on how to create the document. It is too bad that Adobe has not managed to reconcile the text engine of InDesign with that of Flash: it appears that IDML will, for the near term, stay quite separate from the other Adobe namespaces.
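
A fragment of an IDML story file shows what “a complete document object model” means in practice: the markup is declarative document state, not a recipe of scripting calls (element names per the IDML spec; attribute values abridged for illustration):

    <Story Self="u123">
      <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
        <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/Emphasis">
          <Content>The text and its styling live here as document state.</Content>
        </CharacterStyleRange>
      </ParagraphStyleRange>
    </Story>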

To me, FLA is the most exciting new XML namespace coming from Adobe, but it won’t really be exciting until we have an FLA server that can compile FLA to SWF quickly. Dynamic content is possible with Flash in many ways already, but the possibility of making the entire SWF dynamic and manipulating that content in arbitrary ways with XML tools should bring the form of publishing power we envisioned with SVG to life once and for all.

As they have tended to miss the boat on server applications of their technology, Adobe appears slow to perceive the value of such a thing (I once asked Kevin Lynch for a Photoshop server; he questioned whether anyone would want it, citing experience with Macromedia Generator). It is an interesting question which group an FLA server might come out of; such a product could be conceived as natural to InDesign Server, to Flash Media Server (or some other work of the Flash Platform group), or to Scene7 (which has very powerful SaaS rendition capabilities, some of which are based on FXG). We are lobbying…

Adobe has finally built some XML foundation under their rendition models, and we are able to attain many of the things we dreamed of back in the SVG/XSL-FO days, via XML if not via open XML standards. I don’t have big hopes for Adobe integrating semantic XML in their core products (FrameMaker being a black sheep outlier), beyond simple metadata (XMP is good enough here, but document-level metadata is trivial compared to true semantic XML). Hopefully the power of their rendition technology with its new XML underpinnings (and consequent greater extensibility) will provide a foundation that enables other companies and open source efforts to make tools that bring the deeper vision of XML publishing to life.


The Two Perspectives on XML

by Max Dunn


I have been working with XML since it was a glimmer in the eye of Jon Bosak. In fact, before XML was conceived, there was SGML; going from SGML to XML represented a streamlining for the web, but at its core there was not much functional difference; in fact XML is a subset of SGML. The key concept of semantic markup is central to the core value of SGML/XML.
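
A one-line illustration of what “semantic” means here: the markup names what the content is, and leaves what it looks like to downstream processing (the warning element is invented for illustration):

    <!-- presentational: says only how it should look -->
    <b>Do not remove the fuel rod by hand.</b>

    <!-- semantic: says what it is; rendering, filtering, and
         retrieval can all key off the meaning -->
    <warning severity="high">Do not remove the fuel rod by hand.</warning>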

The two main perspectives I have seen are Document-centric XML and Data-centric XML. SGML initially appeared in support of document-centric work: managing all the technical documents or contracts of IBM or Boeing, for example. Charles Goldfarb has maintained that “SGML literally makes the infrastructure of modern society possible” and I think he’s right – hmm, should we blame him for the lengths to which humans have gone to destroy the earth?

The document-centric XML world is really a direct continuation of SGML. When XML came out as a standard in 1998, those of us working with document-centric XML became giddy with excitement, anticipating that the standards being proposed at the time (notably XML itself, XLink, XML Schema, RDF, XSL, and precursors to SVG) would finally facilitate tools that made publishing work for organizations that weren’t quite as big as IBM or the Department of Defense. The vision of a semantic web and ubiquitous XML multi-channel publishing seemed to be gaining a foundation as theories reached critical mass, with the apparent support of software companies. It appeared these vendors might actually adopt the standards of the committees they were sitting on. “Throw away Xyvision!” I told my boss at Bertelsmann. “This XSL-FO will completely revolutionize database publishing!”

We were sorely disappointed over the next five years. In the years before 1998, W3C standards had seemed magical; concepts from the standards were implemented relatively quickly, without perfection but with steady progress: browser updates would reflect CSS and HTML advances; even Microsoft was shamed into some level of compliance. But the monopolistic tendencies of those on the standards committees, coupled with the academic approach some of those committees took, made it less and less likely that a given standard would find a functional implementation.

And there was that other perspective: the data-centric side of things. For many reasons, XML was in the right place at the right time in terms of data management and information exchange. In fact, the very year that XML became a standard, it also became the dominant way that machines (servers) talked to each other around the world. It was highly convenient for exchanging info: firewalls would tend to block anything but text over HTTP, XML markup would allow any sort of specification for data structures, and validation tools would ensure no info was lost.
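
The data-centric flavor looks nothing like a document; it is a record set in angle brackets. An invented payload, for illustration:

    <!-- typical machine-to-machine payload: rigid, regular, schema-validated -->
    <orders xmlns="urn:example:orders">
      <order id="10023" date="1998-11-02">
        <customer>ACME Corp</customer>
        <item sku="SP-100" qty="2" price="49.95"/>
      </order>
    </orders>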

In 1998, when you asked a programming candidate “what do you know about XML?” only the document-centric people would know anything. By 2000, everyone doing any serious programming “knew” about XML. Trouble was, they typically knew about “XML” only in the much easier-to-use, irrelevant-to-publishing sense.

And the standards now had to accommodate two crowds. The work of the W3C XML Schema Working Group, in particular, showed the disconnect. Should a schema be easily human-readable? What was the primary purpose of Schema? Goals were not shared by the document-centric and data-centric sides, and the data-centric side won out, as it has tended to dominate the XML space ever since. RELAX NG came about as an alternative, and if you contrast RELAX NG with W3C Schema, you will see the contrast between the power of a few brilliant individuals aligned in purity of purpose and the impotence of a committee with questionable motives and conflicting goals. Concurrent with a decline in the altruism of committee participants came the huge advance of data-centric XML and the disproportionate representation of that perspective.
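
The contrast shows up even in a trivial content model. Here is “a section is a title followed by one or more paragraphs,” first in RELAX NG (XML syntax) and then in W3C Schema, both sketched from the respective specs:

    <!-- RELAX NG: reads almost like the prose description -->
    <element name="section" xmlns="http://relaxng.org/ns/structure/1.0">
      <element name="title"><text/></element>
      <oneOrMore>
        <element name="p"><text/></element>
      </oneOrMore>
    </element>

    <!-- W3C Schema: the same model, wrapped in machinery -->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="section">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="title" type="xs:string"/>
            <xs:element name="p" type="xs:string" maxOccurs="unbounded"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>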

Ten years later, we find in the document-centric world that toolsets related to XML in a data sense (parsing, transforming, exchanging info) have made great leaps forward, but we are in many ways still stuck in the 1990s in terms of core authoring and publishing technologies. It is telling that descendants of the three great SGML authoring tools of 1995 (FrameMaker+SGML, Arbortext Epic, and SoftQuad’s Author/Editor) are, lo and behold, the leading three XML authoring tools in 2009.

There have been some slow-paced advances in document-centric XML standards and tool chains as well, especially the single bright light out there for us, the Darwin Information Typing Architecture (DITA), which came out of IBM like XML itself. Yet standards for rendition, XSL-FO and SVG especially, have not advanced along with core proprietary rendition technologies such as InDesign, Flash, or Silverlight, though all of these enjoy nicely copied underpinnings pillaged from the standards. More important, nothing has stepped in to replace the three core authoring tools: the “XML support” of Microsoft Word and Adobe InDesign, for example, does not approach the capabilities of a true XML authoring application. There is a proliferation of XML “editors,” but most of the new ones are appropriate for editing a WSDL file or an XML message (the data-centric forms of XML), not a full-fledged document.

Meanwhile, on the data-centric front, XML has simply permeated every aspect of computing. There are XML data types in database systems, XML features in most programming languages, XML configuration files at the heart of most applications, and XML-based Web Services available in countless flavors.

Document-centric XML is simply a deep challenge that will take more time (and probably more of a commercial incentive) to tackle. For the time being, structured authoring managed the XML way is still implemented mainly by very large organizations: the approach has “trickled down” from organizations the size of IBM to organizations the size of Adobe (which does, in fact, use DITA now), but there are no tool chains yet available that will bring it down much further. The significance of the W3C XML Schema Working Group’s failure to provide a functional specification supporting document-centric XML can hardly be overstated.

As long as content is not easily authored in a semantically rich, structured fashion, the vision of the semantic web will remain an illusion. When and if document-centric XML gets more attention from standards bodies and software vendors, human communications will become far more efficient and effective.


Welcome

Welcome to the first post of my new blog. I am Max L. Dunn. While there are plenty of other Max Dunns out there (I am often mistaken for Max S. Dunn, for example), I’m the one who co-founded Silicon Publishing, a company devoted to publishing solutions, back in 2000. We automate data-driven publishing, build graphic and layout software, and increasingly connect web and print publishing workflows. We’re immersed in Adobe technology (Adobe is both a partner and a client), focused most on Adobe InDesign Server, the connection of that composition engine to data, and the reconciliation of that technology with HTML5 and Adobe Flash.

My deep long-term interest is XML from a document-centric perspective. We put DITA into FrameMaker for Adobe back in Frame 7.2, after helping make 7.0 work with XML in the first place, and continued to help Adobe with DITA in Frame 8 and 9 as well. We also developed our own Frame/DITA plug-in with Leximation for those who are really serious about such things. At this point our semantic XML work doesn’t connect very directly to our Web/InDesign Server work; I expect one day it will. I co-wrote a chapter of the XML Handbook with Charles Goldfarb on WYSIWYG XML authoring, and realizing that vision in the InDesign/web world looks more attainable each year, slowly working its way onto the road map for our Silicon Designer product.

I am big on standards, in theory: I am owner of the SVG Developers’ Group, for example, and I have tons of experience with XSLT and XSL-FO. Yet the sad reality is that as of 2009, such standards are rarely used directly. Rather, we find them copied into proprietary “standards” by the large software companies we still depend on for software that actually works.

In this blog, I’m going to be sharing opinions, information, and stories of my life in publishing technology. Posts will range from opinionated rants to factual explanations of how to tackle the challenges we face in our day-to-day work. I hope to be joined by some in Silicon Publishing in these writing efforts.

I’d like to hear from you. Help guide this blog by posting your own comments, resources, knowledge, and opinions. If you have questions about web-to-print technologies, Adobe tools, or XML standards, let me know. I’ll do my best to answer them here.