In just two days, I will fly to London to attend IPEX 2014, where we will show Silicon Publishing technology to the world. This will be the first tradeshow that we have attended outside the US, and I only agreed to go because we have finally attained a level of quality in our products and services that is truly scalable for the global marketplace.
Ask an artist how long it takes to make a painting, and they will answer “5 minutes, and my entire lifetime.” It is the same with software development, which is truly both an art and a science. I will explain some of our lifetime, then, before summarizing what we will show at IPEX 2014.
When we started this company 14 years ago, we had passion and ambition, and we were hardly new to the business. Our prior experience at a 20-year-old database publishing company gave us some serious shoulders to stand on: our intense collaboration with Adobe from day one and our use of advanced Adobe technology gave us some wonderful core capability, and our connections to those defining standards for web, print, and information technology (in no small part due to the wonderful part of the world where we live and work) kept us in a position of seeing the next advances in technology from the outset, and even being able to help define the technological direction in some small ways.
Yet despite our passion and these advantages, we were new, young and quite naive about the software business. We learned business fundamentals as we went, and too often had to learn from real-world mistakes rather than proactive planning or education.
And we faced the bizarre trajectory of web and computing technology that is both the blessing and the curse of software development. There are standards, there are implementations of those standards, and there is an endless interplay between proprietary technology and standards-based technology. Technologies sometimes move at a shocking, disruptive rate, while at other times the pace is glacially slow, with an unpredictable course that is defined more by politics and coincidence than by any smooth sort of evolution.
In the 1990s we had seen web standards in their heyday: the HTML and XML specs, for example, made it appear that a benevolent standards body would generally prevail. By the time we started Silicon Publishing, the luster of standards was wearing off: Microsoft had exemplified the pollution from proprietary interests; standards like XLINK and XHTML had shown that even with the best intentions, committees can fail; W3C Schema had shown how committees with bad intentions can really fail; and in our niche we found that Open Source efforts could range from the sublime (Apache Web Server, Tomcat, MySQL) to the ridiculous (Batik, FOP) with those central to our domain almost always landing at the wrong end of the spectrum. Being bright-eyed and bushy-tailed, it took a while for this to fully dawn on us. We worked hard on standards-based approaches to web and print rendition while falling in love with the proprietary technology of Adobe InDesign in our main work, which was flowing data into print documents.
On this more practical side of our work, InDesign 1.0 did a fantastic job of producing even higher quality output than we had in the 1990s, although throughput was dramatically slower, given that Adobe at that time did not know the first thing about servers, but was an entirely desktop-centric company.
Given the amazing degree of automation that Adobe InDesign offered (something that no other product at the time offered), we were immediately drawn to experimenting with connecting it to standards-based web and print rendition technology. We had InDesign 1.5 round-tripping with XSL-FO and SVG, for example. XSL-FO had limitations as a standard and generally terrible implementations, but at the time we were still optimistic (especially when Adobe released the short-lived Document Server with FO support). We simply loved SVG but were depressed by the failure of browser companies to support it. In 2003 we wrote an application that flowed XHTML into InDesign templates to produce automated web and print output from the same source. Sadly, the process of print generation was to download XHTML from a web server and run it through an offline process. Our only real barrier to doing this online in real time was the InDesign license, although we hoped Adobe would also add some server-centric features.
We had been begging Adobe for an InDesign Server from the day we opened up InDesign 1.0. It took them 5 years to come up with the concept of a server version, which I am certain had nothing to do with our pleas. We were oddly well-connected with the FrameMaker group but un-connected to InDesign until we heard through our Adobe connections that a server form of the product was in the works. That was a key moment for us as a company, and I remember fondly meeting the great Whitney McCleary, who not only got us into the beta program but hired us to help announce the product.
We had randomly gotten into writing and we were approximately zero help to Whitney, who patiently rewrote everything we came up with. However, we were extremely enthusiastic about the product itself, and from that period on we have been extremely good friends with the wonderful people at Adobe who brought InDesign Server to life. To them, server wasn’t so much a product to focus on as something they felt obligated to release, as they knew such technology had practical use in a server context. The way Whitney explained it to me the first day we met: “we decided to release the technology to developers, but we also decided not to support it: solution providers like yourselves will provide the front line of support, and it isn’t much more than a ‘headless’ version of InDesign itself.” In other words, it was just an engine (precisely the same engine we’d been using in desktop form): higher-level constructs like “how do you round-trip this with web content?” and “how do you set up templates for database publishing?” were left entirely to us.
I wrote an article for InDesign Magazine scheduled for the release, in which I compared this advance to that of Gutenberg’s printing press some five centuries back. The InDesign Server release date was set for October 17, 2005 (I remembered this as the 16th anniversary of the Loma Prieta earthquake) and the place of announcement was set for the IFRA conference in Germany. IDS was earth-shaking in our universe, though at the time it was like a newborn baby that couldn’t quite speak yet. We hadn’t had sufficient advance notice to present solutions based on it at the time, and the companies that had been in the “pilot” program (we weren’t quite so honored, having just made the tail end of the beta) seemed to have very pedestrian offerings compared to our vision (even compared to what we had already done with desktop), so from our perspective IFRA was mainly a promise of things to come.
I think of IPEX 2014 as a long-delayed after-shock of IFRA 2005, where InDesign Server was announced. At this point in time, InDesign Server solutions are completely robust, proven beyond a shadow of a doubt to be the superior approach to database publishing and online editing of print documents.
Then it was an infant with immense potential, but now InDesign Server is a formidable technology that has broken through all obstacles of core features, scalability, and interplay with web standards, which themselves are finally starting to work as originally intended. Even our 14-year-old hobby, SVG, is now a respected rendition technology available to every device, and the subtlety with which IDS and these standards work together is beginning to surpass the proprietary technology that we had been forced to adopt when standards faltered in the early 2000s.
So what are we showing at IPEX 2014? Our three products, of course: Paginator, Designer, and Connector, and the two technologies we resell: InDesign Server, of course, and now the Adobe Digital Publishing Suite, a much more recent and highly related technology.
We are honored to be there in the booth of our favorite partner company, Grafenia, as the work they are bringing to market with their w3p product represents the culmination of both InDesign Server and our Silicon Designer technology, which is tightly coupled with it. They have deployed this IDS-based technology to many countries, and our plans for the coming year are likely to make it fairly ubiquitous around the world. This is the sort of long-term hope we had back in October of 2005. While we have focused on the largest InDesign Server solutions in the world for companies like Shutterfly, Nike, and Royal Caribbean, the Grafenia offerings bring it to even the smallest of businesses: there is no longer a need to settle for sub-standard print output from non-InDesign-based tools such as home-built PDF engines or 1990s software that butchered typography for the sake of speed. InDesign Server is now actually faster and more scalable than any of these approaches, a topic we are happy to discuss with anyone out of touch with recent advances in this domain.
Tony Rafferty of Grafenia actually wrote a book that they are giving away at the w3p booth. This is not marketing but true information: the most insightful book on web-to-print that I have seen in my 20 years in this domain.
Aaron and I will be at IPEX on Tuesday, Wednesday, and Thursday, happy to meet with colleagues who share our passion for advancing human communications through publishing technology.
Features of Silicon Designer, the flagship product from Silicon Publishing.
There is an explanation of what Silicon Designer is and how it works here.
There is an explanation of how it evolved here.
You can find out more about the people that created it here.
Thank you to all that have contributed to Silicon Publishing and to the evolution of this product over the years.
Learn to install a script once—and you’ll never have to do it again
I spent years at Adobe helping to develop, document, and popularize scripting in InDesign. I did this because I want to free creative people from the drudgery of most day-to-day graphic arts tasks (which I know well, having worked as an art director, graphic designer, typesetter, and general-purpose page layout lackey/slave). InDesign scripting gives graphic artists a way to automate the boring parts of page layout—which means you have more time to spend on the fun, creative parts of your work. InDesign scripting can both lower your stress level and help you get more sleep.
Now that I’m outside Adobe, I’m having a great time working with the tools that I helped create. At the same time, as I talk to InDesign users, I’m feeling that the job I started in the late 1990s—getting the word out about InDesign automation—is, at best, only half done. The majority of InDesign users still don’t know that scripting exists, or what it can do for them. They also don’t know how to install and run a script, much less how to write one.
If I’m giving myself a grade on the job I’ve done so far, it’s an “Incomplete.” It’s high time, therefore, to get back to work. I’ll be writing scripting-related blog posts for SPI as frequently as our (busy!) work schedule permits.
The InDesign scripting model is designed to be complete. Anything you can do with an InDesign document, scripting can do. Scripts can create and apply color swatches or paragraph styles, draw rectangles, ovals, text frames, and other page items. Scripts can resize the page, enter text, and place graphics.
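To give a taste of that completeness, here is a minimal sketch in ExtendScript, the JavaScript dialect InDesign runs. It must be run from InDesign’s Scripts panel (it is not a standalone program), and the swatch name, bounds, and text are just placeholder values for illustration:

```
// Minimal ExtendScript sketch: run from InDesign's Scripts panel.
var myDocument = app.documents.add();

// Create a process-color swatch (CMYK values are arbitrary examples).
var myColor = myDocument.colors.add({
    name: "Brick",
    model: ColorModel.PROCESS,
    colorValue: [0, 80, 80, 20]
});

// Draw a text frame on the first page and color its text.
var myPage = myDocument.pages.item(0);
var myTextFrame = myPage.textFrames.add({
    geometricBounds: ["6p", "6p", "18p", "30p"], // top, left, bottom, right
    contents: "Hello from InDesign scripting"
});
myTextFrame.texts.item(0).fillColor = myColor;

// Draw a rectangle filled with the same swatch.
myPage.rectangles.add({
    geometricBounds: ["20p", "6p", "26p", "30p"],
    fillColor: myColor
});
```

Everything here—documents, colors, pages, frames—is an object in the scripting model, which is why a script can reach anything you can reach through the user interface.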
If you’ve got a time-consuming, repetitive task that’s driving you crazy—moving hundreds of occurrences of a particular graphic by a specified amount in a long document, for example—it’s a prime candidate for automation. This is true even if you only expect to do the task once in your life.
There are lots of little things, too: take, for example, the process of cleaning up Word files you’ve received for placement in a layout. Typically, you search for double-spaces, tab characters at the beginning or end of lines, convert double-dashes to em dashes, and so on. You’re probably pretty good at it—but imagine never having to do that again. Scripting can make that possible—in fact, there’s a script that comes with InDesign that will do most of it for you.
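The logic of that clean-up is easy to see if we sketch it as plain regular-expression replacements in ordinary JavaScript. (Inside InDesign itself you would express each pattern through the `app.findGrepPreferences` and `app.changeGrepPreferences` objects and run the document’s `changeGrep()` method; the function below just shows the transformations.)

```javascript
// A sketch of typical Word-file clean-up as regex replacements.
// In InDesign these patterns would run through GREP find/change instead.
function cleanWordText(text) {
  return text
    .replace(/ {2,}/g, " ")      // collapse runs of spaces to one space
    .replace(/--/g, "\u2014")    // convert double hyphens to em dashes
    .replace(/^\t+|\t+$/gm, ""); // strip tabs at line starts and ends
}

// Example:
// cleanWordText("So  it  goes--on and on\t")
//   → "So it goes\u2014on and on"
```

Each pattern is one search-and-replace you would otherwise do by hand; chaining them is what turns a tedious ritual into a double-click.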
Finally, there are creative effects that would be difficult or impossible to achieve by other means. Again, InDesign comes with an example of this sort of script—the Neon.jsx script creates Illustrator-style blend effects from your InDesign objects.
Automation via scripting can improve your productivity, streamline your workflow, and provide a creative spark when you’re stuck for an idea.
First, though, you need to know that scripts appear in the Scripts panel (in InDesign CS6 and above, you can display the Scripts panel by choosing Window>Utilities>Scripts). In the Scripts panel, you’ll probably see two folder icons, “Application” and “User” (for the English version of the application—other languages may use different names). Inside the Application folder, you’ll probably see a folder named “Samples,” and inside that folder, you’ll see the sample scripts that are installed when you install InDesign. I say “probably” in the preceding text because the default scripts may have been removed by some “helpful” person from your IT department (to, you know, protect you from yourself).
To run a script, double-click the script name in the Scripts panel. Most of the scripts in the Samples folder contain simple user interface elements—dialog boxes and alerts—that can let you know more about what the script does. If you want to experiment with the sample scripts, do so in an empty document; most of these scripts add new page items or change text—you don’t want them to mess up a production document just before a deadline.
If you’ve made it this far, you know that InDesign scripting exists, and you have some idea of what it can do for you. The next thing you need to know is how to install and run a script. I’m going to go one better—I’ll show you how to install a script that you can use to automate the process of installing other scripts.
Here’s what you need to do:
When you return to InDesign, you’ll see the script in the Scripts panel.
This is the process for installing any script—just copy it into the Scripts Panel folder. This script, however, helps you avoid the trouble of clawing your way through files and folders in your operating system—it’ll automate the process of moving the script to the right place.
Now run the script. It won’t be obvious that the script has done anything—but it has. Display the Scripts panel menu again, and you’ll see three new menu options at the bottom of the menu: Install Script, Install Startup Script, and Remove Startup Script.
To use the script to install another script, follow these steps:
Assuming that the script installed correctly, you should now see it in the Scripts panel. If it doesn’t appear immediately, close and re-open the panel, and it should appear.
InDesign can run scripts as it starts up, which gives you a way to add features without having to run a script each time. Script-based menu customizations, such as the script we’ve just installed, are great candidates for startup scripts.
To try this out, let’s install the script we’ve been working with as a startup script. That way, the Scripts panel menu choices will always be available for use. To do this, follow these steps:
If the installation succeeded, InDesign will run the script every time that InDesign starts, and our custom Scripts panel menu options will always be available.
Note that not all scripts make good candidates for installation as startup scripts. Scripts that format text, for example, will not find any text to format immediately after InDesign starts, and will sometimes generate an error.
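One defensive pattern for a startup script is to defer its work until a document actually opens, rather than acting the moment InDesign launches. The sketch below uses InDesign’s standard "afterOpen" application event; the `#targetengine` directive keeps the listener alive after the startup script finishes:

```
// Defensive startup-script sketch (ExtendScript, runs inside InDesign).
// A persistent engine is needed so the event listener survives
// after the startup script itself has finished running.
#targetengine "session"

app.addEventListener("afterOpen", function (myEvent) {
    // myEvent.target is the document that was just opened;
    // any text- or document-formatting work can safely happen here,
    // because at this point a document is guaranteed to exist.
});
```

A simpler guard—checking `app.documents.length` before touching any document—works for scripts that only need to run once.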
If you install a script as a startup script and see an error, or if InDesign seems to take longer than usual to start, you’ll need to remove the startup script. To do this, follow these steps:
I use InDesign to draw parts for various musical electronics devices (robots and synthesizers) that I build as a hobby. Due to this (admittedly esoteric) practice, I spend a lot of time with the options on the Object>Paths, Object>Pathfinder, and Object>Convert Shape menus, as well as the AddPoints.jsx example script. While I could get at these options using various panels or keyboard shortcuts, I prefer to have them on the Context menu when I’m working with paths.
Here’s a bonus startup script for anyone doing a lot of drawing with InDesign. If you didn’t already install this above, you can download and unzip the script from here.
I’ll be back with more, as soon as I can find the time! This first post might seem pretty basic, but I’ve got more scripting tips and tricks—from simple to complicated—to share.
This Thursday was a great day for the Silicon Designer team. Most of us in the greater Seattle area have met each other, but we hadn’t met Jorge Solis in person. Jorge has been working with the Silicon Designer team for the past year, and he lives in Rosario, Argentina! He’s been a driving force in getting our new HTML5 Silicon Designer work underway, as well as implementing specific features for clients in Flex.
While we have a strong culture of being available on Skype during the work day, meeting the people you work with in person helps. Now I can picture Jorge’s sense of humor as well as the way that he talks. When we set up the meeting, I unknowingly used slang. I told him to meet us at noon, which he thought meant “afternoon”.
Noticing when a miscommunication happens, then correcting it quickly, is another key part of a successful work culture. In that spirit, Jorge sent a message that said, “Pardon… but what time exactly is noon?” I was able to explain not just that noon is twelve o’clock, but that it was in two hours in this time zone. I gave him my cell number to call just in case. That gave him another path to reach the group easily, one that turned out to be important later when Seattle construction teamed up with the difference between 1st Avenue and 1st Avenue South to get Jorge a bit lost. He showed up, and it was great to have everyone from the Seattle area along with Max present to meet Jorge.
A demo is worth a thousand words. Thus Jing, Skype, Jenkins, Pivotal Tracker, and our internal Admin tool are all really important technologies that we use to work together effectively. We take video and screenshots with Jing. We track requirements, bugs, and pending work, supplemented by screenshots and video, in Pivotal Tracker. We use Jenkins for continuous integration as well as for creating on-demand builds in some cases. We use the internal admin tool to test client projects, and to isolate issues when needed. We also use multiple meeting technologies. ConnectNow, MeetingPlace, Adobe Connect, Skype, and a few other technologies help us to speak to one another live, resolve issues in real time, and give demos to each other and to customers.
It’s not enough to have Skype. While we work in different time zones in some cases, being on Skype is part of being at work. It is as important as checking your email. You generally can’t go all day without checking your communication tools and work effectively as a team. Yet we have an occasional “special programmer” who refuses Skype, and we respect that.
More important than standing up, or than setting a timer, is allowing anyone on the team to speak up when a meeting is getting long or off track. One of the best ways I’ve ever seen this expressed is by Max Dunn, when he said, “We have eight developers in this meeting. Is it worth having all of them away from solving customer issues right now?” How much better would your team be if your boss had a running total in front of him of how much time a meeting was eating up? We do meet, but there is a customer awareness present that cuts through any perceived hierarchy.
Not everyone is ideally suited to working remotely. Some folks must be in an office to do their best work. Others may need some practice, reminders, or tools. Max is great at being optimistic, expecting the best, and giving prompt feedback if change is needed.
One thing that was very difficult for me to adjust to was not hearing much feedback when things were going well. I remember one particular time at home I was asked how my work was going, and I said, “I’m honestly not sure. I see that I’m now able to do more testing than I was, but Max never tells me that I’m doing well or anything, so maybe I am just sucking at my job?” My stepdad, who owns a business, looked at me and said, “So Max does payroll? And he’s the one who hired you? And you got paid?” I said, “Yes.” My stepdad said, “It isn’t school, Lanette. He’s busy trying to run a company. If you got paid, you’re doing a good job, or he’d be talking to you about what you need to do and firing you. You can’t expect to have someone coddle or micromanage you. You either have someone there who is doing actual work, or you don’t.”
So, I do get feedback, but there is no hierarchy. That comes with many great points, and it also has its downsides. I remember the time we tried an intern. It’s very difficult to train an intern in person. Remotely? If it’s possible, I’ve never seen it happen. I simply think that it’s something that is much easier in person. If your entire job is to teach someone remotely, it may be possible. However, if you are trying to do work yourself? Just teaching that other person takes so much time, it’s nearly impossible to give them enough feedback to improve, keep them busy and encouraged, while still contributing in your own role.
I’ve worked at large companies where you can do job sharing, and move from one role to another. I’ve never seen this happen at Silicon Publishing, and I’d be shocked if someone did change roles, unless they’d had experience in both roles before. Changing to a new role is not something I’d recommend for a remote worker. Part of picking up how a job is done well entails understanding the flow: how the person gets into the zone of doing work. It also means understanding how they prioritize their work, and share their results. You can’t get a good picture of flow when you aren’t near the person.
To be a good remote employee, you need to know your job. You have to be willing to find out from others what there is to do. You need a bias towards action. If you wait around for someone else to tell you what to do, you are worse than useless to the team. At all times, we each need to work to make the team better. That means never saying, “That isn’t in my job description.”
You just try something else! This is what I love about our team culture. The lack of politics is rare and I treasure it. It is very common for employees to start as contractors. This is because working remotely isn’t a good fit for everyone. We have some folks who join us on occasion for projects, but do not work on a full time basis. I was a part time contract worker who worked a different job for a time.
The number one reason I love working at Silicon Publishing is the honesty, optimism, and goodness of Max and Alissa. The founders don’t come up with dorky mission statements and alienate their employees. Instead, they understand the technical direction and work to improve it. What would your company be like if the CEO, founders, CTO, and everyone at the highest level not only could code, but did code? They wouldn’t blindly ask you to do something that isn’t remotely possible. They wouldn’t overlook the workhorse to reward the show pony. We do not work on “Stretch Goals” or anything outside of daily work. Our entire job is to make products for clients. We are too busy doing that to have a political sideshow. I believe that a small part of this is due to working at a smaller company. A larger part of it is working for a company that doesn’t have managers. We have team members, and we have to add value for the customer.
Max is serious about a positive working environment. If you are speaking negatively without contributing any useful solutions, he won’t hesitate to bring it to your attention. At first I didn’t understand why he was so adamant about this. Now that I’ve had time to see the results, I realize that it’s not the power of positive thinking alone that makes the difference. It’s the commitment to envisioning action in order to make the positive result happen. Complaining without a better idea is a fruitless activity. Coming up with options and deciding which to try first is a much more constructive way to deal with the reality that a change is necessary.
Quite simply, we need deep expertise. The work that we do isn’t the type of work where you can hire a new developer and quickly train them. We have some very deep expertise in Adobe InDesign, from the scripting, plug-in, typography, layout, and document perspectives, and this domain is in fact somewhat concentrated in Seattle, as several of us worked here to create Adobe InDesign. It might be nice to all be in one place! It would be easier to train new employees in one office. We could benefit from in-person pair programming, and the subtle messages that are communicated in body language versus the occasional emoticon. However, it would limit our potential team members to one city. Even in the two cities with the most Silicon Publishing employees, we don’t have every skill that we need. That’s why we have the best developers for the task, and we meet up in person several times a year.
In this post I am going to explain how Silicon Designer works in much more detail than we have previously divulged since its creation in 2009. We are at a point in the evolution of this product that I am truly proud of, and I am deeply grateful to our incredibly talented developers and other participants in its success. Go here for some history of how it came about: in this post we will talk about what it is and how it works.
Silicon Designer is a highly customizable, end-to-end solution for editing documents online. If you make a Photobook with Shutterfly’s “Custom Path” application (the primary way Photobooks are created at Shutterfly), you are using Silicon Designer. If you make a business card at Printing.com, Printed.com, or many other sites, you are also using Silicon Designer. Here Almira demonstrates the concept, with one of the earliest implementations of the product.
This is by design. The intent is to facilitate the entire spectrum of editing experiences, across the entire spectrum of document types.
As far as user interfaces, we have clients who want constrained or form-based editing at one extreme, and clients who want “InDesign on the Web” at the other. Our core process doesn’t care: user interface is neatly abstracted to be something that we can easily change quite radically. We don’t tell our clients what to do, generally, but instead we let them define the UI they want, then configure the product to support it. You can look at this blog post to see some of the spectrum of UI demands we encounter, and the considerations that go into choosing the type of interface you want for your end users.
No, we’re not there yet, and it will be years. “All” is kind of a high bar to aim at, but the point is that our document model is extremely generic and all-purpose at its core.
We have supported more than 1,000 document types, from greeting cards to door hangers to dimensional products (with 3D interactive preview) to ads to large format signage to brochures to newsletters to school handbooks to insurance documents to book covers to mail merges to variable data campaigns to… you can almost name it.
It is interesting to me that HTML5 is both a curse and a blessing in terms of this product. Flash had some wonderful features related to text, but they were truly abandoned by Adobe when they gave up on Flash, and old-school HTML had always been better than Flash in terms of tables, lists, and several other long document features. We pulled code from our early 2000s HTML editors and are now using HTML5 to provide the same sort of long document features, this time with the pixel-perfect, pagination-perfect rendition that has only recently become possible.
At the moment our sweet spot is business collateral and consumer personalization. In terms of technical documents, books, and complex variable data, we have the fundamentals and can customize for nearly anything. The final frontier is really interactive content, a wonderfully changing game as devices proliferate and standards evolve.
The functionality of Silicon Designer is to “round-trip” a document, meaning it starts in InDesign, it turns into a web document, which (after edits) then turns back into an InDesign document (using InDesign Server). Back in the early 2000s, we defined an XML structure to support online editing, which described a document sufficiently to enable an editing experience on the web. We wrote code to interrogate a document set up in InDesign and emit this XML. Our web application would then ingest this XML, render the document on the web where users could edit the content, and save the results of their editing (again in this same XML structure) back to the server. The second piece of InDesign code would load the original InDesign document, and make changes based on the XML that had come from the editing experience.
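To make the round trip concrete, here is an invented fragment in the spirit of that XML. SDXML itself is proprietary, and every element and attribute name below is hypothetical, made up purely for illustration: the template declares which frames are editable, the web editor writes user content back into the same structure, and the server-side InDesign code applies it to the document.

```xml
<!-- Hypothetical sketch only: real SDXML element names differ. -->
<document template="postcard.indd">
  <textFrame id="headline" editable="true">
    <!-- user-edited text, saved back from the web editor -->
    <content>Grand Opening!</content>
  </textFrame>
  <imageFrame id="photo" editable="true">
    <!-- low-res proxy shown on the web; high-res retained for print -->
    <image proxy="proxies/photo_72dpi.jpg" print="assets/photo_300dpi.tif"/>
  </imageFrame>
</document>
```

The key property is that the same structure travels in both directions: out of InDesign to drive the editor, and back into InDesign Server to drive final composition.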
The other thing we had to do was manage the different sorts of image required by web vs. print. In the same process that emits XML from InDesign, we also generated “proxy images”: low-resolution images appropriate to a web site. We retained the original high resolution versions for final print output.
While our process has not changed much, the XML structure has grown steadily in complexity over time, getting more and more comprehensive in terms of document features. We were minimalistic in our initial approach, using the InDesign document as something of a crutch. We really only needed XML for those things that would be edited: the rest could be left in InDesign, since the server would re-load that document and change the images or text the user had altered. After years of enhancing the model, we have reached the point where today, the entire back-end composition process no longer needs an InDesign document. It does create a new one, but it does so entirely from the XML and referenced images.
This actually came as a surprise to me: I was telling a client that we had to have the InDesign files on the server, and my colleague Ole informed me, “no, not anymore.” It pays to hire people smarter than yourself.
Around the time our document structure reached industrial quality, it got a name. In 2010 we named our XML “SDXML,” as in “Silicon Designer XML,” and the name helps immensely in describing how this works. There are other XML models for documents, and we probably know all of them at least a little bit. InDesign used to have INX, then it got IDML, which happens to have been invented in large part by our staff who were then at Adobe. There is tremendous confusion about IDML and its use: a naïve programmer will think that IDML is a good model to use for web-to-print solutions. No, it is not. SDXML is far different.
In the first place, we need to support metadata at different levels (on page objects, on text ranges, on the document itself) related to how things are edited. So there is more to it than just the rendition of the document in a design sense. But as much as we love InDesign, we are not InDesign-centric in our approach to this. SDXML is about describing a document in a way that allows multiple rendition engines to render it, and we have flowed SDXML into five different engines so far (InDesign, Illustrator, Scene7, Flash, and HTML5). SDXML is rendition engine-agnostic. It does not care what renders it.
There are three main parts of a Silicon Designer implementation:
As you can see below, SDXML is critical to all three.
In the authoring phase, designers use what we call our “Template Editor,” a plugin for Adobe InDesign, to apply metadata to InDesign templates. Like the web front end, the Template Editor is a single codebase for all clients, but it is highly configurable and so can behave very differently from one client to the next. On loading the InDesign document, the Template Editor inspects the objects in the document and lists them, providing an interface through which any metadata can be applied to any object.
Markup can indicate almost anything required for a document editing experience: the only universal element that needs to be defined, for pretty much everyone, is which objects are editable. Beyond that, you can indicate almost anything you could dream of: should this field be edited with a date-picker control? Should this object be movable, scalable, rotatable, colorizable? Which image gallery applies to a photo? What are the color palette and font list for the document? What form of copy-fit logic applies to a particular text frame? The Template Editor lets designers define variables, which can be vector or raster images, entire text frames, or ranges of inline text. A large range of use cases has made the Template Editor extremely flexible, yet we turn off unneeded features through configuration and preferences, so it looks different for different customers.
When the designer is done defining how the template will behave in the web experience, they choose “Package” from the flyout menu of the Template Editor panel, and the magic happens: our code generates SDXML reflecting all of their changes, while creating proxy images of the document’s links for web use. This is all zipped up and uploaded to the web server, where the template becomes available for online editing.
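Conceptually, the packaging step is just bundling: the generated XML plus proxy assets in a single archive. A minimal sketch of the idea in Python follows; the file names and archive layout are invented, and the real package format is not documented here.

```python
import io
import zipfile

def package_template(sdxml: str, proxies: dict) -> bytes:
    """Bundle generated SDXML and proxy images into one zip archive,
    ready to upload to the web server. (A sketch of the concept only;
    the actual Silicon Designer package format may differ.)"""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("template.sdxml", sdxml)
        for name, data in proxies.items():
            zf.writestr(f"proxies/{name}", data)
    return buf.getvalue()

# A tiny stand-in template with one proxy image.
bundle = package_template("<document/>", {"headline.png": b"\x89PNG..."})
with zipfile.ZipFile(io.BytesIO(bundle)) as zf:
    names = zf.namelist()
print(names)
```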
The runtime environment is where the user does the editing, through an interface that we customize for each client (like the Template Editor, this is done in a layer quite independent of the core code base, which is now common for all clients).
The runtime itself has three components: the front end (Flash-based for old browsers, HTML5-based for new ones); the services layer; and the back end, where SDXML is stored and which may be used for server-based rendition (e.g., for page or document thumbnails).
SDXML is critical to this process: most of the service calls load or save SDXML. Since the editing all happens in real time on the client, there is no absolute need for any composition calls at all, yet some clients want thumbnails generated on the server, or want high-resolution print output produced in real time.
When the user has completed the editing session and is ready to purchase what they’ve designed, there is a back-end composition process. The SDXML reflecting their document will be loaded in InDesign Server and rendered: metadata in the SDXML may also define production processes such as binding, stock selection, die cutting, or spot color. The composition process is typically 100% automated.
We are happy to have created this product and we look forward to continuing its evolution. As I said elsewhere, it grew organically; it was not created the way startups create software products these days. Instead, it came after years and years of building similar solutions over and over. Now that we have attained the level of quality we were aiming at, we intend to spend more time sharing with the community what we have learned from this experience. It was certainly the wonderful programming community that made this possible, and we are very thankful to the cool people who automate InDesign around the world, and to those who moved web standards forward (eventually) to open up the HTML5 frontiers now before us.
When I began working at Silicon Publishing, I was excited to bring testing to the company, thinking that agile testing would give developers faster feedback as they wrote code! I knew from past experience that this could help us deliver working software to our customers more efficiently.
However, I did not yet realize that I could provide something much better than test results to developers, if I would stop assuming that everyone approaches problems from the same perspective. Doing testing for developers can help them by giving them an extra set of eyes and hands to bring them results, but that is not the same as teaching those developers how to better interpret results, or how to ask more questions of the design before implementing the code. Really testing as a team means sharing test ideas earlier in the process without interrupting important technical considerations that are informed by factors outside of testing, and that is a hard balance to strike. I’d like everyone on the team to understand some testing basics, but it isn’t necessary for every developer to know every testing tool. A developer only needs to know what kinds of tests can yield useful data, and then we can work together as a team to figure out which tests make sense to run given the amount of time we have.
Who is responsible for the quality of our software?
We are! The entire team that makes it. Not just the tester or just the developer. Not the product owner, and not the person who promised it would be ready with X features on Y date. We all have a part in creating software that is high-quality and useful today, and it is in all of our best interests to also have quality in the code itself: sustainable, understandable, logical code that can stand up to change and repurposing.
Beyond the Happy Path
The one path that developers usually cover is the type of test we call a “confirmation” or “happy path” test. The developer writes the code, and then uses the positive test case to see if that code “works.” There are formal approaches to this sort of testing: if you write the confirmation test first and see that it fails before you write the code and passes afterwards, then you have confirmed the happy path with an automated test. You may also use a tool like Cucumber to do Behaviour-Driven Development (BDD), which can give you a longer string of positive test cases that confirm more than one piece of code. It is possible to use these techniques and tools to do other types of testing, but you do not get those tests automatically just for using the techniques or tools.
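The fail-first, then-pass confirmation cycle can be sketched in a few lines of Python. The `apply_discount` function below is an invented stand-in, not code from any real product; in actual test-first practice you would run the test before writing the implementation and watch it fail.

```python
# A sketch of the "happy path" confirmation test. In red/green order:
# write test_happy_path first, see it fail (no implementation yet),
# then write apply_discount and see it pass.

def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    return round(price * (1 - percent / 100.0), 2)

def test_happy_path():
    # The single positive case: valid inputs, expected output.
    assert apply_discount(100.0, 10) == 90.0

# What the happy path does NOT cover: a negative percent, a percent
# over 100, a None price. Those require other types of tests, and you
# don't get them for free just by using a test-first technique.
test_happy_path()
print("happy path confirmed")
```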
Some of the basic types of tests beyond “happy path” or “confirmation” are listed here, because I plan to go into detail on how a developer can understand and use the basics to better communicate with testers, or to consider how their own code might behave before they are able to run the first test.
I’m going to be sharing a testing thought weekly that I think can help developers in ways they may not have considered. If you read this list and see a glaring omission, please share it! This is a list in progress, and like everything we achieve on an agile team, we are hoping for improvement now that will lead us toward perfection eventually.
Think about nothing. When nothing is entered, should it overwrite something? Should there be an error because we require “something”? Should we disable the mechanism by which a user enters data, dimming the OK button until they enter a name and password? Should we highlight what is missing until the data fits the criteria we accept? Or does nothing mean the number 0? Does it mean erase what was there before? Does it mean refresh the page? Does it mean a space? Is a space a kind of character, or does it simply mean “not a number”? Is it null? Or is it nothing? If you aren’t specific about it, nothing can become the something of nightmares.
Imagine this scenario: You were in the hospital overnight being treated for a possible heart attack. It turns out that it was a panic attack. This happens often in the modern world. We are designed to run away from scary animals that can bite us, but it’s seen as rude to run away from your car when someone cuts you off in traffic, so the stress builds up. Since your daily commute is part of your anxiety, you’ve now moved to the city, and are biking to work as well as doing the other recommended things to manage your anxiety. You’ve been notified that your insurance paid some of the bill, but now you have to pay the remaining amount. You go to the website, which allows you to update your address. You log in, and see that your new address has already been uploaded! How cool: your change-of-address form must have alerted the hospital. So you bring up the edit form, leave it blank, meaning “no change,” and accept it. Does your address get removed from the database? Or does the current information stay unchanged? What if you accidentally hit the space bar or the tab key when you didn’t expect it? Will you get any warning? This is the kind of scenario where nothing suddenly matters. It becomes something. What if your sister-in-law works in billing? Can she change your amount owed to zero? That is a totally different type of test; I think permissions testing and checks and balances fit more neatly into the “security testing” bucket. But for now, I hope that regardless of your healthcare beliefs (should it be state-sponsored or free-market?), other thoughts can be put aside to consider the meaning of nothing.
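One way code can make “nothing” unambiguous is to reserve distinct meanings for each kind of emptiness. The sketch below uses Python and invented field names for the hospital-billing example: an unsubmitted field (`None`) means “no change,” while a blank or whitespace-only string is rejected outright instead of silently erasing data.

```python
# Hypothetical address-update logic. None = field untouched ("no change");
# empty or whitespace-only input is an error, never an overwrite.

def update_address(record, street=None, city=None):
    """Apply only the fields the user actually changed."""
    updates = {"street": street, "city": city}
    for field, value in updates.items():
        if value is None:
            continue  # field not submitted: leave it alone
        if value.strip() == "":
            # An accidental space bar or tab press lands here instead
            # of wiping out the stored value.
            raise ValueError(f"{field} cannot be blank or whitespace")
        record[field] = value.strip()
    return record

record = {"street": "12 Main St", "city": "Springfield"}
update_address(record, city="Portland")   # street is left unchanged
print(record)

try:
    update_address(record, street=" ")    # stray space bar press
except ValueError as err:
    print("rejected:", err)
```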
Let me tell you a bit about Jenkins and why I’m such a superfan of it! But wait: am I a big fan of Hudson, because Hudson was Jenkins and Jenkins will be Hudson? Is that so? Let me share a bit of what I’ve learned on the topic of tool name changes this week! I thought all I had to share was a short story of builds & tests.
Both Hudson and Jenkins have a butler as their icon. The intent may be to show you the fine service you’ll be getting from the tool. Just like all good help, even a trained Butler is going to need some direction and orientation to serve your specific needs well.
In 2011, when we first started using Hudson, the open source project split into two: Jenkins would remain the free, open source product, while Hudson would move forward with under-the-surface code changes driven by a company.
Now the projects are similar, and both are open source again. From what I know so far, it seems that Jenkins is more widely used with more plug-ins at the moment, and Hudson has the newer build.
Jenkins—The Tool Formerly Known as Hudson: a fork of Hudson, and for the past year or so a free “continuous integration” tool.
Hudson—The original butler in the house: also a “continuous integration” tool, very similar to Jenkins. It was a paid tool for a while but is now free again, with new code under the covers and a higher build number than Jenkins.
Hudson and Jenkins are brothers from the same mother! The future? Depends on who you ask. Without anyone telling Jenkins “I am your faaaaahthah,” I’ll refer you to John Ferguson Smart, whom I trust on these matters, and warn you kindly that this topic has been closed as “not constructive” on StackOverflow.
From our team’s experience so far, Jenkins is serving our needs well. We upgraded from Hudson when it went closed source, and we will continue using Jenkins. We happily used Hudson before the name change. We now happily use Jenkins. We are cool with the entire Butler family. What it comes down to for us is getting quality builds out to clients. We don’t want to make big changes unless there is a valid business reason.
Get the appropriate backup plug-in for whichever butler you prefer, then create and test your backups, in a location other than where your currently running configuration lives. This kind of friendly advice sounds very obvious. So obvious that an experienced team would never put it off, right? Let’s just say that should the great Cloud Outage of 2014 happen on your project, it’s very nice to have a working backup. Don’t assume that it works. Test your backups to be certain that they work.
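Testing a backup means restoring it somewhere disposable and checking that what you depend on actually came back. Here is a minimal sketch of that idea in Python; the archive layout and file names are invented stand-ins, not the output of any particular Jenkins backup plug-in.

```python
import os
import tarfile
import tempfile

def verify_backup(archive_path, required_files):
    """Restore a backup archive into a scratch directory and confirm
    that the files we depend on actually exist after the restore.
    Returns the list of missing files (empty = backup is good)."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive_path) as tar:
            tar.extractall(scratch)
        missing = [f for f in required_files
                   if not os.path.exists(os.path.join(scratch, f))]
    return missing

# Build a tiny stand-in "CI config" backup, then verify it restores.
with tempfile.TemporaryDirectory() as d:
    cfg = os.path.join(d, "config.xml")
    with open(cfg, "w") as f:
        f.write("<hudson/>")
    archive = os.path.join(d, "backup.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(cfg, arcname="config.xml")
    missing_files = verify_backup(archive, ["config.xml"])
print(missing_files)
```

The key design choice is that the check exercises the restore path itself, not just the existence of the archive file.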
Project Name Whiplash
Why is this mess of multiple project names & teams happening again? Didn’t we just go through the same thing with the merging & divorcing of web testing technologies over the past five years? I’m guessing it’s for the same reason there are 87 frameworks for doing anything in software testing, yet none of them is complete: there is no honor, glory, or reward worth losing sleep over (for most of us) in maintaining an existing framework. Still, to those who so kindly contribute plug-ins, testing, and even fixes for the technologies that help us have testable builds with meaningful status that we can mark & share with people: this is an awesome gift, so thank you.
As for why CS students create a new framework but get tired of maintaining it for free once they have a job? I believe this is caused by “market conditions,” or basic supply and demand. It is one of the few disadvantages of using free software, and it goes with the territory. Consider the price: when the tool is free, it is often only a free start. In the past, thousands of dollars would be spent, and an expensive toolkit might sit on the company shelf unused for years! Considering we used to script everything by hand with the minimal tools available just to run some pretty simple tests, and now we can record & play back for free, with video? Sometimes you have to be thankful you got started for free. Still, let me know when the Great Incentive for Tooling Technology Cohesiveness Initiative comes along. Until then, we may not find the one be-all-end-all technology for all builds, integration, and testing that achieves Total Market Domination and answers all of our cyber dreams.
Jenkins and Hudson are both (will one change its name to a symbol?) continuous integration systems that can run tests, create builds, and help show the status of those builds.
First, let me tell you that there are many companies that do “continuous integration” by the book. By many, I mean more than five, but certainly not most companies making software. In theory, this is what continuous integration looks like.
Such results are possible if you have all of the right tests, test environments, and processes in place, and assuming that these stay in place even when faced with short-term needs to break process for critical updates or issues.
Now that you know what continuous integration is supposed to be, please know that many teams run a hybrid of processes to help them meet the needs of shorter and more agile cycles. Even if you don’t do picture-perfect continuous integration, Jenkins has plenty of capabilities that are useful and far more attainable for most companies and software teams than actual by-the-book continuous integration. We try to focus on progress toward meeting our clients’ needs more quickly rather than on having a perfect process, and that is nothing to be ashamed of. Fully automated continuous integration is a cool thing to have, and certainly useful, but in some cases it is not only expensive to achieve but difficult to sustain.
Jenkins and Hudson are modular, independently useful steps toward continuous integration, if you have the sort of project where fast, frequent changes will continue over a long period of time. I’m very happy to report that although we don’t have everything automated to the extent we’d like, just being able to promote builds to different test environments has helped us turn around quality builds for our clients more quickly, and that progress is exactly what we are looking for.
famo.us is a 2.5-year-old Silicon Valley startup that claims to have solved the performance challenges of HTML5.
“Performance challenges?” you might ask, but only if you hadn’t yet heard the tales of Facebook and LinkedIn doing an about-face on HTML5 in favor of native applications. As I blogged about a year ago, HTML5 has had mixed results in the wild, driving many to adopt native or hybrid native/HTML5 strategies. As I discussed in describing the event where I first encountered famo.us, the classic example of poor HTML5 performance is the scrollview. Quoting Trunal Bhanse of LinkedIn:
“Mobile devices have less memory and CPU power compared to Desktop computers. If you render a very long list in the HTML, you run the risk of crashing the device. This makes it challenging to build large, interactive HTML5 apps for mobile devices. Native technologies provide UITableViewController to build long, infinite scrolling lists. UITableView contains reusable UITableViewCells which are optimized for memory, performance and responsiveness. For HTML5, we did not have any solution. So we set out to build one!”
The article by Bhanse is a great example of the hurdles one has to go through to create an experience with HTML5. Shouldn’t something like this be easy?
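The core idea behind a recycling scrollview like UITableView is simple to state, even if making it fast in a browser is not: render only the rows in or near the viewport, so memory stays bounded no matter how long the list grows. Here is a toy sketch of the window computation, in Python for brevity (a real implementation would be JavaScript driving the DOM, and the function name is invented):

```python
# Compute which rows of a long fixed-height list are worth rendering,
# given the current scroll position. Everything outside this window
# can be recycled instead of kept alive.

def visible_range(scroll_top, viewport_height, row_height,
                  total_rows, buffer=2):
    """Return (first, last) row indices to render, with a small
    buffer of off-screen rows above and below the viewport."""
    first = max(0, scroll_top // row_height - buffer)
    last = min(total_rows,
               (scroll_top + viewport_height) // row_height + 1 + buffer)
    return first, last

# 10,000 rows in the data, but only a handful exist on screen at once.
first, last = visible_range(scroll_top=4800, viewport_height=600,
                            row_height=48, total_rows=10_000)
print(first, last)  # a window of ~17 rows, not 10,000
```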
The famo.us founders have explained in numerous presentations how they went about creating their own rendering engine, and have shown impressive demos. Latest reports are that the famo.us library has four components: a rendering engine, a physics engine, a gesture engine for input, and an output engine.
famo.us says they plan to open source the entire library under the Mozilla Public License Version 2, some time in 2014.
While the general response to famo.us has been an enthusiastic clamor from developers to join the beta (70,000 have reportedly signed up), and there is certainly rapt attention at developer conferences and meetups, the way the company is promoting this technology has rubbed many in the development community the wrong way. Several things seem to have triggered skepticism.
Steve Newcomb is a very passionate person. He talks with a style that echoes Steve Jobs: his goals are nothing short of changing the world. A seasoned entrepreneur, Newcomb has written essays about “Cult Creation” as a metaphor for his company- and team-building success. From his LinkedIn profile description of his work at famo.us:
“Microsoft and Apple owned the OS, Oracle owned the database, and Google owned the search engine, but no one has ever owned the UI layer. Whoever does own it for mobile devices will own something insanely valuable – every tap event that exists for each user. Imagine the company that owns the UI layer on top of Facebook, Twitter, LinkedIn and Gmail, that would enable that company to build the first unified social graph.”
Perhaps there is a bit more than saving the world on his agenda…
When I saw Newcomb speak in San Francisco, he told a story of building a computer with his father, and the moment of joy when typing a “k” key on the keyboard made a letter appear on the screen. This was perhaps a perfect metaphor for his mighty framework coming together, but it was also eerily similar to a scene in the movie “Jobs.” It is sometimes hard to tell where the genuine technology passion ends and the hype begins.
Newcomb is a consummate salesperson, and when he describes the technology, he sometimes makes statements that are technically inaccurate. One huge example is his oft-repeated claim that famo.us talks to the GPU directly. For example, from a VentureBeat interview:
Technically, the conversation is not so direct (see this presentation or this explanation for the more granular picture). The “direct to GPU” message may work with investors, but this sort of thing does not work as a sound bite with developers; instead, it triggers their BS meters. In Newcomb’s defense, he has provided a more detailed grounding in reality in his more in-depth presentations.
“16,000 developers have signed up for the beta, but ‘we are not letting any of them touch anything yet.’”
What is the point of a beta? The lack of anything tangible for the development community to test certainly sets famo.us at square zero in terms of developer adoption, whether or not they have done anything meaningful for web development.
So, the “traditional” standards-based approaches were designed for documents, not apps, and we must therefore do something different. Newcomb has outlined a vision of a jQuery-like, accessible-to-mere-mortals approach to such an API.
That sounds absolutely great, once we get past abandoning standards 20 years in the making, but an elegant, human-usable API is a goal orthogonal to the performance work they have demonstrated. It would seem that if famo.us were serious about such a goal, they would engage some of those 70,000 sign-ups with actual code sooner rather than later.
Funny that Apple, once perceived to have an “underdog” status, has now assumed the stature of its old nemesis, Microsoft, as a champion of delaying and impeding standards (this is the scene in Animal Farm where Napoleon stands up). This can’t last: Apple will be forced, probably quite soon, to support WebGL. It is, after all, a simple flag to switch; the support is there under the hood.
It would be supreme irony if this switch happened before famo.us releases any code. Certainly the “direct to GPU” sort of behavior can be achieved in raw WebGL. A physics engine is a nice-to-have, but WebGL alone, once functional on mobile, may offer a performant scrollview and the core capabilities touted by famo.us, especially as open source libraries tested by the development community attain jQuery-like power.
I sincerely hope that famo.us is as great as they imply, and that the code will be available soon. There is simply insufficient data to assess the value of this framework at this time.