It is an exciting time, again, for those of us who believe in the potential of web applications. The two powers that currently matter in the tablet space, Apple and Google, are both poised to give web apps a level of viability beyond anything formerly possible.
Are the two halves of the duopoly of mobile in a good mood or something? I could write another blog post about why this is coming about, but I care much more about the incredible opportunity it affords us developers, and its potentially huge benefit to human communications.
We can imagine that, within months, web apps on the dominant mobile platforms will – at least for those users lucky enough to afford “latest and greatest” versions of software for their devices – attain a greater-than-previously-possible level of performance and user experience.
Let me back up a moment: I am using the term “Web App” in a very specific way. I mean it just as you find an “iOS Web App” described by Apple here and here: an application built using HTML and other web standards that behaves much like a native app – it has its own icon on the home screen and fills the entire screen, without the browser chrome. If you ever see a web page asking you to “add to homescreen,” that is what I am talking about.
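As a concrete illustration, Apple’s documented way of opting a page into this standalone behavior is a handful of meta tags and a touch icon; the icon path below is just a placeholder:

```html
<!-- Run full-screen, without browser chrome, when launched from the home screen -->
<meta name="apple-mobile-web-app-capable" content="yes">
<!-- Style of the iOS status bar while in standalone mode -->
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<!-- Home screen icon (placeholder path) -->
<link rel="apple-touch-icon" href="/touch-icon.png">
```

With these in place, tapping “Add to Home Screen” in Safari produces an icon that launches the page as its own full-screen app.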
Web apps are an alternative to native Android and iOS apps, which have to be coded targeting a specific platform, and which need to be distributed through a store such as the Apple App Store or Google Play.
Another type of app likely to benefit from Apple’s recent move is the “Hybrid App”: a form of native app that is also distributed through stores, but built from a standards-based starting point. Hybrid apps are essentially web apps that are “wrapped” into native apps with a tool like Adobe PhoneGap or Apache Cordova.
When web apps for the iPhone came out in 2007, they were actually the only way developers could make an app.
Once Apple gave developers tools to make native apps, creating the wildly successful App Store, the two alternative approaches remained available, and PhoneGap quickly sprang up to offer the hybrid option.
To most developers familiar with the concepts of Open Source and web standards, consistent even with Steve Jobs’ initial vision, the notion of building with a cross-device, standards-based methodology felt like the right thing to do. From the pure standpoint of efficiency it made a ton of sense. A single code base could feed multiple devices through either the web (web sites or web apps) or stores (hybrid apps compiled from the same web source). What were the problems?
On the surface, before significant apps were built, the obvious problem was the lack of direct access to hardware features such as the accelerometer. While performance was also not going to be quite as good, the tech community knew this was not a true technical obstacle, and presumed that the trajectory of hardware, coupled with better OS software, would not keep performance down forever. And there were several examples where performance was already good enough.
Web and hybrid apps were embraced by many serious companies, including Facebook, LinkedIn, and the Financial Times. Certainly HTML5 would provide a powerful cross-device development language whether you wanted to go through an App Store or not.
As I wrote in this blog post, somewhere around 2012 it became apparent, even to most of these tech leaders who had invested millions in their web and hybrid apps, that native app performance, coupled with powerful development tools like Apple’s Xcode, gave native apps a substantive advantage.
“When I’m introspective about the last few years, I think the biggest mistake that we made as a company is betting too much on HTML5 as opposed to native. Because it just wasn’t there.” — Mark Zuckerberg, 2012
So even early web app believers were disillusioned. Native seemed to win. But that certainly didn’t stop any of them from maintaining an HTML code base. As Mark pointed out in the remainder of that quote, “One of the things that’s interesting is we actually have more people on a daily basis using mobile Web Facebook than we have using our iOS or Android apps combined. So mobile Web is a big thing for us…”
So here we are at round three. With iOS 8, Apple is making a huge step forward. HTML5 experience is deeper than it was, HTML5 libraries and development toolchains are steadily evolving, and it is quite possible that web apps and hybrid apps will finally have their day.
by Max Dunn
famo.us is a 2.5-year-old Silicon Valley startup that claims to have solved the performance challenges of HTML5.
“Performance challenges?” you might ask, but only if you hadn’t yet heard the tales of Facebook and LinkedIn doing an about-face on HTML5 in favor of native applications. As I blogged about a year ago, HTML5 has had mixed results in the wild, driving many to adopt native or hybrid native/HTML5 strategies. As I discussed in describing the event where I first encountered famo.us, the classic example of poor HTML5 performance is the scrollview. Quoting Trunal Bhanse of LinkedIn:
“Mobile devices have less memory and CPU power compared to Desktop computers. If you render a very long list in the HTML, you run the risk of crashing the device. This makes it challenging to build large, interactive HTML5 apps for mobile devices. Native technologies provide UITableViewController to build long, infinite scrolling lists. UITableView contains reusable UITableViewCells which are optimized for memory, performance and responsiveness. For HTML5, we did not have any solution. So we set out to build one!”
The article by Bhanse is a great example of the hurdles one has to go through to create an experience with HTML5. Shouldn’t something like this be easy?
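To make the problem concrete, here is a minimal sketch (my own illustration, not LinkedIn’s code) of the arithmetic at the heart of any such HTML5 scrollview: only the rows near the viewport get real DOM nodes, and nodes that scroll out of view are recycled.

```javascript
// Given a scroll position, compute which rows of a long list actually need
// DOM nodes; everything outside this window can be recycled, so memory use
// stays proportional to the viewport rather than the full list.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}
```

A real implementation keeps a small pool of row elements and repositions them as this window shifts on scroll, but the window calculation above is where the memory savings come from: a 10,000-row list might only ever hold 20 live DOM nodes.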
The famo.us founders have explained in numerous presentations how they went about creating their own rendering engine, and have shown impressive demos. The latest reports are that the famo.us library has four components: a rendering engine, a physics engine, a gesture engine for input, and an output engine. famo.us says it plans to open-source the entire library under the Mozilla Public License Version 2 some time in 2014.
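From what has been shown publicly, the core rendering trick is to bypass browser layout entirely: elements are absolutely positioned and moved with CSS matrix3d transforms, which the browser can hand off to the GPU as compositing work rather than reflow. A rough sketch of the idea (my own simplification, not famo.us’s actual API):

```javascript
// Build a column-major 4x4 translation matrix in the flat 16-element form
// that CSS matrix3d() expects (translation lives in the last column).
function translateMatrix(x, y, z) {
  return [1, 0, 0, 0,
          0, 1, 0, 0,
          0, 0, 1, 0,
          x, y, z, 1];
}

// Position an element purely via transform: no top/left changes, so the
// browser never has to recompute layout for the rest of the page.
function applyTransform(el, matrix) {
  el.style.transform = 'matrix3d(' + matrix.join(',') + ')';
}
```

Because a transform change affects only compositing, animating hundreds of such elements can stay at 60fps where equivalent top/left animation would choke on reflow.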
While the general response to famo.us has been an enthusiastic clamor from developers to join the beta (70,000 have reportedly signed up), and there is certainly rapt attention at developer conferences and meetups, the way they are going about promoting this technology has rubbed many in the development community the wrong way. Several things seem to have triggered skepticism.
Steve Newcomb is a very passionate person. He talks with a style that echoes Steve Jobs: his goals are nothing short of changing the world. A seasoned entrepreneur, Newcomb has written essays about “Cult Creation” as a metaphor for his company- and team-building success. From his LinkedIn profile description of his work at famo.us:
“Microsoft and Apple owned the OS, Oracle owned the database, and Google owned the search engine, but no one has ever owned the UI layer. Whoever does own it for mobile devices will own something insanely valuable – every tap event that exists for each user. Imagine the company that owns the UI layer on top of Facebook, Twitter, LinkedIn and Gmail, that would enable that company to build the first unified social graph.”
Perhaps there is a bit more than saving the world on his agenda… When I saw Newcomb speak in San Francisco, he told a story of building a computer with his father, and the moment of joy when typing a “k” key on the keyboard made a letter appear on the screen. This was perhaps a perfect metaphor for his mighty framework coming together, but it was also eerily similar to a scene in the movie “Jobs.” It is sometimes hard to tell where the genuine technology passion ends and the hype begins.
Newcomb is a consummate salesperson, and when he describes the technology, he can make statements that are slightly inaccurate technically. One huge example is his oft-repeated claim that famo.us talks to the GPU directly. For example, from a VentureBeat interview:
Technically, the conversation is not so direct (see this presentation or this explanation for the more granular picture). The “direct to GPU” message may work with investors, but this sort of thing does not work as a sound bite with developers; instead it triggers their BS meters. In Newcomb’s defense, he has provided more detailed grounding in reality in his more in-depth presentations.
“16,000 developers have signed up for the beta, but ‘we are not letting any of them touch anything yet.'”
What is the point of a beta? The lack of anything tangible for the development community to test certainly sets famo.us at square zero in terms of developer adoption, whether or not they have done anything meaningful for web development.
So the “traditional” approaches to standards-based development were for documents, not apps, and we must then do something different. Newcomb has outlined a vision of a jQuery-like, accessible-to-mere-mortals approach to such an API. That sounds absolutely great, once we get past abandoning standards 20 years in the making, but an elegant, human-usable API is a goal orthogonal to the performance work they have demonstrated. It would seem that if famo.us were serious about such a goal, they would engage some of the 70,000 signers-up with some actual code sooner rather than later.
by Max Dunn
This was for an application we built some time ago, but one that is very dear to my heart: our Silicon Paginator implementation for Royal Caribbean Cruise Lines. We worked very hard on this, and it represented a stage in the evolution of Silicon Paginator that was important to me at the time, and is still relevant to our future.
There are many cool aspects of this application, but what I think is most significant is the pure InDesign Server automation that renders very high-quality output from very complex data, following a structure-to-style mapping that lets InDesign templates format content from (mainly) XHTML structures into output that looks hand-crafted. The way it was built was also a significant step up from how we had done this before: we took on this project right after hiring one of the first in a string of top-notch software engineers who have joined us in the past few years, and he took my crude hacks of years earlier and rebuilt the processing the right way with more modern technology.
The Royal Caribbean implementation of Silicon Paginator represents the culmination of earlier work with XHTML as a data source for InDesign Server applications. One particular project that had a great deal of similarity was the content management system we built for ACTS Seminaries, and attempted to productize as “Instacatalog” in 2003. This was prior to InDesign Server, but at the time we automated InDesign desktop for catalog output. The application worked by ingesting XHTML, parsing it with a stream parser (then SAX-based) and analyzing each node of content to determine its structural context to render it in InDesign in the correct style, given style mappings between the CSS for the web and the InDesign object styles for print. That application and others like it worked quite well, but I had hacked them in fairly precarious ways. In particular, the processing wasn’t elegant at all.
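The core mapping idea can be sketched in a few lines; the mapping entries and function name here are hypothetical illustrations of the technique, not the actual Instacatalog code:

```javascript
// A simplified sketch of structure-to-style mapping: as the stream parser
// walks the XHTML, each element's structural context (its ancestor path)
// is looked up to find the InDesign style it should be rendered with.
// These example entries are hypothetical, not the real mappings.
const styleMap = {
  'h1':       'Heading 1',
  'p.intro':  'Intro Paragraph',
  'ul li':    'Bullet Level 1',
  'ul ul li': 'Bullet Level 2',
};

function indesignStyleFor(contextPath) {
  // Try the most specific (longest) suffix of the structural context first,
  // then fall back to shorter suffixes. Unmapped structures return null,
  // which is how the application could validate and reject input rather
  // than render it incorrectly.
  const parts = contextPath.split(' ');
  for (let i = 0; i < parts.length; i++) {
    const key = parts.slice(i).join(' ');
    if (styleMap[key]) return styleMap[key];
  }
  return null;
}
```

The suffix-matching fallback is what makes limited nesting (a list inside a list, but only so deep) both workable and easy to extend: supporting one more level is one more mapping entry.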
Despite the lack of elegance back then, it was impressive how well it worked. The one limitation (still a limitation) is that it didn’t handle absolutely arbitrary structures: you could put a list inside a list, but only so many levels deep, for example. Yet even then it was able to validate and reject cases where users tried to go outside such limitations. And whatever structures anyone needed, it was easy enough to extend.
With Royal, it had to scale. The volumes were huge, there was certainly not going to be any manual inspection of things. We were fortunate to have more product-level developers working with us, and the whole transform piece was refactored to work very efficiently. Template setup was fairly manual, though we had some utility scripts that were helpful and we developed some straightforward setup guidelines. These days we are much better at tooling the document setup aspect of this sort of application.
The text-intensive form of output, with tables flowing across pages from sources like this, is still in the minority among Silicon Paginator implementations. More commonly we have relational data sources, with only crude markup within database fields, producing directories and catalogs more than long stories of rich text; but applications like this are the fun part. How far are we going? Do we merge cells? Lists within lists within tables? Tables within lists within tables? It is fun to gain control over the transformation between different forms of rendition, with HTML posing some unique challenges and opportunities. While Royal may enjoy the tangible benefits of automated publishing, I enjoy the technical details: seeing what I had personally cobbled together 10 years ago get transformed by real developers into a truly reusable set of tools.
by Max Dunn
But it isn’t just the technological genius of Steve that is amazing, it is also his integrity as a human and his devotion to his work. If those things were present in more humans around the world, we would have no war or hunger.
I got into computers early in life, first doing a little bit of programming at the Lawrence Hall of Science in Berkeley around 1974, when I was a kid. I even ran into home-built computers at the time: at summer camp there was a man who taught us how to build a computer from a minimal set of chips. If this same guy wasn’t part of the Homebrew Computer Club (and it is perfectly possible he was) that Jobs and Wozniak later joined, he might as well have been; it was inspiring to be able to say “I built a computer,” even though it was pretty useless. I wasn’t really interested in such things at the time, but the memory comes back when I read about early Apple history.
Looking back, any great invention will appear obvious, like it would have inevitably happened, but there are at least two dimensions to any major technological advance, the concept and the implementation. Steve Jobs had a very fundamental concept of the personal computer that might not have been original, yet he had such an ingenious vision of its implementation that for all practical purposes he is owner of the conceptual advance.
Once the conceptual battle is won with a new invention, the implementation and evolution provide a lifetime of potential work, and Steve took this on as few others in the history of science. So many scientists who push forth a new idea are blind to its repercussions and unable to produce further advancement, yet Steve was the opposite extreme, re-thinking technology even in the last years of his life, 35+ years since his fundamental breakthrough.
I first encountered the work of Steve Jobs when I saw the Apple II computer, which I learned to program in 1980. Concurrent with this I was working on a DEC-20 mainframe, and the difference between the two was night and day. The personal computer liberated one from the centralized system, from punch cards, from the ridiculous sharing of scarce resources such that you might wait 24 hours to see whether your program had compiled or not. It is impossible to explain to young programmers of today just how bad it was pre-PC, but I had a glimpse of it and it definitely sucked.
While I was in awe at the fundamental chasm between PC and mainframe, I didn’t appreciate the design of the Apple II in relation to its competition until I looked back, as I later saw Apple’s computer design evolve through the variants of the Macintosh over the years. Yet with hindsight I can see that the Apple II already expressed the phenomenal design ambition/insight of Steve Jobs.
At the time I used the Apple II, it was simply one of the few options available, and given that I was using it for digital audio at a time when processor speed and availability were so anemic, it still felt more like the time I had played with a home-built computer. Certainly there were some use cases for which this was already a practical device, yet it had extreme constraints compared to what would come next.
The second time I was impacted by Steve’s work was when I started using and programming the Macintosh, back when it was that single box with a keyboard and mouse circa 1984. It is amazing to look at this compared to the Apple II and compared to today’s Macs… at the same time it looks advanced and ancient. That is how far we have come with personal computers, and shows the incredible speed at which Steve’s innovation went. The early Mac was also interesting not just in terms of design, always Steve’s obsession, but also in terms of the early adoption and focus as a tool of expression as opposed to a business device. Once there was the IBM PC to compete with it, the vision of Apple could be seen clearly, and here was the expression of Steve’s great mind. The Mac was what one used for art or music, right from the start. And here we saw digital audio (my obsession back then), which had been completely painful with the Apple II and IIe, find a home. Ever since the first Mac, the personal computer as a tool of artistic expression has been owned by Steve Jobs.
There was a whole evolution of the Mac from the mid 1980s to the late 1980s, where it seemed to get more powerful. Yet while continuous incremental improvements in things like processor speed and monitor/video card power, coupled with the evolution of the software on top of them, suggested progress, the kind of visionary transformation that had happened between the Apple IIe and the Mac left Apple, along with Steve Jobs.
I never actually used a NeXT box, but I dreamed about one and read about it for two years. I was at that point starting to realize how incredibly important a visionary like Jobs can be. It was no accident that stunning advances in technology were now happening at NeXT instead of at Apple. NeXT was quite innovative in so many ways, and it was of interest to me through my interest in computer music (which was my only interest in computing for the first 15 years or so).
NeXT was in some sense vaporware: academic departments across the US (around the world?) made plans around it, based on promises of free or cheap NeXT machines for academia and a significant amount of testing and input from academics. Computer Music Journal would present the NeXT audio capabilities as second to none, right above the digital audio and multimedia that was, at the time, starting to fully work on the Macintosh across the newer, more expensive models that iterated throughout Steve Jobs’ absence.
Many have spoken of Steve Jobs’ “failures,” and some would cite NeXT as one. The complaint is similar to complaints about Larry Ellison: that he has a loose concept of “we’ve done that” equal to his ability to think through how one might do something. But technology is fragile; in many senses the concept *is* the advance, as long as you have the integrity, as Jobs and Ellison did, to fulfill the implementation of the concept.
The Mac kind of left my life when Steve Jobs left Apple… I dreamed of a NeXT box but it never materialized as such. Eventually, everything in NeXT showed up in Apple products.
I saw Steve Jobs in person for the first time at the JavaOne conference in 2000, when he showed a preview of OS X. It was quite stunning, and the Unix underpinnings, NeXT features, and built-in Java ended up being quite compelling for developers, who really hadn’t been enamored of the Mac other than in specific cases (those developing multimedia software, for example). At subsequent JavaOne conferences Mac laptops became increasingly prevalent. This and some of my work requirements led me to start buying Macs again in the early 2000s.
I saw Steve Jobs for the second and last time at the Apple WWDC in 2006. I had been using Macs more often, yet was not impressed with the performance of pre-Intel Macs, even with OS X. When my company worked on benchmarks for Adobe of InDesign Server across Windows and Mac servers, the results were unreportable because the Mac performed so badly compared to Windows boxes, and nobody at Adobe wanted to make this public (they still had hopes for the Mac, and fear of Microsoft, at that time). The G5 chip was nothing to write home about, and Apple noticed and moved to Intel.
The move to Intel proved to be brilliant, for at least two reasons: performance surpassed Windows, and running virtualized Windows proved good enough that I could stop lugging around two computers. I was impressed that, as an Apple developer, I was shipped Intel-based hardware prior to the official announcement of the move. From the time of the Intel chip and Mac OS X, the Mac has once again been my primary platform, and all my laptops since 2006 have been Macs: Windows is still there when needed in a VM, but I have needed it less and less.
Of course the impact of iTunes, iPods, the iPhone, and the iPad has been another aspect of Steve’s incredible vision influencing the world. My family has become an Apple ad, with iPhones, MacBooks, etc.: my one-year-old knows how to swipe the iPhone, open apps and use them. I am not completely fond of Steve’s vision of tablets (I’m still expecting more freedom and computer-like features, not just a large phone), yet I have to acknowledge he got it right enough to finally reach the public with such a device footprint, and Google is too dumb and fat (they are clearly as arrogant and inept as Microsoft) in trying to compete. On first glance his staunch opposition to Flash looked somewhat tainted by a monopolistic tendency, no doubt gained in his ongoing rivalry with Bill Gates (certainly he took on a few traits of the enemy along the way) yet I think he ultimately had a positive impact on Adobe by challenging their overconfidence in the Flash “platform” and at least partially supporting standards.
If I had to pick the greatest technologist born in the 20th century it would either be Steve Jobs or Charles Goldfarb: I am very thankful to have been able to live in their general neighborhood (Northern California) and enjoy the technologies they have brought us. I am glad Steve died now rather than two years ago, as only in the past two years did he truly “win” in the Jobs/Gates war by every last objective metric.
I loved the dialog from Pirates of Silicon Valley:
Steve Jobs: “We’re better than you are, we’ve got better stuff”
Bill Gates: “That doesn’t matter”
I am glad that Steve hung in there despite his health battles to show once and for all that right can triumph over might, and the best can prevail. He will always be a great inspiration for us all.
It does matter.
by Max Dunn
My report about this Adobe MAX is delayed and based mainly on post-conference analysis, as during the conference I was busy announcing, explaining, showing, promoting and helping develop our Fotolia CS Extension. This year we had a booth for Silicon Publishing, which meant I attended keynotes and sneaks but no sessions; it also meant I was able to talk quite a bit with people who did have the luxury of attending sessions. Thanks to our recent success as a company, I think the majority of the people I spoke with were the ones leading sessions. I am thankful first for the honor of working with the Silicon Publishing programmers, who today are of such strong quality, and then, through association with them, for getting to meet other geniuses of publishing today such as Chris Converse, David Blatner, and James Boag.
Overall, the news is positive, very positive… Adobe has many HTML5 initiatives; the Creative Suite is going to do what it should have done all along, allowing InDesign to be a control center for multi-channel publishing; and Adobe does, after all, have a sensible understanding of the low-end market for its Digital Publishing Suite. The “Creative Cloud” was set forth as the main message: Adobe seeks to put Creative Suite sorts of capability, relevant to the new mobile/tablet/HTML5 reality, into SaaS- or PaaS-model offerings.
I am still trying to look deep enough to find evidence that this is being done in anything but a Google-style “spray and pray” way (i.e., putting out 100 apps and keeping the surviving 13); if there is a downside to the current apparent trajectory, that is it. Yet given the disruption that the Macromedia acquisition initially brought to the Creative Suite (for a moment InDesign looked “legacy” while Flash looked like a “platform”), and the concurrent “anything for the street” mentality of a company that spent $3.5 billion and needs to earn it back somehow, I am thankful for the spray-and-pray approach: at least it tries the 100 things along the lines of Adobe’s core competence as a business, unlike random non sequiturs such as Business Catalyst. After that one, I was waiting for Adobe to announce it was selling toilet paper on a subscription basis. But I digress.
Before diving into the present, let’s look at the past 6 years a bit.
Ever since Adobe bought Macromedia, I have been attending the MAX conference. The first “Adobe MAX” in 2006 was interesting to me… I spoke there about DITA and eLearning, and was able to discuss technologies that had roots in both Adobe and Macromedia, with assistance from experts from both companies: the SCORM expertise of those who had put SCORM support into Macromedia Flash, and the DITA and print-rendition expertise of FrameMaker and InDesign technologists. In a pure technology sense, it was exciting to consider the merger. As usual, I considered it in relation to standards… as far as DITA and SCORM went, I had little to complain about, as expectation/hope had hardly been raised: these tangents to rendition technology were obviously better off thanks to the merger. SVG? Good thing I wasn’t presenting on that topic.
Politically, it was strange. Hanging out with speakers waiting to present, I noticed a surprising degree of fear and suspicion between the former Macromedia people and their new colleagues. The impression was that Macromedia people were in fear of their jobs, and resentful of the acquiring company. Presentations around Macromedia products (and at this point non-Macromedia products were just barely mentioned at MAX) would often include bashing of Adobe equivalents… “you’ll notice our software loads quickly: we hope to teach the Photoshop team a thing or two…” Such jabs really didn’t fit what Adobe had been before, but were probably natural side-effects of fear and uncertainty.
Where was HTML5 at this time? There are two perspectives:
Adobe had actually led the SVG effort in 1999 and 2000, but had not invested in it in a broad way, and by 2006 many were proclaiming the death of SVG. To the great credit of Open Source software, Mozilla and some others didn’t see it as dead, and were quietly working away at SVG support.
As for HTML5, it was right around the time of MAX 2006 that Tim Berners-Lee and others embraced the HTML5 effort.
Adobe, at the time, had swallowed the Macromedia egg and started to delete SVG support from its applications, beginning with the discontinuation of the Adobe SVG Viewer.
Within a year, the nervousness of Macromedia speakers at MAX 2006 proved completely unfounded. The tables had completely turned… By MAX 2007, it was clear that the Adobe acquisition was balanced to a fault: the joke by then was that Adobe “acquired” Macromedia but Macromedia took over Adobe. Former Macromedia people were enjoying great political clout, Dreamweaver had replaced GoLive, and the Flash Platform was starting to look like invincible technology. By MAX 2007 there were some very cool examples of collaboration between the amazing engineering talents of the two companies.
In my case, I was fascinated by the Dandelion project that was demonstrated in Chicago: my company eventually had the honor of inheriting this prototype application (Adobe didn’t wish to support it after it was officially abandoned), which represented a high-level combination of technologies. I had hoped for a more low-level reconciliation of the extremes, but while the acquisition was ambitious, it was extremely pragmatic. There were very early indications at MAX that Flash 10 would have much better text, but the Flash 9/InDesign integration that Dandelion represented had to hit the InDesign Server for high quality text rendition.
Adobe MAX 2007 also included the first demo of “Thermo,” which later became Flash Catalyst. It looked quite stunning: a rich internet application could now be instantly created by a Photoshop designer, apparently. Ever since that first one, Catalyst demos have shown a much less rosy picture. As of MAX 2007 we didn’t have much insight into FXG or the Spark component model, which would be the underpinnings not just of Catalyst but would also be influential across numerous Adobe applications, and would have direct relevance to the “Flash vs. HTML5” question.
I am the gullible, naive optimist who can imagine a company like Adobe having the guts not only to spend $3.5 billion on Macromedia, but also to think of technology first when assimilating such a company. Hadn’t Adobe supported SVG, the standard for vector graphics? At least one person there did, as he wrote the spec. Yet I have gradually gained some insight over 15 years of having my hopes for standards constantly disappointed: even by 2007 I realized that if anyone at Adobe supported SVG, it wasn’t the person with the $3.5 billion. No, that person supported the goals of the public company, like any good corporate shepherd.
Adobe could have had the guts to kill Flash right then and there. They actually could have killed it in 2002 or 2003 if they had been more ambitious and less fragmented with SVG, in my optimistic opinion. But as a public company they have to be so prudent, the SVG “silo” had to be incubated and analyzed to see what chance it had to bear fruit… and oops, it is not just a technology, it is a standard. The sensitive issue of when/if to support what form of “standard” is a balancing act for corporations, there is rarely an actual person leading these companies, capable of true support for standards: instead, committees define the proper balance between cheering on the standard (when politically convenient) and copying it into proprietary formats.
But no, in 2007 SVG was off Adobe’s radar and they were full steam ahead with Flash. Flash in Adobe’s hands made huge steps forward. They dramatically improved performance, they made stunning text improvements (“Vellum”), and Flex took shape as a serious development environment.
Adobe MAX 2008 showed the fruits of the first low-level integration work between the two companies that made a difference from the perspective of our company. We saw the Flash 10 Text Layout Engine, and thought for a moment… maybe standards just aren’t so important. For the first time, on the web, we can do things with text that should be basic to human communications. This is through proprietary technology, but it isn’t so bad…
Adobe MAX 2008 was the point at which the glimmer of Flash as a ubiquitous “platform” shone brightest. The fact that it didn’t run on mobile at all didn’t bother us then, because there were still graphs and charts implying it *would* run everywhere, naturally, of course. As of 2008 they weren’t spinning or faking these graphs and charts. It was already apparent that iPhones weren’t supporting Flash; it was becoming increasingly noticeable, and some were already proclaiming it would never happen. But as of MAX 2008, many still believed that Adobe was working out the engineering and there would be an announcement any day.
Where was HTML5? The spec was moving rapidly, WebKit was advancing quickly, and Mobile Safari was starting to support SVG.
By Adobe MAX 2009, the Adobe/Macromedia rivalry was a completely internal affair… the public didn’t know or care about anything going on there. By this time it was Apple vs. Adobe, pure and simple.
Johnny and Kevin pretended all was fine in terms of Macromedia/Adobe, and that a VM approach to iOS apps would cure everything. Yet (a) Macromedia technologists had been given responsibility beyond their capability, and (b) Apple was hell-bent on attacking any such VM approach, thanks to the rare case of a human being actually running a huge company. It appears that Steve Jobs personally made sure that Adobe would not casually virtualize iOS development.
By 2009 HTML5 was moving rapidly, there were cool demos on Apple.com, and demos based on nightly builds of WebKit were getting quite stunning.
This made for interesting shakeups prior to Adobe MAX 2010… by MAX 2010, Apple had relaxed its restrictions on virtualized iOS apps, and Adobe had gone further with Flash-to-HTML5 export. The 2010 keynotes had more dumb messaging and less content than any MAX I had seen, including puppets representing Flash and HTML5 learning to “live together.” In a rare moment of meaningful content, Kevin Lynch spoke of Adobe’s work contributing implementations of CSS Regions and CSS Exclusions to WebKit.
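For readers who never saw the proposal: the idea behind CSS Regions was essentially linked text frames for the web, letting content flow from a named source into a chain of boxes. A minimal sketch of the concept, using the prefixed property names from the early WebKit prototype (exact names and prefixes changed across drafts, and the class names here are placeholders):

```html
<!-- Sketch of the early CSS Regions concept: article content is pulled
     out of normal flow and poured, in order, into a chain of region
     boxes -- much like linked text frames in a page-layout application.
     The -webkit- prefixed properties reflect the early prototype syntax. -->
<style>
  #article { -webkit-flow-into: story; }  /* source: content leaves normal flow */
  .region  { -webkit-flow-from: story;    /* sinks: content pours in, in order  */
             width: 200px; height: 300px; }
</style>

<div id="article">
  <p>Long magazine-style text that overflows from one region
     into the next as it runs out of room…</p>
</div>

<div class="region"></div>
<div class="region"></div>
```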
Since MAX 2010 these minor efforts have borne fruit… Adobe’s contributions are in mainline WebKit and early Chromium builds, and have been implemented in the IE 10 preview. Kevin Lynch recently summarized his perspective on the state of HTML5/Flash just prior to MAX 2011. Yet there are probably much deeper repercussions of the growth of HTML5 for Adobe’s technology path. Certainly the Flash “platform” is no longer aiming where it once aimed, and FXG is either deprecated or relegated to maintenance mode; there are still advances that keep Flash ahead of HTML5 in some respects, but there is also a rush to embrace HTML5 with new tools like Edge and Muse, as well as new features of Dreamweaver and other CS apps. Adobe is covering both bases, probably with more emphasis on the HTML5 side.
Packaging Flex for mobile devices has improved over the past year, and AIR support on the iPad 2 offers a better option than previous “build in Flash, run on iOS” attempts. Using Flash or other Adobe tools to create iOS native apps (as targeted by the Adobe Digital Publishing Suite) seems better handled, and more of a priority for Adobe, than reconciling Flash with HTML5. Flash Media Server can now serve up HTML5 video, and Scene7 has some HTML5 features. Many small moves in the HTML5 direction, but nothing fundamental.
Adobe MAX 2011
So here we are at MAX… based on what Adobe had put forth recently, as well as the sessions scheduled for MAX, my guess was that we would see a two-track approach: Flash would get better, staying ahead of HTML5 in some respects, yet there would be more HTML5 tooling, more HTML5 features, more forms of conversion. My main question was whether any fundamental core technology would take shape. When “Apollo” (AIR) came out, there was the big question of how to reconcile all the formats… the FXG effort was a possible move toward reconciliation. But Flash Catalyst did not live up to expectations, and neither did the Flash “Platform.” On the HTML5 side, Adobe has scrambled to rush out patches and prototypes that do a few things, but has a fundamental core technology direction taken shape yet?
From MAX 2011 it seems my suspicion was correct: Adobe is pursuing both the HTML5 and Flash tracks in a pluralistic approach, trying to catch up quickly in the HTML5 space while continuing to push Flash where it enjoys success. But my hopes for a core technology approach have not yet been fulfilled. It appears that everything is so new, and so much growth has come by acquisition, that the core tech ambitions have evolved toward a more pluralistic, less integrated vision than was put forth a few years earlier, when FXG seemed to bridge the Creative Suite apps with Flash and Flex. Or maybe I just missed some obscure key session… if anyone knows of such a thing, please clue me in.
The MAX conference coincided with the release of Flash Player 11 and AIR 3, continuing progress in Flash and AIR that has been moving steadily for years. The “Molehill” technology (hardware-accelerated, high-performance 2D and 3D graphics), presented at the last MAX, finally shipped with Flash 11 as “Stage 3D.” The Flex capability to create applications for mobile and tablet was touted, with the same vision as last year but much more tangible results. There was no revolutionary advance on the Flash side of things, but the amazing performance of the player moved forward enough to show that Flash is likely to retain viability in the areas where it is still relevant, and can still light the way for HTML5 by being the first technology to make certain things possible on the web.
While the things Adobe did show about Flash were cool, what they did not show was just as telling… mainly, the killer demo of the past four MAX conferences, Thermo/Flash Catalyst, was nowhere to be found in the keynotes, though it did enjoy a session or two – it appears they may have gotten round-tripping to work, sort of. But the momentum of this product appears to be gone, between the more limited use cases now envisioned for Flash and the slow momentum of the technology itself. I would not be surprised to see Catalyst go away… it now looks more like a utility than an application.
What had been the promise of Flash Catalyst? That designers could create Rich Internet Applications. It now appears that Adobe no longer thinks of Flash as the primary form of such applications… as evidenced by comments in the latest Adobe earnings call, Adobe is aware that HTML5 is a more compelling output for an application like Flash Catalyst.
A number of already-announced initiatives for HTML5 were presented at MAX:
Muse seems to have usurped Flash Catalyst as the shiny button for magically generating code without coding. As its output format is so new, it will probably follow a path much like Flash Catalyst’s: enjoying a honeymoon period while nobody expects things to round-trip, then settling into a role more as a utility than a true authoring application once the realities of real web publishing and the technical challenges of round-tripping become apparent.
Beyond the pre-existing HTML5 technologies, the following new initiatives were announced or previewed:
I was most impressed by the HTML5 features of the next InDesign, which Kiyo showed in the Sneaks; they are discussed and shown on InDesign Secrets. This makes so much sense… designers are already in InDesign, an extremely well-built layout tool, and there is already a very common requirement to “repurpose” print content from InDesign for web and tablet. Yet InDesign has had starts and stops in its output direction… SVG export came and went, HTML export has taken several different forms, and the Digital Publishing effort started as a clumsy, crude hack, though it did at least start from an InDesign-based workflow.
The hope is that Adobe has now learned from experience that InDesign is not some sort of “legacy print app” ready to be put out to pasture, but a powerful design tool for multiple media. We have already been using it as such for many years, simply by leveraging its wonderful exposure to automation; with CS5.5 and the newly planned features, there look to be some directly usable multi-channel capabilities that will require far less work. It would appear that when Flash Catalyst appeared on the scene a few MAX conferences back, the InDesign multi-channel concept went on hold, and it has been dusted off now that Flash and Flash Catalyst look far less omnipotent.
To read the PR about the Financial Times, you would have thought they had just invented the Web App. Yet as with so many “new” inventions (“AJAX”, “Cloud”…), the FT is merely leveraging technology that has been around for years. There is some newsworthy relevance to it, as they do represent a minority in the news industry and they apparently did build a decent app; so on balance, just as with AJAX and the Cloud, it is so inspiring to see the technology succeed that the late adopters painting themselves as innovators can be forgiven.
Web Apps for devices such as the iPhone have been around since the iPhone itself: all the initial apps available for the iPhone were in fact Web Apps. Part of the initial PR for the iPhone was built around just such an approach to applications, and Apple still provides core developer support that helps truly innovative companies like Codify Design build such applications.
Apple is in the strange position of advancing a closed, proprietary system with iOS while simultaneously advancing web standards through WebKit and HTML5. When a company like Google or Facebook uses the Web App approach, it can look like a fundamental assault on iOS itself (and it was dramatized as such two years ago with Google, and last week with Facebook). Yet it would seem that Apple has gone far enough with support for standards that there is no turning back. Web Apps appear to be here to stay.
We at Silicon Publishing have seen great interest in tablet forms of everything we do, and while there are applications where native apps shine, our general philosophy is to use the Web App approach unless a native app has a really compelling benefit.
Certainly Web Apps are very cool in certain cases: a business distributing content to its internal staff, informational content distributed to consumers, and so on. Tim Berners-Lee has a point about the pitfalls of the “walled garden,” yet we still find ourselves building native iOS apps as well, and we can’t argue with those who think they need them.
We have two main products at Silicon Publishing: Silicon Paginator, which spews forth content from data sources, and Silicon Designer, which lets users edit documents online. Both are evolving to fully support tablets. Relatively simple extensions of Paginator (leveraging the thoughtful work of Codify Design, who help us with this) already render powerful output to tablets, while defining the Designer interface for tablets is taking heavy work, and completion is a ways off. Editing/authoring is exponentially more involved than publishing.
The problem of a tablet UI for editing is pretty much identical whether you are building a Web App or a native app… tablets present a very different UI, and working without a mouse makes text selection, for example, something of a challenge – one that even Apple hasn’t solved really well yet. Take editing inline formats, please (making a single word bold). I don’t believe an elegant interface for that currently exists. We’re working on it, and we’re poised to steal the best work of whoever gets it right.
It isn’t trivial to make Web Apps work. While tools are improving, they are just starting to evolve, and making things work is, as with any other new technology, a matter of surfing around for nuggets of information to get past the initial obstacles.
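To give a concrete taste of those nuggets: turning a plain web page into the kind of “add to home screen” Web App described above starts with a handful of markup hints Apple documents for Mobile Safari. A minimal sketch follows; the meta and link names are the ones from Apple’s documentation, while the icon and startup-image paths are placeholders:

```html
<!-- Minimal markup hints for an iOS "add to home screen" Web App.
     File paths below are placeholders, not real assets. -->
<head>
  <meta charset="utf-8">
  <!-- Run full-screen, without browser chrome, when launched from the home screen -->
  <meta name="apple-mobile-web-app-capable" content="yes">
  <!-- Style the status bar while in full-screen mode -->
  <meta name="apple-mobile-web-app-status-bar-style" content="black">
  <!-- Fit the layout viewport to the device width -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- The icon shown on the home screen -->
  <link rel="apple-touch-icon" href="icon-114.png">
  <!-- Splash image displayed while the app loads -->
  <link rel="apple-touch-startup-image" href="startup.png">
</head>
```

With just these hints in place, Mobile Safari offers the full-screen, icon-on-the-home-screen behavior that distinguishes a Web App from an ordinary bookmarked page; everything beyond that (offline caching, touch-friendly UI) is where the real surfing for nuggets begins.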