Canvas 2D Web Apps/Introduction

About this Book
This book was written to provide a relatively easy way to create functional software prototypes for a large range of interactive applications, including non-standard multi-touch interfaces with multiple simultaneous multi-touch gestures. The rationale is that easier prototyping permits more prototyping, and more prototyping is likely to result in better prototypes. On the other hand, prototypes that use non-standard interfaces and other exotic technical features often require some form of programming; thus, this wikibook has a strong focus on how to program certain common features of interactive applications.

After considering various technologies, we decided to base this book on HTML5 web apps; more specifically, on the API (Application Programming Interface) of the HTML Canvas 2D Context. Here, a “canvas 2D web app” is an HTML5 web page that only shows a single canvas element but includes several JavaScript functions to render interactive 2D graphics within this canvas element. However, path objects and hit regions are not used since at the time of writing, the specification of the canvas 2D context states that these “features are at risk and may be removed due to lack of implementation.”

This approach has several advantages:
 * It supports many popular platforms (desktop web browsers, browsers on mobile devices, iBooks widgets, etc.).
 * It supports mouse and multi-touch interaction, animated graphics, and sound.
 * The limitation to a single canvas 2D context allows us to completely avoid CSS syntax, most HTML syntax, and many dependencies on browser extensions.
 * The 2D graphics programming can be simplified by focusing on rendering bitmap images (instead of vector graphics).
 * The approach allows us to use a single entry point (for each page) for all rendering and event processing.
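To make this concrete, the following is a minimal sketch of such a canvas 2D web app: a complete HTML5 page with a single canvas element and an inline script that renders into its 2D context. The element id, dimensions, and drawing commands are merely illustrative; later chapters use the cui2d framework instead of raw calls like these.

```html
<!DOCTYPE html>
<html>
  <head><title>Minimal Canvas 2D Web App</title></head>
  <body>
    <canvas id="myCanvas" width="640" height="480"></canvas>
    <script>
      // Obtain the 2D rendering context of the single canvas element.
      var canvas = document.getElementById("myCanvas");
      var context = canvas.getContext("2d");

      // Render a filled rectangle and some text into the canvas.
      context.fillStyle = "lightblue";
      context.fillRect(50, 50, 200, 100);
      context.fillStyle = "black";
      context.fillText("Hello, canvas!", 60, 70);
    </script>
  </body>
</html>
```

Note that no CSS and hardly any HTML syntax beyond this skeleton is required; all further functionality is implemented in JavaScript.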

While writing the book, it became clear that it would be useful to provide a framework in the form of a set of useful functions and to use it for the examples instead of starting every example from scratch. The framework is called cui2d (Canvas 2D User Interface), and the corresponding JavaScript file is called cui2d.js. Automatically generated reference documentation is available online. The code of cui2d, as well as the code of the examples in this wikibook, is in the public domain. The approach has a couple of advantages, as discussed in the chapter introducing this framework.

Who Should Read this Book?
The book is intended for anyone who is familiar with JavaScript and wants to prototype interactive applications with 2D graphics for web browsers or mobile devices. Readers should be willing to learn how to program the HTML5 canvas 2D context, but they do not need to learn much HTML or CSS syntax.

How to Read this Book?
The book starts with several “Getting Started with ...” sections, which introduce static canvas 2D web apps on various platforms; thus, you should read the section for the platform that you are interested in. The following chapters discuss the implementation of common features of canvas 2D web apps with the cui2d framework. Throughout the book, working HTML and JavaScript code is presented to allow readers to try out the discussed concepts.

Which Programming Paradigm is Employed?
The GUI programming in this wikibook uses a single JavaScript function for rendering and event processing. This is not a new idea:
 * The canvas 2D context specification includes an example (at the end of Chapter 12) with a function for drawing a checkbox that takes a boolean flag determining whether the function actually draws the checkbox. Even if the flag indicates that nothing should be drawn, the function still sets the current path of the checkbox, which is used in the event handler to determine whether the checkbox has been clicked. This approach requires identical calls to the function in the redraw functions and in the event handlers, whereas it is preferable to use only a single function for both redrawing and event processing.
 * The 2D GUI system of the game engine Unity also employs the same function for rendering and event processing. However, in Unity, this function is called every frame (usually at 60 or 30 frames per second) while it is often beneficial to call the function only when needed. Furthermore, Unity appears not to separate rendering from event processing, which makes it more difficult to process multiple events per frame (e.g. for multi-touch devices).

Since the rendering is based only on the GUI's state variables (including the current time), the event processing can be considered an implementation of the transitions between the GUI's states (by changing the state variables). Thus, the function for rendering and event processing corresponds to the step of an automaton in automata-based programming. Furthermore, this function may call subroutines for rendering and event processing of contained GUI elements (e.g. widgets), which may again call further subroutines, etc. In that case, the hierarchy of contained GUI elements is reflected by the call hierarchy of the program. This allows us to define and reuse standard widgets, which would be difficult in automata-based programming with a single switch instruction to distinguish between all states of the GUI.
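As a sketch of this idea, a single function can branch on whether it is called for rendering or for event processing. The function names and signatures below are illustrative assumptions and do not represent the actual cui2d API:

```javascript
// Sketch of a single function for both rendering and event processing.
// When "event" is null, the function draws the button; otherwise it only
// checks whether the event hits the button's rectangle.
function processButton(event, x, y, width, height, label, context) {
  if (event === null) {
    // Rendering branch: draw the button based on the current state.
    context.strokeRect(x, y, width, height);
    context.fillText(label, x + 5, y + height / 2);
    return false;
  }
  // Event-processing branch: report whether the event position is a hit.
  return event.x >= x && event.x <= x + width &&
         event.y >= y && event.y <= y + height;
}

// The same function is called from the redraw code ...
function redraw(context) {
  processButton(null, 10, 10, 100, 30, "OK", context);
}

// ... and from the event handler, which implements the state transition
// by changing a state variable (a real app would then request a redraw).
function onPointerDown(event, state, context) {
  if (processButton(event, 10, 10, 100, 30, "OK", context)) {
    state.clicks += 1;
  }
}
```

Because the hit test and the drawing share one set of coordinates in one function, the redraw code and the event handlers cannot get out of sync, which is exactly the problem with the checkbox example from the specification mentioned above.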

Since the GUI is always rendered from scratch based on the GUI's state variables, there are also similarities to reactive programming: the render function specifies how the GUI is constructed from its state variables, which is exactly what a reactive GUI program would specify.