My Thoughts

React, Deck.gl, and Digital History in JavaScript

As a digital historian who is also a JavaScript developer, I frequently feel like an odd duck, mainly because the field is currently enthralled with R. There are many good reasons to use R. It is an incredible statistics-driven scripting language. But since my research has a distinctly spatial flavor (I'm especially interested in France's geopolitical community), I naturally gravitated toward JavaScript because of its web mapping capabilities. Since then, I have become more of an advocate for JavaScript for historians. Certainly, people should use whichever language suits their needs best, but I don't think JavaScript should be dismissed so easily. Since the ES6 additions to JavaScript in 2015, the language has become much more user-friendly. More importantly, Node.js has become increasingly popular, and the libraries available for statistical analysis (D3.js, simple-statistics.js, Tidy.js, Lodash.js, TensorFlow.js, etc.) have made working with JavaScript and data a breeze. But that is not the main reason I advocate for JavaScript. Really, for me, the focus is on the form. One of the best aspects of digital history is displaying information online and democratizing history. JavaScript is the language of the web, and it gives us the greatest ability to create interactive web visualizations that can interact with the text on the page. This ability has grown dramatically with the development of front-end frameworks such as React, Vue.js, and Angular, and with access to GPU-accelerated rendering through Mapbox GL, MapLibre GL, and especially Deck.gl. The possibilities these technologies provide for enhancing the form and usability of interactive visualizations and spatial analysis on the web make JavaScript an important language for digital historians to consider.

Digital historians must think about two parts of their project: the data analysis and the form by which it is presented as a narrative. R has satisfied both of these needs for some time. It is an impressive statistical programming language that makes visualizing data incredibly painless, and R Shiny apps make it possible to render those visualizations on the web. But R is really a data analysis language that can also operate on the web. JavaScript, on the other hand, is a web language that can do data analysis. Node.js was introduced in 2009, providing a runtime environment for JavaScript on the server. In other words, JavaScript can run anywhere, and it does run anywhere, including at LinkedIn, PayPal, NASA, Netflix, eBay, and others. Since 2015, Ecma International, the standards organization in charge of standardizing JavaScript, has introduced a series of features (classes, arrow functions, and modules, among others) that have made the language a truly general-purpose programming language. These improvements have been combined with a rapidly growing ecosystem of libraries that make JavaScript increasingly conducive to data analysis. D3.js, a data visualization library that includes statistical functions as well as the ability to produce incredibly bespoke visualizations, has made data analysis on the web remarkably powerful. For quicker visualizations that do not take as much time to produce, the creators of D3.js also built Observable Plot. Moreover, incredibly complicated processing has been made possible by exploiting the native WebGL standard, which gives JavaScript access to the computer's graphics processing unit (GPU) to handle heavy loads. For instance, while the famous TensorFlow machine learning library (written in C++ and CUDA) has packages for Python and R, they are primarily wrappers that run the native TensorFlow code. TensorFlow.js in the browser, on the other hand, runs actual JavaScript that gains access to the GPU through WebGL (the Node.js version of TensorFlow.js, by contrast, wraps the C++ and CUDA code much as the Python and R packages do). The point here is not that JavaScript is better because it can run TensorFlow natively, but that JavaScript has come a long way, a very long way, in the past seven years, and those changes make doing data analysis much easier. It is still not as easy to do data analysis in JavaScript as it is in R, or even Python for that matter. But if you ultimately want to put the data on the web, being comfortable working with and manipulating arrays of JSON (the data format of the internet) is essential.
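To give a concrete sense of what this looks like in practice, here is a minimal sketch of descriptive statistics over an array of JSON records using D3's array utilities. The records and field names are invented for illustration; the pattern is the same for any tabular data pulled from a CSV or an API.

```javascript
// A minimal sketch of data analysis over an array of JSON objects with
// D3's array utilities. The records and field names below are invented.
import { mean, median, rollup, sum } from "d3-array";

const records = [
  { parish: "Santa Croce", year: 1561, households: 412 },
  { parish: "Santo Spirito", year: 1561, households: 389 },
  { parish: "Santa Croce", year: 1632, households: 455 },
  { parish: "Santo Spirito", year: 1632, households: 401 },
];

// Basic descriptive statistics.
console.log(mean(records, (d) => d.households));   // 414.25
console.log(median(records, (d) => d.households)); // 406.5

// Group and aggregate, roughly analogous to dplyr's group_by() + summarise().
const householdsByParish = rollup(
  records,
  (group) => sum(group, (d) => d.households),
  (d) => d.parish
);
console.log(householdsByParish); // Map { "Santa Croce" => 867, "Santo Spirito" => 790 }
```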

But where JavaScript excels is in the form by which historians present their information, and here it excels like no other language. In this respect, it is the inverse of R or Python, and this is what makes it so important to historians. The glory of digital history is that the public can actually gain access to what you write. More people will read your work. A book by an academic press might (might!) print 400-500 copies that will be stored away in academic libraries. Digital history makes your hard work accessible to the public. Yet we lose those opportunities with bad user interfaces (UI) and bad user experiences (UX). People, including researchers, have no patience for bad UI and bad UX. It is the equivalent of bad writing. Here, JavaScript comes to the rescue. Facebook released React as an open-source front-end framework for creating dynamic single-page applications: pages that load once but use asynchronous data fetching to change the data on the page.[1] While it looks as if you have navigated to a new page, only specific information on the page has been updated. This operation creates a seamless user experience. More important than navigation, however, is how the framework (and others that followed, such as Vue.js and Svelte) makes interaction and reactivity (a term that refers to asynchronous data streams manipulating the state of the page in real time without the need for page reloads) so simple to accomplish.
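As a rough illustration, here is a minimal sketch of that pattern in React. The endpoint (/api/properties) and the shape of the returned JSON are hypothetical; the point is that the data is fetched asynchronously once and the interface then reacts to state changes without ever reloading the page.

```javascript
// A minimal sketch of a React component that fetches data asynchronously
// and updates the page without a reload. The endpoint and JSON fields
// (/api/properties, owner, rent) are hypothetical.
import { useEffect, useState } from "react";

export default function PropertyList() {
  const [properties, setProperties] = useState([]);
  const [selected, setSelected] = useState(null);

  // Runs once after the first render; the rest of the page keeps working
  // while the request is in flight.
  useEffect(() => {
    fetch("/api/properties")
      .then((response) => response.json())
      .then(setProperties);
  }, []);

  // Clicking an item updates state, and React re-renders only what changed.
  return (
    <div>
      <ul>
        {properties.map((p) => (
          <li key={p.id} onClick={() => setSelected(p)}>
            {p.owner}
          </li>
        ))}
      </ul>
      {selected && <p>{selected.owner} paid a rent of {selected.rent} florins.</p>}
    </div>
  );
}
```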

React has taken over the web. The companies that depend on it dominate internet traffic: Facebook (obviously, since they created it), Instagram, Netflix, Yahoo! Mail, The New York Times, WhatsApp, Codecademy, Dropbox, Blackboard, Canvas, and so many more. In an incredibly ironic twist, the WordPress blog editing tool I'm using to write this post uses React. If you have used ArcGIS StoryMaps, with its fancy responsive UX that manipulates visualizations on scroll, you have used a React app. React and similar libraries are used almost everywhere because of these features. Managing large, complicated sites becomes simpler and more maintainable. On top of that, spinning up a simple site is tremendously expedited: creating the boilerplate for a new site is as simple as running npx create-react-app [your program name here] in the console, and an environment for data manipulation and visualization is immediately created.

When we add other visualization libraries to React, the sky is the limit for how we can visualize and represent data, and how we can integrate it into the narrative. D3.js is particularly useful. Its functions for manipulating arrays of data in JSON are powerful, and it can create any visualization you can imagine. But the web's power becomes especially apparent when we exploit WebGL, a standard available in all modern browsers that gives JavaScript access to GPU-accelerated processing. Libraries such as Deck.gl have made exploiting this capability for visualizations particularly easy. (Writing raw WebGL code is difficult and time-intensive; I would not recommend it.)
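To make this concrete, here is a minimal sketch of a Deck.gl layer inside a React component. The coordinates and field names are placeholders rather than the actual DECIMA data, but the structure, a declarative layer handed to the DeckGL component with picking enabled for popups, is the standard pattern.

```javascript
// A minimal sketch of a GPU-accelerated map layer with Deck.gl inside React.
// The data and property fields (coordinates, owner, rent) are placeholders,
// not the actual DECIMA dataset.
import React from "react";
import DeckGL from "@deck.gl/react";
import { ScatterplotLayer } from "@deck.gl/layers";

const INITIAL_VIEW_STATE = {
  longitude: 11.2558, // Florence
  latitude: 43.7696,
  zoom: 13,
  pitch: 45,
};

export default function PropertyMap({ data }) {
  const layer = new ScatterplotLayer({
    id: "properties",
    data, // an array of JSON records, one per property
    getPosition: (d) => d.coordinates, // [longitude, latitude]
    getFillColor: (d) => (d.rent > 50 ? [200, 60, 60] : [60, 60, 200]),
    getRadius: 8,
    pickable: true, // enables hover and click interaction
  });

  return (
    <DeckGL
      initialViewState={INITIAL_VIEW_STATE}
      controller={true}
      layers={[layer]}
      // Show the underlying record in a popup on hover.
      getTooltip={({ object }) => object && `${object.owner}\nRent: ${object.rent}`}
    />
  );
}
```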

How amazing this combination of technologies can be is best demonstrated with an example. Below is an iframe of a visualization of Florentine properties in 1561. The data comes from the DECIMA Project. You can link to the actual page here if you want to view it in full screen. The visualization represents almost 9,000 properties rendered in three-dimensional space, and the browser handles it easily. More importantly, all of the data is rendered on the client at load time, so it can be filtered in real time without fetching anything after the initial load.
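Because every record already lives in the browser, filtering is nothing more than an array operation followed by a re-render. The sketch below shows one way to wire that up, reusing the hypothetical PropertyMap component from the previous example; the occupation field and its values are invented.

```javascript
// A minimal sketch of real-time, client-side filtering. No network requests
// are needed after the initial load; filtering is just an array operation.
// The component and field names are placeholders.
import React, { useMemo, useState } from "react";
import PropertyMap from "./PropertyMap"; // the hypothetical component sketched above

export default function FilterableMap({ allProperties }) {
  const [occupation, setOccupation] = useState("all");

  // Recompute the visible subset only when the filter or the data changes.
  const visible = useMemo(
    () =>
      occupation === "all"
        ? allProperties
        : allProperties.filter((p) => p.occupation === occupation),
    [allProperties, occupation]
  );

  return (
    <>
      <select value={occupation} onChange={(e) => setOccupation(e.target.value)}>
        <option value="all">All occupations</option>
        <option value="weaver">Weavers</option>
        <option value="notary">Notaries</option>
      </select>
      <PropertyMap data={visible} />
    </>
  );
}
```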

The map presents a large corpus of data in an immediately intelligible fashion without summarizing it, a step that would strip the underlying data out of the visualization. In other words, the data on the map is completely transparent. Every entry is visible on the map, and its underlying information can be inspected in the popup.

When you do digital history with JavaScript, you put your users first. It's much the same as writing with your readers in mind. The difference between the languages is where you will do more work. R and Python have great libraries for quick analysis; JavaScript can do all the same things, just with a little more work. R and Python have ways to build web apps, but it is going to be more work than in JavaScript, because JavaScript is built for web presentation. I think we should start taking JavaScript a lot more seriously for historical analysis.

Notes:

1. Asynchronous simply means that code that must wait for some response before it can run is placed in a queue and only runs after the rest of the synchronous code finishes. This prevents the page from halting a rendering (or any other) process while it awaits data from some other source.
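For a rough sense of what that looks like in practice, consider this minimal sketch (the URL is a placeholder): the fetch is queued, and the synchronous code after it runs first.

```javascript
// The fetch below is asynchronous: it is queued, and the synchronous
// console.log calls run first. The URL is a placeholder.
console.log("start");

fetch("/api/properties")
  .then((response) => response.json())
  .then((data) => console.log("data arrived:", data.length, "records"));

console.log("the page keeps rendering while the request is in flight");

// Output order: "start", then the rendering message, then "data arrived: ...".
```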
