Build data-driven environments for desktop, touch, and WebVR.

Hello World with Embedding.js

Creating your first data-driven environment

Just as D3 makes it easy to create web documents whose appearance and behavior are generated dynamically from data, Embedding makes it easy to create immersive environments using proper abstractions with readable, clean JavaScript.

This tutorial shows how quickly you can create your own environment, which you can then navigate and interact with from a desktop browser, a mobile browser on a phone or tablet, or a WebVR-enabled browser - all supported by the same code without modification.

If you want, you can check out some examples before returning here to roll your own.

Clone the embedding-boilerplate repo or download and unzip it.

$ git clone https://github.com/beaucronin/embedding-boilerplate.git
$ wget https://github.com/beaucronin/embedding-boilerplate/archive/master.zip \
  -O embedding-boilerplate.zip
$ unzip embedding-boilerplate.zip

Open the index.html file in your favorite editor and take a look. Note that a number of scripts are loaded in the head element - in addition to three.js, these support WebVR, including the default camera movement, lens distortion, responsiveness, and input behaviors.

<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r81/three.min.js"></script>
<script type="text/javascript" src="js/VRControls.js"></script>
<script type="text/javascript" src="js/VREffect.js"></script>
<script type="text/javascript" src="js/webvr-polyfill.js"></script>
<script type="text/javascript" src="js/webvr-manager.js"></script>

The style tag that follows ensures that the canvas element in which WebGL renders fills the entire window, with no margin or overflow.

body {
  width: 100%;
  height: 100%;
  background-color: #000;
  color: #fff;
  margin: 0;
  padding: 0;
  overflow: hidden;
}

Now take a look at the body script, which is where you'll do most of your work. The lines included there invoke a convenience function that configures the essential objects that you'll need in order to create and animate your environment. They also declare, but do not define, the dataset and embedding objects that you'll typically create - which we'll do next.

const { scene, camera, manager, effect, cameraControls } = EMBED.initScene();
var dataset, embedding;

Every Embedding.js environment will have at least one Dataset, and at least one Embedding. Datasets can be populated from many sources, including database queries, websocket connections, web APIs, and static files available from web servers. In this example, we'll use a CSV loader convenience function to load one of the most famous datasets in all of statistics, Fisher's Iris data.
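For reference, the Iris data contains four flower measurements and a species label per row. A few sample rows are shown below; the column names and their ordering are assumptions chosen to match the attribute names used in the mapping later in this tutorial:

```csv
sepal_length,sepal_width,petal_length,petal_width,species
5.1,3.5,1.4,0.2,Iris-setosa
6.3,3.3,6.0,2.5,Iris-virginica
7.0,3.2,4.7,1.4,Iris-versicolor
```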

Add the following lines of code to the script (the CSV path here is a placeholder - point it at your own copy of the Iris data):

dataset = EMBED.Dataset.createFromCSV('iris.csv', function(dataset) {
    // To be filled in below
});

While there are many creative directions we could take in embedding this dataset in space, we'll start very simply. In the Dataset.createFromCSV() callback, we create a MeshEmbedding - a basic embedding type in which each datapoint is represented by a single mesh in the scene. We'll rely entirely on MeshEmbedding's default behavior to start.

embedding = new EMBED.MeshEmbedding(scene, dataset, {
  mapping: {
    x: 'petal_length',
    y: 'petal_width',
    z: 'sepal_length'
  },
  color: EMBED.utils.categoricalMap('species', {
    'Iris-setosa': 0xff0000,
    'Iris-virginica': 0x00ff00,
    'Iris-versicolor': 0x0000ff
  }),
  ry: Math.PI,
  z: 3
});

The first two arguments to the MeshEmbedding constructor are the scene, to ensure that the meshes created can be added to the environment, and the dataset we just created. The third argument is an options object that specifies how the embedding will render the data. In this example, we use only a few of the possible options.

  • The mapping tells the embedding which attributes in the dataset to use for the x, y, and z position of the meshes.
  • The color specifies the material color to use. In this case, we use a categoricalMap utility function that maps from attribute values (in this case, the possible Iris species) to colors.
  • The ry and z options specify the y-rotation and z position of the embedding object, respectively. These are a simple, if slightly inelegant, way to ensure that the datapoint meshes are visible from the initial camera position and orientation.
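To make the color option concrete, here is a conceptual sketch of what a categorical map does. This is an illustration, not the library's actual implementation: it pairs an attribute name with a lookup table and returns a function from a datapoint to a value.

```javascript
// Conceptual sketch of a categorical map (not Embedding.js internals):
// given an attribute name and a value-to-color table, return a function
// that looks up the color for any datapoint.
function categoricalMap(attribute, table) {
  return function (datapoint) {
    return table[datapoint[attribute]];
  };
}

// Map each Iris species to a hex color, as in the options above.
const colorFor = categoricalMap('species', {
  'Iris-setosa': 0xff0000,
  'Iris-virginica': 0x00ff00,
  'Iris-versicolor': 0x0000ff
});

colorFor({ species: 'Iris-virginica' }); // → 0x00ff00
```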

After creating the embedding, we register it, which ensures that it will be updated properly to reflect changes in the underlying dataset, input actions, and so on. Finally, we start the animation loop with a call to startAnimation().
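The end of the callback might then look like the sketch below. The call to startAnimation() is named in this tutorial, while EMBED.register is an assumed name for the registration step; the EMBED object is stubbed out here so the snippet runs on its own.

```javascript
// Minimal stub of the EMBED convenience layer, so this sketch is
// self-contained; the real objects come from the boilerplate scripts.
const EMBED = {
  registered: [],
  register(embedding) { this.registered.push(embedding); },
  startAnimation() { this.animating = true; }
};

const embedding = {};       // stands in for the MeshEmbedding created above

EMBED.register(embedding);  // keep the embedding updated as the dataset changes
EMBED.startAnimation();     // start the render loop
```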

Once you have filled in this code, you can view the resulting environment by serving it locally (for example using http-server) and then opening the page in your web browser. The resulting behavior will depend on the hardware and browser you use:

  • On any standard desktop browser with WebGL support, such as Chrome, Firefox, or Safari, on either Windows or macOS, you will see a canvas that takes up the entire browser window. You will be able to navigate with the mouse and the keyboard arrow keys.
  • On a mobile browser on either Android or iOS, you should be able to direct the camera orientation by moving the phone - a so-called magic window.
  • On a WebVR-capable browser, you should be able to enter VR mode by clicking on the VR button in the bottom-right. At this point, you should be able to put on your HMD and look around the environment.
  • On a mobile device that supports Cardboard or Daydream, you will be able to enter a WebVR mode by touching the VR button in the bottom-right (support coming soon).



Join the Gitter channel at https://gitter.im/embedding for help and to find ways to contribute.