Coding

Steam VR: Assigning inputs to OpenVR controllers in Unity

Creating spheres by pressing the trigger button.



UNITY 2018.3.0f2 – OPEN VR INPUT

Assigning input to OpenVR controllers

Since I started dabbling in VR development I’ve been struggling with the apparently simple operation of assigning actions to VR touch controllers. What is a simple scripting operation for keyboard and mouse becomes an esoteric guessing game when applied to VR controllers, one that involves Steam VR accounts, binding profiles and the creation of action sets.

The philosophy behind this convoluted process is to create a product-agnostic input system, but I’m still baffled as to why I can’t just press a button and assign the script I want. To complicate matters, because of the shifting sands of VR development, any software update can introduce minor changes that break scripts and leave trusty tutorials out of date.

Because of this I decided to make this guide on assigning OpenVR inputs for the latest Unity version (2018.3). This is not intended as a tutorial (I don’t know if I’ll be able to make it work) but as a guide and journal of my process.

Open VR Inputs for Unity 2018.3

  • Downloaded SteamVR v2.2.0 assets from the github page.

    • Installing the package: Assets – Import Package – Custom Package – (steamvr2.2 folder)
  • Setup the scene: Player object, Input and Teleport System

    • Create a plane object.

    • Add a Steam VR Player gameobject (type “player” on the Project tab search bar to find it). Place it anywhere on the scene.

    • Disable the “Main Camera”

    • Activate the Input System: Window – SteamVR Input – a prompt will show up: answer “yes”.

      • The SteamVR Input window is where you create actions that will later be assigned to buttons on a different screen (I’ll check this later). For now press “Save and generate”. This will activate the SteamVR sample commands to get you started. Close the window.
  • Add Teleport system:

    • Duplicate the Plane, rename it (e.g. “TeleportPlane”) and move it up on the Y axis just a little bit (e.g. 0.05) – this plane will become the teleport area.

    • Add the SteamVR “Teleporting” game object to the scene – this will control the teleporting action.

    • Go to the copied plane (“TeleportPlane”) and “Add Component” – TeleportArea. Now the plane will be transparent and will limit the teleport area.

    • Press play and test the scene on your headset.

    • ! (For some bizarre reason, the headset wasn’t displaying anything. Again, I spent time looking for errors until I pulled the classic “turn the PC off and on again” and it worked!)

  • Steam VR key binding – I want to create a sphere at my hand’s position when I press the trigger button.

    • Window – SteamVR Input

    • Create a new input. I named mine “DrawTrigger” and set it to “Boolean”. This means it can only be “on” or “off”, true/false. If I wanted to use the analog trigger in order to detect how much pressure I’m applying, I would have to choose “Vector1”. “Vector2” is for the analog stick’s 2D position and “Vector3” is for the controller gyroscope…I think.
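As a sketch of the difference, here is how reading a “Vector1” action might look in a script. This is my assumption based on the SteamVR plugin’s SteamVR_Action_Single type, and it only runs inside Unity with the plugin installed, so treat it as illustrative:

```csharp
// Hypothetical sketch: reading an analog trigger through a Vector1 action.
// Assumes an action of type SteamVR_Action_Single was created and bound.
using UnityEngine;
using Valve.VR;

public class TriggerPressure : MonoBehaviour
{
    public SteamVR_Action_Single squeeze;

    private void Update()
    {
        // GetAxis returns a float from 0.0 (released) to 1.0 (fully pressed)
        float pressure = squeeze.GetAxis(SteamVR_Input_Sources.LeftHand);
        Debug.Log("Trigger pressure: " + pressure);
    }
}
```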

    • Save and Generate

    • Go back to the SteamVR Input

    • Click “Open binding UI”. This will open a browser window with your steam account and the binding profile for this specific Unity project.

      • Press “edit” on your browser: It will open the “public binding for oculus touch in (unity project name)”.

      • Here you can assign SteamVR Inputs to specific buttons. It also has several tabs with different button binding collections, like “platformer”, “buggy”…I’ll stay on the “default” tab, because this is where I created my action in Unity.

      • On “Trigger” press the “+” icon to create a new button assignment. Choose “Button”

      • The new “Button” has two modes: “Click” and “Touch”. I want “Click”

      • Click “None” next to “Click” and choose the action you just created (“drawtrigger” in my case).

      • Save by clicking “Save Personal Binding” at the bottom of the window.

      • Go back to Unity.

  • Code Key Command – With the button assigned, it is time to code what you want it to do. UNITY VERSION ALERT! OpenVR is always evolving, which means the code seems to change all the time. Most tutorials I looked at didn’t work on Unity 2018.3. That’s the main reason I’m writing this guide for myself.

    • Create Script – Create your new script. Ideally make a folder for it inside “Assets”; I created a “_Script” folder. I named my script “Draw”. Recapitulating: it will create a sphere at my hand’s position. Open the script in your code editor.

    • !!!!1 – Add the Valve VR libraries. Before coding you have to add the libraries:

When you open the script, the library area at the top should look like this:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

Add the following:

using Valve.VR;
using Valve.VR.InteractionSystem;
  • !!!!2 – Add Namespace. I banged my head on this one for a while.
    The `public class Draw : MonoBehaviour` must be inside the brackets of a namespace `namespace Valve.VR.InteractionSystem.Sample`, like this:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using Valve.VR;
using Valve.VR.InteractionSystem;

namespace Valve.VR.InteractionSystem.Sample
{
    public class Draw : MonoBehaviour
    {
    }
}
  • Now the script is ready for you to code; otherwise it wouldn’t recognize any of the SteamVR inputs.

  • Calling the buttons and actions. Creating the Steam VR variables

  • Inside `public class Draw : MonoBehaviour` add

namespace Valve.VR.InteractionSystem.Sample
{
    public class Draw : MonoBehaviour
    {
        //Create variables for the SteamVR actions
        public SteamVR_Action_Boolean triggerPress;

        //Rigidbody object to be created.
        public Rigidbody paintObject;

        private void Update()
        {
            //When the left hand trigger button is pressed down, create an object
            //at the same position and rotation as the current parent object.
            if (triggerPress.GetStateDown(SteamVR_Input_Sources.LeftHand))
            {
                Rigidbody paintObjectClone = (Rigidbody)Instantiate(paintObject, transform.position, transform.rotation);
            }
        }
    }
}
  • Assigning objects to the variables on the Unity Inspector screen

The “public” variables declared at the beginning of the script must be assigned “by hand” in the Unity Inspector. This allows the same script to be used with different objects in different contexts.

    • Create a “Sphere” 3D object and make it a prefab by dragging and dropping it inside the folder you assigned for prefabs (I named mine “_Prefab”). The sphere will be the “paint” created by the “Draw” script when I press the trigger button.
  • Add the “Draw” script to the object you want to control it – in this case the LeftHand object under SteamVRObjects.

  • On the “Draw” script component inside the LeftHand object, assign the “DrawTrigger” SteamVR action to “Trigger Press” and the sphere prefab to “Paint Object” – as shown in the animation below:

Assigning the variable objects to the script on the Inspector window


RW E-Text - Dark Souls, Lonely Hearts - FINALS (update)

 

 

Development Diary

For my finals I will continue my previous exploration of computer-generated poetry using a database of words from the game Dark Souls.

Dark Souls is the story of a lonely warrior braving a strange and dreamlike world filled with monsters. It is known for being a very difficult game, but also for its cryptic storyline and intricate, meticulously crafted level design. It emanates a constant sense of dread and wonder as you roam through dark corridors and spiraling towers, dying over and over again as if in a curse, slowly progressing through trial and error. While playing as its silent protagonist for many hours I realized that I was constantly filling this silence with strange recurring inner monologues. Most of it was related to the repeating game mechanics ("dodge; jump; parry; drink; flee"), but not only that. As if in a shallow trance, I was combining small snippets of dialogue and thoughts related to the emotions and exchanges in the game. I would think about the empty windows, the constant dread, the ever repeating loop of dying, fighting and dying again, roaming foggy lands and listening to non-existent dialogue from its monsters. Often taunting, but mostly suffering and longing.

When learning how to use Python and Tracery to create randomly generated text, I used the Dark Souls repository of words just because it seemed like the most interesting one and I imagined it would be fun to play with. But as I started to add elements of the game world into love stories I was reminded of the voices in my head when I played videogames, especially ambiguous ones like Dark Souls. That strange combination of words, phrases and game elements in my Tracery exercise felt a lot like the same process that went through my head, and showed how powerful it is when the player transforms the game experience into something of their own. And what is poetry's function if not to blur the literary narrative with the reader's innermost thoughts?

Only when a loyalty has his or her left arm accidentally kissed by a demon, will a sellsword truly believe in love.
Only when a wings has his or her thumb sheepishly touched by a critical foe, will a princess truly believe in love.
Only when an ash has his or her core violently caressed by a friend, will a beast truly believe in love.
Only when a hint has his or her core jubilantly touched by a friend, will a fatty truly believe in love.
....

Expanding the original idea

Updating the original code - saving expansions

Lists and basic rules are working properly, and I added some new lists. Now I need to create more rules in order to generate an overall arc for the final piece. This means creating a sense of beginning and end for the poem. I want to start by saving the objects generated so they can remain part of the generated narrative.

Kate Compton's Hero example 1 and Her Crystal Palace Tracery Tutorial

rules = {
    "origin": ["#[lover:#creature#][wise:#creature#][muse:#creature#]story#"],
    "story": ["One evening in #location#, a #mood# #lover# met #wise.a# who asked: Do you believe that #muse.a# can love #lover.a#? The #wise# looked at the #lover# with #mood.a# #bodyPart# and touched the #lover#'s #bodyPart# while saying:Only when #creature.a# has his or her #bodyPart# #adverb# #verb# by #creature.a#, will #muse.a# truly believe in love."],
    "noun": ["#creature#", "#bodyPart#", "#concept#", "#object#"],
    "verb": verbs,
    "location": locations,
    "object": objects,
    "creature": creatures,
    "bodyPart": bodyParts,
    "action": actions,
    "concept": concepts,
    "conjunction": conjunctions,
    "adverb": adverbs,
    "mood": moods
}
    
grammar = tracery.Grammar(rules)
grammar.add_modifiers(base_english)
for i in range(2): 
    print(grammar.flatten("#origin#"))

The result: 

One evening in Church of Yorshka, a offended queen met a sage who asked: Do you believe that a poor soul can love a queen? The sage looked at the queen with an elated stomach and touched the queen's smallfinger while saying:Only when a dragon has his or her left side yieldingly hugged by a wretch, will a poor soul truly believe in love.
One evening in Cathedral of the Deep, a earnest oddball met a pyromancer who asked: Do you believe that an oddball can love an oddball? The pyromancer looked at the oddball with a chipper tail and touched the oddball's rear while saying:Only when a pilgrim has his or her right side weakly touched by a cleric, will an oddball truly believe in love.
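The saving trick can be illustrated without Tracery at all. Here is a minimal sketch in plain Python (my own illustration, not Kate Compton's code): pick the creature once and reuse it everywhere, which is what the `#[lover:#creature#]story#` action does.

```python
import random

creatures = ["dragon", "sellsword", "pilgrim"]

def story(rng):
    # Save the expansion: choose once, reuse everywhere,
    # analogous to Tracery's #[lover:#creature#]story# action.
    lover = rng.choice(creatures)
    return "One evening a %s met a sage. Only the %s truly believed in love." % (lover, lover)

print(story(random.Random(0)))
```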

Prose and poetry - making it more complex

Now that I can save the results I could keep adding more and more elements to the #story#, but I'd like to try doing something more than Tracery-generated fan fiction. How could I make prose more like poetry? I'd like to add more flow and rhythm to it. The narrative could start like prose, but evolve into a more recursive structure as the "lover" and "wise" characters delve deeper into what love means in the strange world of Dark Souls. For this I'll have to go beyond the safe waters of Tracery.

Rhyming with word vectors

After looking at most of the class notes and Allison Parrish's repository, I believe that only word vectors will be able to give me what I want.

https://github.com/aparrish/phonetic-similarity-vectors/blob/master/some-applications.ipynb

Spacy, word vectors and tracery 

After studying the word vectors and spaCy notes by Allison Parrish, I managed to mix them into my previous Tracery Dark Souls code with some very satisfying results.

Word vectors are a system for finding similarities between data in several dimensions. In principle they work like any vector: X and Y positions on a 2D plane. In the first example, different animals are organized along X and Y axes of cuteness and size. By analyzing the distance between them it is possible to get some interesting insights about the relationship of, say, kittens and tarantulas.
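As a toy version of that cuteness/size idea (the coordinates are made up by me for illustration):

```python
import math

# Hypothetical 2D coordinates on (cuteness, size) axes
animals = {"kitten": (9.0, 1.0), "puppy": (8.5, 2.0), "tarantula": (2.0, 1.0)}

def distance(a, b):
    # Euclidean distance between two 2D points
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

print(distance(animals["kitten"], animals["puppy"]))      # small: similar animals
print(distance(animals["kitten"], animals["tarantula"]))  # large: very different
```

Real word vectors work the same way, just with hundreds of dimensions instead of two.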

spaCy is, among other things, a natural language processing system that can be used to parse huge amounts of data and extract all kinds of information from it. It is especially useful for text analysis.

In my case I managed to use word vectors and Tracery to go beyond the initial Dark Souls JSON library I've been using inside my Tracery prose.

After creating all the different arithmetic functions for vector comparison (addition, subtraction, mean), I installed spaCy and fed it the "Frankenstein" text from the example notebook.

With these codes in place I just applied them inside the Tracery rule set.  

A normal Tracery rule set looks like this:

rules = {
    "origin":["#[lover:#creature#][wise:#creature#][muse:#creature#]story#"],
    
"story": ["#tracery_halfsies#.......One evening in #location#, a #mood# #lover# met #wise.a# who asked: Do you believe that #muse.a# can love #lover.a#?\nThe #wise# looked at the #lover# with #mood.a# #bodyPart# and touched the #lover#'s #bodyPart#  ( EDITED).............."],   
    "noun": [ "#creature#", "#bodyPart#", "#concept#", "#object#"],
    
"verb": verbs,
"location" : locations,
"object": objects,
"creature": creatures,

A spaCy-powered word vector operation for finding words located halfway between two other words looks like this:

spacy_closest(tokens, meanv([vec("truth"), vec("war")]))

['war',
 'truth',
 'nothing',
 'Nothing',
 'Fear',
 'fear',
 'strife',
 'conflict',
 'wars',
 'humanity']

Ominous, right? And cool!
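A toy version of the same "halfway word" operation, with tiny made-up 2D vectors standing in for spaCy's 300-dimensional ones (my own sketch, not Allison Parrish's code):

```python
import math

# Made-up 2D vectors standing in for real 300-dimensional word vectors
vocab = {
    "war":    [1.0, 0.1],
    "truth":  [0.1, 1.0],
    "strife": [0.9, 0.3],
    "fear":   [0.6, 0.6],
    "love":   [0.1, 0.9],
}

def vec(word):
    return vocab[word]

def meanv(vectors):
    # average each dimension: the point halfway between the inputs
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def closest(target, n=3):
    # vocabulary words sorted by Euclidean distance to the target vector
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(vocab, key=lambda w: dist(vec(w), target))[:n]

print(closest(meanv([vec("truth"), vec("war")])))  # ['fear', 'strife', 'love']
```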

So I added this spaCy code as one of the rules, but instead of comparing two specific words it compares words generated by Tracery for that specific iteration of the rule set. If it chooses the words "Angel" and "Miscreant", the spaCy rule will compare these two and give me a new word not included in the original Dark Souls JSON file.

Here it is comparing #lover# and #muse# (which were generated from the #creature# library and saved to be used throughout the whole text):

"tracery_halfsies": spacy_closest(tokens, meanv([vec("#lover#"), vec("#muse#")]))

Now I just have to add #tracery_halfsies# inside the bigger story-generating rule set – in my case "story":

rules = {
    "origin":["#[lover:#creature#][wise:#creature#][muse:#creature#]story#"],
    
"story": ["#tracery_halfsies#.......One evening in #location#, a #mood# #lover# met #wise.a# who asked: Do you believe that #muse.a# can love #lover.a#?\nThe #wise# looked at the #lover# with #mood.a# #bodyPart# and touched the #lover#'s #bodyPart# while saying:\nOnly when #creature.a# has his or her #bodyPart# #( EDITED )..............f #techniques# was of no use to defeat this #creature# that grows inside."],   
    "noun": [ "#creature#", "#bodyPart#", "#concept#", "#object#"],

Polishing the Tracery/WordVector integration and beyond.

Now that I've got the integration working and added the word vector code to my original Dark Souls final notebook, it is time to be more creative with it. I want to do more than adding new words, and since word vectors are already in place, I'll try going back to the word vector notebook and adding some systems for sentence comparison. I might even mix what I've got with Pablo Neruda's poem on lost love, "I do not love you except because I love you". Sorry, Neruda!

Managed to add the sentence comparison system to the Tracery code. The next step is to make it compare with more texts. I haven't touched the Neruda poem yet though. But now...

Using Pronunciations and Annoy 

https://gist.github.com/aparrish/4f4f35a046ac1d954a02fc1ffbed9dcb

Installed Pronouncing, Annoy and WordFilter. The goal is to take the final part of the Tracery code and gradually make it decompose like the example shown on Allison's "some applications" page. If possible, I will make the words rhyme as they decompose with the results from the Tracery rules.

More specifically, I want to stretch the iconic Dark Souls sentence "You Died" as far as possible, making the Tracery prose fade away into a string of repeating words.

Meanwhile, this is the latest result I'm getting from the code

"Precipice.......One evening in Road of Sacrifices , a fatigued tough enemy met a moneybags who asked: Do you believe that a monster can love a tough enemy? The moneybags looked at the tough enemy with a refreshed smallfinger and touched the tough enemy's ringfinger while saying:\n
Only when a sellsword has his or her mount innocently touched by a skeleton, will a monster truly believe in love.
The tough enemy looked the reflection in a mirror and murmured and saw a precipice and a perceptions and a perceptions...\n
Not a single monster, in his/her mind only images of warned inhabit and frogs and hare.\n tough enemy roamed until arriving at a dark spot in Irithyll Dungeon.
His/her mastery of prudence was of no use to defeat this duo that grows inside.precipice is what makes tough enemy parrying through Archdragon Peak and moonshine
As tough enemy dreams of monster - a sound like ore, a sound like lion echoes. Then nothing else as tough enemy is killed once more by a viciously beating to a pulp from the relentless undying player.You Died....tough enemy",
 

Mashups - Data and Three JS

Final project - Changing the original proposal

(4/24/18 update) - GitHub page for this project. My original idea for the API Mashups class was to create a 3D representation of web data that could be experienced in VR, but since then I've been struggling with the web development assignments, especially with coding in JavaScript. Last class I made the proposal of scaling the project down and just making a 3D version of the FlickerTimes tutorial. When reminded by the teacher, I accepted that it seemed like an uninspired solution to my coding woes.

Three.js as inspiration

Three.js is a JavaScript library that allows for rich 3D content on the web, using only the browser. The examples at threejs.org are truly inspiring. Since the beginning of this class (this semester, even) I focused mostly on coding, but the result was that I didn't do anything visually exciting. Being more of a visual person, I decided to push myself into three.js and try another idea that has been in my head for a while.

New Idea - 3D Data experience

My goal now is to create a 3D data visualization of NYC's water consumption using swimming pools. I believe 3D visualization of data, especially through VR, is a great tool for seeing data from a more visceral point of view, instead of the detached analytical experience of spreadsheets. I'll add some sort of interaction as soon as I get this thing working. Also, swimming pools are big cubes, and I can make cubes in three.js. At least that's the idea for now. If the first prototype works, I'll think about another API interaction.

Learning ThreeJs

Managed to make the first, very basic tutorial run through a local Node server. The server is run by localserver.js and is kept alive by the "forever" npm package.

tutorial_cube.gif

Now I'm looking at Threejs examples using RawGit to see in action what's in their GitHub repository

Tried to use a WebGL detector, but it broke the code.

Running using different files

Instead of writing a long HTML file with all my style and JavaScript inside, I'm creating separate ones. It worked, but now I have two instances of the canvas and two cubes running on top of each other!

cubeNplane.JPG

Found out I had copied code by mistake inside main.js. Once fixed, everything runs OK.

Kept following the instructions and now I have a cube AND a plane.

After going through the Lynda workshop I managed to create hierarchies, named objects and separate functions.

JavaScript code - main.js

//Lynda - Create an "init" function to keep it organized.
function init(){

  //Create Scene, Camera and Renderer
  //using "Perspective" camera (one of several types of camera)
  var scene = new THREE.Scene();

  //calling the getBox function
  var box = getBox(1, 1, 1);
  //calling plane
  var plane = getPlane(4);

  //NAME OBJECTS - allows calling specific objects with "get" commands.
  plane.name = 'plane-1';
  //ROTATE PLANE - can't use "plane.rotation.x = 90;" because THREE.js uses radians instead of degrees. For this we use the "Math" object.
  plane.rotation.x = Math.PI/2;
  //BOX POSITION: makes its position half its height, so it stays on the grid no matter what size.
  box.position.y = box.geometry.parameters.height/2;
  plane.position.y = 1;
  //after calling the function you have to add the object to the scene. PARENTING: "box" becomes a child of "plane" (or any other object, like "scene" for example).
  plane.add(box);
  scene.add(plane);

  //camera parameters: field of view in degrees, aspect ratio, near and far clipping planes.
  var camera = new THREE.PerspectiveCamera(45, window.innerWidth/window.innerHeight, 1, 1000);
  //move the camera by 5, so it is in a different position than the cube (BoxGeometry)
  camera.position.x = 1;
  camera.position.y = 2;
  camera.position.z = 5;
  camera.lookAt(new THREE.Vector3(0, 0, 0));

  //"Where the magic happens".
  //For lower resolution: setSize(window.innerWidth/2, window.innerHeight/2, false)
  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.getElementById('webgl').appendChild(renderer.domElement);
  //instead of just "renderer.render(scene, camera);", call the "update" function, adding "renderer" to its parameters.
  update(renderer, scene, camera);

  //let's check the parameters in the browser console by typing "scene".
  return scene;
}

function getBox(w, h, d){
  // The Object: Cube (BoxGeometry)
  var geometry = new THREE.BoxGeometry(w, h, d);
  //"to keep it simple we are just using the color attribute"
  var material = new THREE.MeshBasicMaterial({color: 0x00ff00});
  var mesh = new THREE.Mesh(geometry, material);

  //forgot to add "return mesh". I was just declaring the var, but not returning it. Thought that "add.box" would be enough.
  return mesh;
}

//LOOP RENDER AND ANIMATION
function update(renderer, scene, camera){
  renderer.render(scene, camera);
  //"var plane" will find the "plane-1" plane object.
  var plane = scene.getObjectByName('plane-1');
  plane.rotation.y += 0.001;
  plane.rotation.z += 0.001;
  //call requestAnimationFrame and run the "update" function itself, making a loop. requestAnimationFrame also optimizes the render for animation at 60 fps.
  requestAnimationFrame(function(){
    update(renderer, scene, camera);
  });
}
//PLANE GEOMETRY: Copied getBox and changed the parameters
function getPlane(size){
  // The Object: Plane (PlaneGeometry)
  var geometry = new THREE.PlaneGeometry(size, size);
  //"to keep it simple we are just using the color attribute"
  //note: the color must be a hex number like 0x006800, not 006800
  var material = new THREE.MeshBasicMaterial({
    color: 0x006800,
    side: THREE.DoubleSide
  });
  var mesh = new THREE.Mesh(geometry, material);

  return mesh;
}


//added "var scene = init();" so I can see the parameters in the browser inspector
var scene = init();
Forever Spinning. Two objects being constantly redrawn by the "update" function. 60 times a second to be more exact.

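One detail from the code above worth pulling out: Three.js rotation properties expect radians, not degrees. A tiny helper (my own, not from the tutorial) does the conversion:

```javascript
// Three.js rotations are in radians, so 90 degrees must become PI/2.
function degToRad(degrees) {
  return degrees * Math.PI / 180;
}

console.log(degToRad(90)); // roughly 1.5708, i.e. Math.PI / 2
```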

Day 2 - Let there be light

Single point light had to be moved away from its initial position, so it wouldn't be obscured by the cube.

Continued the Lynda.com three.js tutorial. As I expected, working with code through visuals has been more stimulating for me than pushing databases around. I'm also aware that following tutorials is much less frustrating than real-life coding, since improvements are obvious and frequent.

Changed materials from "basic" to "phong", which allows light sources to interact with objects. Created a single point light and spent 15 minutes trying to figure out why it wasn't working, only to find out that I had misspelled a variable name (PointLight instead of the correct pointLight). Disabled fog while testing the light.

dat.gui.cube.gif

dat.gui - adding interaction

dat.gui is a JavaScript library that creates user interfaces for changing variables: buttons, sliders, inputs... To use it, just download it, add it to the library folder and call it from your HTML and JavaScript files. After creating the variables that will be controlled by dat.gui, I tested controlling the pointLight intensity. Now I can control the intensity through a slider inside the browser.

gui.add(pointLight, 'intensity', 0, 10);

Adding custom 3D models - FBX

Kept following the Lynda tutorial, and I feel it really helped me grasp the use of functions. After creating a group of cubes and trying different light sources, I wanted to see a 3D model I created displayed in the browser.
I downloaded FBXLoader.js from the threejs.org webpage. It allows the browser to read the FBX format, which is commonly used in the Unity game development platform. After setting up the library, I created the "object" function that loads my model into the scene.

loader.load('./models/RoboHausAscii.fbx', function (object) {
 object.rotation.x= -1* Math.PI/2;

  object.traverse(function(child){
    if (child.isMesh){
      child.castShadow = true;
      child.receiveShadow = true;
    }
  });
  scene.add(object);
});

 

But nothing showed up in Chrome, and the console displayed a message saying I was not allowed to load files that were not scripts or HTML! After some quick googling I read that Firefox had none of these limitations, and it worked. Magic: my 3D model is running inside a browser.

robohaus_sm.gif

Making it my own

Today I reloaded my custom model in Chrome, to see exactly what error message I was getting, and for some strange reason it just loaded my FBX model, just like in Firefox the day before. Go figure...

The goal now is to start using what I learned from the tutorials to sketch my final project. I will experiment with some more interactions, but the main challenge will be to combine three.js with some other API, allowing external data to interact with the 3D models. This will require different JavaScript files to talk to one another. I could write everything inside a single main.js file, but I already know that would make everything quite messy.

But first... Gotta go back to the Lynda.com videos and review some basic tutorials on functions and objects

 

I think this time I finally understood how functions and objects work in JavaScript.


Connecting with the API

The goal now is to make the three.js file work together with the NYC Data API. I have downloaded a JSON file with all the water usage data, but the goal of the project is for it to be updated online. I didn't say "in real time" because the water consumption data is only updated once a year. That sort of defeats the purpose of having a realtime API running instead of just reading the static JSON file, but if I can get it to work I could make other visualizations using the same basic structure.
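To sketch the idea, here is how the downloaded rows might be turned into a number of pool-cubes. The field name and the sample row are my assumptions about the dataset's schema, and the pool volume is an approximation:

```javascript
// Roughly 660,000 US gallons in an Olympic-size swimming pool (approximation).
var OLYMPIC_POOL_GALLONS = 660000;

// Hypothetical field name; the real NYC Open Data schema may differ.
function rowsToCubeCount(rows, year) {
  var row = rows.filter(function (r) { return r.year === year; })[0];
  if (!row) return 0;
  var gallonsPerDay = Number(row.nyc_consumption_million_gallons_per_day) * 1e6;
  return Math.round(gallonsPerDay / OLYMPIC_POOL_GALLONS);
}

// Made-up sample row for illustration
var sample = [{ year: "2017", nyc_consumption_million_gallons_per_day: "990" }];
console.log(rowsToCubeCount(sample, "2017")); // 1500 pool-cubes for one day
```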

Now I have a grid of 3D cubes, lights and a free roaming camera. The next steps are:

  • Make the number of cubes on screen relative to the water consumption data from the NYC database API, or any dataset for now.
  • Create the option of seeing the dataset through different perspectives (per capita, yearly, individual). This changes the number of cubes and the camera position.
  • Animate the camera to exhibit different amounts of cubes/swimming pools and perspectives.
  • Create the website and CSS style of the whole experience.
  • Change the cube to a custom-made swimming pool model.
  • Link to other websites about the conscious use of water.
  • (extra bonus) User input for custom data visualization. Use watercalculator.org as a reference.

5 hours later...

...and I'm still on the first bullet of the list above! I'm having issues getting a value out of the function; more specifically, how to change the 'amount' value of the 'getBoxGrid' function.

 

function getBoxGrid(amount, separationMultiplier) {
  //GROUP = objects container
  var group = new THREE.Group();
  this.amount = amount;
  for (var i = 0; i < this.amount; i++) {
    var obj = getBox(1, 1, 1);
    obj.position.x = i * separationMultiplier;
    obj.position.y = obj.geometry.parameters.height/2;
    group.add(obj);
    for (var j = 1; j < amount; j++) {
      //?? why do I need to create the "var obj" again?
      var obj = getBox(1, 1, 1);
      obj.position.x = i * separationMultiplier;
      obj.position.y = obj.geometry.parameters.height/2;
      obj.position.z = j * separationMultiplier;
      group.add(obj);
    }
  }

  return group;
}

 

I can control the value of 'amount' on:

var boxGrid =  getBoxGrid(4, 1.5);

But if I try to call 'amount' anywhere else I get :

"Object "[object Object]" has no property "amount"

Tried creating a global variable called 'amount', but it still didn't work. It doesn't override the value passed in var boxGrid = getBoxGrid(amount, ...);

9 hours later....

Still can't change the 'amount' parameter. With some help I managed to write a new function that removes the former boxGrid and adds a new one with one extra 'amount' every 3 seconds, but I couldn't run the function any other way.

Spinning out of control. Looks cool though.


Created a button that was supposed to run the "addBoxGrid" function instead of using the 3-second interval, but it says there is no "addBoxGrid" function! Or I try to add "amount = amount + 1", but it also says there is no amount variable, even when I create a global 'amount' variable. Even just running the "addBoxGrid" function gives no result. The button works if I try to change other parameters, but never 'amount'. I have no idea what's going on... If I can't create a single new box, there is no way I can control the number of boxes using the API.
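One way out of this (a sketch of my assumption about the fix, with a plain object standing in for THREE.Group so it runs outside the browser): instead of relying on this.amount, return an object that carries its own amount and a rebuild method, so a button handler can change the value and rebuild the grid.

```javascript
// Sketch: the grid remembers its own `amount` and can rebuild itself.
// A plain array stands in for THREE.Group / getBox here.
function makeBoxGrid(amount, separationMultiplier) {
  var grid = {
    amount: amount,
    boxes: [],
    rebuild: function () {
      this.boxes = [];
      for (var i = 0; i < this.amount; i++) {
        for (var j = 0; j < this.amount; j++) {
          this.boxes.push({ x: i * separationMultiplier, z: j * separationMultiplier });
        }
      }
    }
  };
  grid.rebuild();
  return grid;
}

var grid = makeBoxGrid(2, 1.5);
console.log(grid.boxes.length); // 4

// A button handler could now do this:
grid.amount = 3;
grid.rebuild();
console.log(grid.boxes.length); // 9
```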

"Final" Presentation

The quotes on "final" are the best result I could get. After not being able to directly edit the 3D objects with the data, I had to accept that this was the best I could deliver for now. Something strange happened after showing this cheeky post and my floating cubes in my last API Mashups class: an uplifting mood and the realization that I've done something neat... flawed, but neat. Not only did I display cool 3D objects in a browser, I also managed to make the API work, or at least connect to the server and get the data. I have the pieces and basic knowledge to make a fully functional data visualization website, though I still have a hard time debugging and fixing it.

The goal now is to come back to this post in the future and make the quotes on "final" live up to their purpose, signaling not the end, but the continuation of my creative coding journey.

Link to the Git repository for this project here