Thursday, December 31, 2020

Jupyter Notebook and JupyterLab

Jupyter Notebook is a web-based interactive computational environment for creating Jupyter notebook documents. It supports several languages like Python (IPython), Julia, R, etc., and is mostly used for data analysis, data visualization, and other interactive, exploratory computing. For beginners in data science, Jupyter Notebook is usually preferred; it consists only of a file browser and a (notebook) editor view, which is easier to use. When you get familiar with it and need more features (which we will talk about later), you can switch to JupyterLab.


JupyterLab is the next-generation user interface, including notebooks. It has a modular structure, where you can open several notebooks or files (e.g., HTML, text, Markdown, etc.) as tabs in the same window. It offers more of an IDE-like experience. JupyterLab uses the same notebook server and file format as the classic Jupyter Notebook, so it is fully compatible with existing notebooks and kernels. The classic Notebook and JupyterLab can run side by side on the same computer, and one can easily switch between the two interfaces. The interfaces of Lab and Notebook are similar, except for the file-system panel on the left side in JupyterLab.


Some differences are


JupyterLab runs in a single browser tab, with sub-tabs displayed within that one tab, while Jupyter Notebook opens new notebooks in new browser tabs. So JupyterLab feels more like an IDE; in Notebook, each notebook feels more standalone, since all files are opened as separate tabs in your web browser. Which you prefer is up to you.

There’s a big difference when you open a CSV file in either of them, and CSV is something you will see a lot while doing data science. Let me show you the difference when you open a CSV file in Notebook and in Lab.


In Jupyter Notebook, the CSV file is shown as-is, with the delimiter provided by the creator of the file; let's see what the case is in Lab.


In JupyterLab, when you open a CSV file, it is rendered as a table, much like a spreadsheet (.xlsx) view. JupyterLab also has a delimiter option: usually a CSV file is comma-separated, but even if it is space-separated or tab-separated you can specify that in the delimiter option.


Conclusion

Finally, I would like to say that both Notebook and Lab are very good to work with, user-friendly, and have a great UI. All thanks to the developers of Jupyter! Use whichever one suits your comfort level. Signing off. Happy learning.


References:

https://medium.com/analytics-vidhya/why-switch-to-jupyterlab-from-jupyter-notebook-c6d98362945b

What to Look for in a Node JS framework

Scalability

Node.js web frameworks provide a defined structure for a codebase. In the long run, they decide what characteristics your product will have and how the app will process data and handle computation. You want a framework that isn’t too opinionated — it shouldn’t limit the possible ways of executing the project. If the framework boxes you into one method, it is not good enough.


On the other hand, you want to be able to use packages, Node.js libraries, and reusable code. This is where the ecosystem comes in. You want a framework with an actively contributing community, educational materials, and usage across many industries.


Functionality

If you define quality standards for your Node.js framework selection early on, you’ll have an easier time narrowing down the options.


Support for declarative programming: declarative programming describes the problem solved by a feature and its solution rather than the step-by-step procedure. We prefer frameworks that support declarative metadata describing the parameters and middleware of Node.js handlers.


Cluster management: it’s nice when a framework allows organizing, previewing, editing, and managing clusters, as well as sorting them by their characteristics.


Middleware support: middleware is software that sits between the request and the response and helps developers improve their application’s performance, security, etc. It’s not a framework itself, but it can be integrated into one. Middleware helps to optimize your application's functionality and deliver a better experience (a small Express example follows this list).


Batch support: not all frameworks are equally good at handling multiple background processes simultaneously. We prefer frameworks that let us access the same databases and API caches, regardless of whether they are currently running. This allows us to get many things done as soon as possible.
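To make the middleware point concrete, here is a minimal Express example (Express is one of the frameworks listed below; the logging middleware is just an illustration, not taken from any particular project):

const express = require('express');
const app = express();

// A tiny logging middleware: it runs before every route handler
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next(); // hand control to the next middleware or route
});

app.get('/', (req, res) => res.send('Hello'));

app.listen(3000);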


Best Node.js API frameworks


Express.js

Meteor.js

Koa.js

Sails.js

Nest.js

LoopBack.js

Hapi

Adonis.js

Keystone.js

Total.js



References:

https://medium.com/dailyjs/which-one-is-the-best-node-js-framework-choosing-among-10-tools-87a0e191eefd

A Quick Guide to Designing for Augmented Reality on Mobile

Knowledge Transfer

One of the superpowers of AR is knowledge transfer. If you compare the theory of gravity with black holes, we are arguably more knowledgeable about gravity because we can experience it, as opposed to a black hole, which we can only observe.



Having your users experience rather than observe may sharply increase the chances of them understanding and retaining information. It’s what makes AR such a compelling medium for education and training. Not to mention the ability to be free of any physical limitations and restrictions.


AR also has great potential for marketing since it involves having the user completely immersed in the experience: It is a known metric that full immersion/engagement leads to a higher rate of conversion. A user is more likely to make a purchasing decision once they have tried out the product by themselves.


Affordances and Constraints


Content Types

Language plays a critical role when defining your experience. The following are examples of some of the more popular content types used within AR.


Static: Content that is still and lacks movement and interaction

Animated: Content that moves on a timeline or follows a sequence

3D: Content with width, height and depth or data with XYZ coordinates

Dynamic: Adaptive content that changes with interaction or over time

Procedural: Content generated automatically or algorithmically


These content types are not exclusive and can be combined in many different ways. However, it is essential to understand these formats so the designer can properly articulate what they are trying to do. For example, take a design that requires a vase to reveal a price tag upon clicking: the vase is a dynamic 3D object that exposes a static tag. If the experience then involves clicking on the tag and making a purchase, the tag itself becomes dynamic.


Defining Interactions

When mapping out behaviors and relationships in AR, it is helpful to be specific about where and how to treat the content. Try to be as precise as possible in describing the experience to get alignment amongst stakeholders.


A good rule of thumb is to call out the location (e.g., glass, space, object…), the content type (e.g., static, 3D…) and the state of content (e.g., fixed, locked, flexible…)


STATIC & FIXED ON GLASS

This interaction has a static graphic overlay fixed to the glass (screen) at all times. This design convention is useful for permanent elements that need to be within the user's reach at all times, such as a menu or a return prompt.


STATIC & LOCKED IN SPACE

Although these elements are locked in space, they could have a dynamic feature where they always face the user. This design convention is useful for labels and material that need to accompany an object or marker in space.


DYNAMIC & FLEXIBLE ON GLASS

In this case, static content becomes a dynamic content type. This convention allows users to position assets in custom or specific areas, which is helpful for target-based or drag-and-drop elements.




DYNAMIC 3D & FLEXIBLE IN SPACE

A great way to engage with a 3D model and understand its components. Most commonly used for educational purposes and for understanding the breakdown of an object.


DYNAMIC 3D & PROPORTIONATE IN SPACE

Helpful when allowing a user to see an object in an actual environment with lighting and measurement considerations. Often used in commerce platforms.



References:

https://medium.com/@goatsandbacon/a-quick-guide-to-designing-for-augmented-reality-on-mobile-part-1-c8ecaaf303d5

Step by Step Guide to Creating a Christmas Tree

The basic code looks like this:

https://threejs.org/docs/#manual/en/introduction/Creating-a-scene

This shows a cube that rotates. It illustrates the items below:


- Creating a basic scene

- Creating the perspective Camera 

- Creating a Cube Geometry and mesh 

- Adding the geometry to scene 

- Animating the geometry 


What it lacks are the following:


- Controls 

- Plane which can act as a floor 



First, adding the plane looks like this:



var geometry = new THREE.PlaneGeometry( 1000, 1000, 1, 1 );

var material = new THREE.MeshBasicMaterial( { color: 0x0000ff } );

var floor = new THREE.Mesh( geometry, material );

floor.material.side = THREE.DoubleSide;

floor.rotation.x = -Math.PI / 2; // rotations in three.js are in radians, not degrees

scene.add( floor ); 



Now the plane appears, but unless you are able to orbit around, the plane is not clearly visible.

Adding orbit controls is done using the code below:




const controls = new OrbitControls(camera, renderer.domElement);

    //controls.update() must be called after any manual changes to the camera's transform

    camera.position.set(0, 20, 100);

    controls.update();


And in animate function, need to call the below 


controls.update();
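A minimal animate loop with that call in place might look like the sketch below (assuming the renderer, scene and camera from the basic example):

function animate() {
    requestAnimationFrame(animate);
    controls.update();              // keeps the orbit controls responsive every frame
    renderer.render(scene, camera);
}
animate();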


Also need to ensure that OrbitControls itself is included.
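Since the snippet above calls OrbitControls directly (not THREE.OrbitControls), it assumes the ES-module build; the control can be included with an import like the one below (path as in the three.js examples folder):

import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';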


Below are a few important notes on:


- Coordinate system - what is the centre point in ThreeJS, and what are the X, Y, Z axis orientations

- What plane properties are supplied when initialising

- Where is light placed 



To understand the coordinate system, the link below helped a lot:

https://discoverthreejs.com/book/first-steps/first-scene/



The video below helped to get some idea of the ThreeJS camera:


https://www.youtube.com/watch?v=KyTaxN2XUyQ




- When a mesh is added to the scene, it is by default kept at the origin (0, 0, 0)

- Irrespective of the camera, it stays at that point



References:

https://codepen.io/atouine/pen/JJeqKE

https://discoverthreejs.com/book/first-steps/first-scene/

Environment Mapping in ThreeJS

Environment mapping simulates an object reflecting its surroundings. In its simplest form, environment mapping gives rendered objects a chrome-like appearance.


Environment mapping assumes that an object's environment (that is, everything surrounding it) is infinitely distant from the object and, therefore, can be encoded in an omnidirectional image known as an environment map.


All recent GPUs support a type of texture known as a cube map. A cube map consists of not one, but six square texture images that fit together like the faces of a cube. Together, these six images form an omnidirectional image that we use to encode environment maps. Figure 7-1 shows an example of a cube map that captures an environment consisting of a cloudy sky and foggy mountainous terrain.
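In ThreeJS terms, a cube map environment can be applied roughly as in the sketch below (the six face image names and the textures/cube/ path are placeholders):

const loader = new THREE.CubeTextureLoader();
const envMap = loader.setPath('textures/cube/').load([
    'posx.jpg', 'negx.jpg',
    'posy.jpg', 'negy.jpg',
    'posz.jpg', 'negz.jpg'
]);
scene.background = envMap;   // show the environment behind everything

const material = new THREE.MeshStandardMaterial({
    envMap: envMap,    // reflect the surroundings on the object
    metalness: 1.0,    // fully metallic gives the chrome-like look described above
    roughness: 0.0
});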


References:

http://developer.download.nvidia.com/CgTutorial/cg_tutorial_chapter07.html 

Monday, December 28, 2020

Export statement in Javascript

The export statement is used when creating JavaScript modules to export live bindings to functions, objects, or primitive values from the module so they can be used by other programs with the import statement. Bindings that are exported can still be modified locally; when imported, they can only be read by the importing module, but their value updates whenever it is updated by the exporting module.


Exported modules are in strict mode whether you declare them as such or not. The export statement cannot be used in embedded scripts.


There are two types of exports:


Named Exports (Zero or more exports per module)

Default Exports (One per module)


// Exporting individual features

export let name1, name2, …, nameN; // also var, const

export let name1 = …, name2 = …, …, nameN; // also var, const

export function functionName(){...}

export class ClassName {...}


// Export list

export { name1, name2, …, nameN };


// Renaming exports

export { variable1 as name1, variable2 as name2, …, nameN };


// Exporting destructured assignments with renaming

export const { name1, name2: bar } = o;

// Default exports

export default expression;

export default function (…) { … } // also class, function*

export default function name1(…) { … } // also class, function*

export { name1 as default, … };


// Aggregating modules

export * from …; // does not set the default export

export * as name1 from …; // Draft ECMAScript® 2021

export { name1, name2, …, nameN } from …;

export { import1 as name1, import2 as name2, …, nameN } from …;

export { default } from …;


For example, the code below in Utils.js exports two functions:


export function setAuthTokenFromQueryString() {

    const urlParams = new URLSearchParams(window.location.search)

    const authToken = urlParams.get('token')

    console.log('authToken coming in to the app ',authToken)

    if(authToken){

        window.authToken = authToken

    }

}


export function getAuthToken(){

  return window.authToken

}


Below is how these two can be imported for use:


import {getAuthToken, setAuthTokenFromQueryString} from '../../Utils/Utils'
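Once imported, they can be called directly; for example, in the app's startup code (a hypothetical consumer):

setAuthTokenFromQueryString()        // reads ?token=... from the URL and stores it on window
const authToken = getAuthToken()     // later, anywhere in the app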


references:

https://developer.mozilla.org/en-US/docs/web/javascript/reference/statements/export

Saturday, December 26, 2020

Firebase React Router does not work

Use a rewrite to show the same content for multiple URLs. Rewrites are particularly useful with pattern matching, as you can accept any URL that matches the pattern and let the client-side code decide what to display.


You can also use rewrites to support apps that use HTML5 pushState for navigation. When a browser attempts to open a URL path that matches the specified source or regex URL pattern, the browser will be given the contents of the file at the destination URL instead.



Specify URL rewrites by creating a rewrites attribute that contains an array of objects (called "rewrite rules"). In each rule, specify a URL pattern that, if matched to the request URL path, triggers Hosting to respond as if the service were given the specified destination URL.



Here's the basic structure for a rewrites attribute. This example serves index.html for requests to files or directories that don't exist.


"hosting": {

  // ...


  // Serves index.html for requests to files or directories that do not exist

  "rewrites": [ {

    "source": "**",

    "destination": "/index.html"

  } ]

}



This actually worked amazingly well. Basically, any URL that was not found was taken to index.html, but if a URL existed, it was taken to the actual path.


For example, to direct all requests from the page /bigben on your Hosting site to execute the bigben function:


hosting": {

  // ...


  // Directs all requests from the page `/bigben` to execute the `bigben` function

  "rewrites": [ {

    "source": "/bigben",

    "function": "bigben"

  } ]

}



references:

https://stackoverflow.com/questions/52939427/react-router-doesnt-route-traffic-when-hosted-on-firebase/52939519

Thursday, December 24, 2020

What is RequireJS

RequireJS is a JavaScript file and module loader. It is optimized for in-browser use, but it can be used in other JavaScript environments, like Rhino and Node. Using a modular script loader like RequireJS will improve the speed and quality of your code.


IE 6+ .......... compatible ✔

Firefox 2+ ..... compatible ✔

Safari 3.2+ .... compatible ✔

Chrome 3+ ...... compatible ✔

Opera 10+ ...... compatible ✔


This setup assumes you keep all your JavaScript files in a "scripts" directory in your project. For example, if you have a project that has a project.html page, with some scripts, the directory layout might look like so:


Add require.js to the scripts directory, so it looks like so:


project-directory/
    project.html
    scripts/
        main.js
        require.js
        helper/
            util.js



To take full advantage of the optimization tool, it is suggested that you keep all inline script out of the HTML, and only reference require.js with a requirejs call like so to load your script:


<!DOCTYPE html>

<html>

    <head>

        <title>My Sample Project</title>

        <!-- data-main attribute tells require.js to load

             scripts/main.js after require.js loads. -->

        <script data-main="scripts/main" src="scripts/require.js"></script>

    </head>

    <body>

        <h1>My Sample Project</h1>

    </body>

</html>


You could also place the script tag at the end of the <body> section if you do not want the loading of the require.js script to block rendering. For browsers that support it, you could also add an async attribute to the script tag.


Inside of main.js, you can use requirejs() to load any other scripts you need to run. This ensures a single entry point, since the data-main script you specify is loaded asynchronously.


requirejs(["helper/util"], function(util) {

    //This function is called when scripts/helper/util.js is loaded.

    //If util.js calls define(), then this function is not fired until

    //util's dependencies have loaded, and the util argument will hold

    //the module value for "helper/util".

});
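For completeness, helper/util.js itself would typically use define() to declare the module; a minimal sketch (the greet function is just a placeholder):

// scripts/helper/util.js
define(function () {
    // whatever is returned here becomes the module value passed to requirejs() callers
    return {
        greet: function (name) {
            return 'Hello, ' + name;
        }
    };
});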


References:

https://requirejs.org/docs/start.html


Accessing DOM In Jupyter Notebook

Why access the DOM in Jupyter Notebook?

Most of the time we use matplotlib, seaborn, bokeh, etc. to visually represent data. By using D3 you get ultimate freedom of modification and customization in your data visualization. Sometimes the unique visualization your analytics needs might not be available in the graphing tools that you usually use. D3 has a lot of examples which can be modified for your needs (custom visualization).



Jupyter Notebook has a function that will give you access to the DOM.


from IPython.core.display import HTML


So how does it really work?


When you call HTML() with a string in the notebook, it will interpret that string and put it into the DOM.



A simple example that modifies the DOM is given below:


HTML('''


<style scoped>

.steely{

  color: steelblue;
  font: 16px script;

}


</style>

<h1 class='steely'> Hello Dom </h1> 

''')




References:

https://medium.com/@stallonejacob/d3-in-juypter-notebook-685d6dca75c8

Sunday, December 20, 2020

Conditional Cypher Execution

Sometimes queries require conditional execution logic that can’t be adequately expressed in Cypher. The conditional execution procedures simulate an if / else structure, where a supplied boolean condition determines which cypher query is executed.


WHEN Procedures

For if / else conditional logic, when procedures allow an ifQuery and elseQuery to be specified. If the conditional is true, the ifQuery will be run, and if not the elseQuery will be run.


CALL apoc.when(

  condition: Boolean,

  ifQuery: String,

  elseQuery: String,

  params: Map)

YIELD value


CALL apoc.do.when(

  condition: Boolean,

  ifQuery: String,

  elseQuery: String,

  params: Map)

YIELD value


For example, if we wanted to match to neighbor nodes one and two traversals away from a start node, and return the smaller set (either those one hop away, or those that are two hops away), we might use:



MATCH (start:Node)-[:REL]->(a)-[:REL]->(b)

WITH collect(distinct a) as aNodes, collect(distinct b) as bNodes


CALL apoc.when(

  size(aNodes) <= size(bNodes),

  'RETURN aNodes as resultNodes',

  'RETURN bNodes as resultNodes',

  {aNodes:aNodes, bNodes:bNodes})

YIELD value


RETURN value.resultNodes as resultNodes



CASE Procedures

For more complex conditional logic, case procedures allow for a variable-length list of condition / query pairs, where the query following the first conditional evaluating to true is executed. An elseQuery block is executed if none of the conditionals are true.



CALL apoc.case(

  conditionals: List of alternating Boolean/String,

  elseQuery: String,

  params: Map)

YIELD value




If we wanted to MATCH to selection nodes in a column, we could use entirely different MATCHES depending on query parameters, or based on data already in the graph:



MATCH (me:User {id:$myId})

CALL apoc.case([

  $selection = 'friends', "RETURN [(me)-[:FRIENDS]-(friend) | friend] as selection",

  $selection = 'coworkers', "RETURN [(me)-[:WORKS_AT*2]-(coworker) | coworker] as selection",

  $selection = 'all', "RETURN apoc.coll.union([(me)-[:FRIENDS]-(friend) | friend], [(me)-[:WORKS_AT*2]-(coworker) | coworker]) as selection"],

  'RETURN [] as selection',

  {me:me}

)

YIELD value

RETURN value.selection as selection;




References:

https://neo4j.com/labs/apoc/4.1/cypher-execution/conditionals/




APOC create relationship

MATCH (p:Person {name: "Tom Hanks"})

MATCH (m:Movie {title:"You've Got Mail"})

CALL apoc.create.relationship(p, "ACTED_IN", {roles:['Joe Fox']}, m)

YIELD rel

RETURN rel;


The example above shows how to create an ACTED_IN relationship between existing Person and Movie nodes.




We might also want to create a relationship with the relationship type or properties passed in as parameters.



:param relType =>  ("ACTED_IN");

:param properties => ({roles: ["Joe Fox"]});



MATCH (p:Person {name: "Tom Hanks"})

MATCH (m:Movie {title:"You've Got Mail"})

CALL apoc.create.relationship(p, $relType, $properties, m)

YIELD rel

RETURN rel;



References:

https://neo4j.com/labs/apoc/4.1/overview/apoc.create/apoc.create.relationship/


Friday, December 18, 2020

Let's Encrypt renew past the expiration date gives an error

Cleaning up challenges

Attempting to renew cert (kingkaotix.com) from /etc/letsencrypt/renewal/kingkaotix.com.conf produced an unexpected

error: Failed authorization procedure. www.kingkaotix.com (http-01): urn:ietf:params:acme:error:unauthorized :: The

client lacks sufficient authorization :: Invalid response from https://www.kingkaotix.com/.well-known/acme-challen

ge/QydpGssCryC803kh7TeCn_PHwaOj1_nVZ1zk-vLM6w4 [2606:4700:3033::681f:4b41]: "\n\n<!--[if IE 7]> <html class="no-js ". Skipping.

All renewal attempts failed. The following certs could not be renewed:

/etc/letsencrypt/live/kingkaotix.com/fullchain.pem (failure)


It gave an error similar to the above. The solution was to run the command below and add the TXT record under the domain name as instructed:


certbot certonly --server https://acme-v02.api.letsencrypt.org/directory --manual --preferred-challenges dns -d 'relationmonitor.dk,*.relationmonitor.dk'


references:


https://community.letsencrypt.org/t/an-authentication-script-must-be-provided-with-manual-auth-hook/74301/3

Thursday, December 17, 2020

Dissecting glTF loading example

The main functions are:


createCamera

createControls

createLights

loadModels

createRenderer


The camera created is a PerspectiveCamera:


camera = new THREE.PerspectiveCamera(35, container.clientWidth / container.clientHeight, 1, 100);

camera.position.set(-1.5, 1.5, 6.5);



The control created is a simple OrbitControls:

controls = new THREE.OrbitControls(camera, container);


Model loading is done using the steps below. These are glTF models.

const loader = new THREE.GLTFLoader();


const parrotPosition = new THREE.Vector3(0, 0, 2.5);

loader.load('models/Parrot.glb', gltf => onLoad(gltf, parrotPosition), onProgress, onError);


const flamingoPosition = new THREE.Vector3(7.5, 0, -10);

loader.load('models/Flamingo.glb', gltf => onLoad(gltf, flamingoPosition), onProgress, onError);


const storkPosition = new THREE.Vector3(0, -2.5, -10);

loader.load('models/Stork.glb', gltf => onLoad(gltf, storkPosition), onProgress, onError);


Once the models are initiated for loading, there are three callbacks: onLoad, onProgress, and onError.


The gltf object comes in through the onLoad method.


Below are the main things done inside the onLoad method. 


1. The model object is inside the gltf scene object, and is extracted first:


const model = gltf.scene.children[0];

model.position.copy(position) 


Once the model is loaded, its position is set by the above statement. The position is passed in from the application layer. 

Once the model's position is set, we need to set up the animations. This is done by the statements below:


const mixer = new THREE.AnimationMixer(model);

mixers.push(mixer);


mixers is an array which contains all the mixer instances. In the update loop, each of these mixers must be given the update cycle, like this:


function update() {


  const delta = clock.getDelta();

  for (const mixer of mixers) {

    mixer.update(delta);

  }

}



Now the animation is attached to the mixer:


const action = mixer.clipAction(animation);

action.play();



Now the model can be added to the scene:


scene.add(model);
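Putting the snippets above together, the onLoad callback looks roughly like this (a sketch using the same variable names as above):

function onLoad(gltf, position) {
    const model = gltf.scene.children[0];
    model.position.copy(position);

    const animation = gltf.animations[0];

    const mixer = new THREE.AnimationMixer(model);
    mixers.push(mixer);

    const action = mixer.clipAction(animation);
    action.play();

    scene.add(model);
}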


References:

 

Three JS Animation Mixer

The AnimationMixer is a player for animations on a particular object in the scene. When multiple objects in the scene are animated independently, one AnimationMixer may be used for each object.


Below is how an animation mixer can be used. Here, each individual object has its own animation. Once an object is loaded, it is added to an AnimationMixer, and the clip action plays it.


 const model = gltf.scene.children[0];

    model.position.copy(position);


    const animation = gltf.animations[0];


    const mixer = new THREE.AnimationMixer(model);

    mixers.push(mixer);


    const action = mixer.clipAction(animation);

    action.play();


    scene.add(model);


References:

https://threejs.org/docs/#api/en/animation/AnimationMixer

Wednesday, December 16, 2020

Neo4J create nodes procedure not found

Below is the procedure that was being executed.


CALL apoc.load.xls('neo_test_book1.xls','Sheet1') YIELD map as m 

CALL apoc.create.node(['Person'], {ID:m.ID,name:m.Name,email:m.email})  YIELD node as t

return m


The error that was thrown was:


There is no procedure with the name `apoc.create.node` registered for this database instance. Please ensure you've spelled the procedure name correctly and that the procedure is properly deployed.


Basically, any procedure that needs registration has to be specified in the Neo4j configuration file using the configuration below:


dbms.security.procedures.allowlist=apoc.coll.*,apoc.load.*,apoc.create.*


With the configuration above in place, the procedure started working and all was good.


References: 

https://staging.thepavilion.io/t/neo4j-beginner-issues-with-call-apoc-create-relationship/15164/2


Loading 3D models in glTF formats

The Best Way to Send 3D Assets Over the Web: glTF: 

There have been many attempts at creating a standard 3D asset exchange format over the last thirty years or so. FBX, OBJ (Wavefront) and DAE (Collada) formats were the most popular of these until recently, although they all have problems that prevented their widespread adoption. 


The original glTF Version 1 never found widespread use and is no longer supported by three.js. glTF files can contain models, animations, geometries, materials, lights, cameras, or even entire scenes. This means you can create an entire scene in an external program and then load it into three.js.


However, recently, a newcomer called glTF has become the de facto standard format for exchanging 3D assets on the web. glTF (GL Transmission Format), sometimes referred to as the JPEG of 3D, was created by the Khronos Group, the same people who are in charge of WebGL, OpenGL, and a whole host of other graphics APIs.


Originally released in 2017, glTF is now the best format for exchanging 3D assets on the web, and in many other fields. In this book, we’ll always use glTF, and if possible, you should do the same. It’s designed for sharing models on the web, so the file size is as small as possible and your models will load quickly.

glTF files come in standard and binary form. These have different extensions:

Standard .gltf files are uncompressed and may come with an extra .bin data file.

Binary .glb files include all data in one single file.

Both standard and binary glTF files may contain textures embedded in the file or may reference external textures. Since binary .glb files are considerably smaller, it’s best to use this type. On the other hand, uncompressed .gltf are easily readable in a text editor, so they may be useful for debugging purposes.

The GLTFLoader Plugin#

To load glTF files, first, you need to add the GLTFLoader plugin to your app. This works the same way as adding the OrbitControls plugin. You can find the loader in examples/jsm/loaders/GLTFLoader.js on the repo, and we have also included this file in the editor


import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'

const loader = new GLTFLoader();

const loadedData = await loader.loadAsync('path/to/yourModel.glb');



Another issue is that we cannot mark a constructor as async. A common solution to this is to create a separate .init method.



class Foobazzer {

  constructor() {

    // constructor cannot be async: ERROR!

    await loader.loadAsync('yourModel.glb');

  }


  async init() {

    // inside an async function: OK!

    await loader.loadAsync('yourModel.glb')

  }

}
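With that pattern, the caller constructs the object and then awaits init from inside an async function (a usage sketch, assuming the illegal await in the constructor above is removed):

const foo = new Foobazzer();
await foo.init();   // the model is only available after this resolves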




References:

https://discoverthreejs.com/book/first-steps/load-models/


Monday, December 14, 2020

What are privacy concerns of AR and VR

Even though today these technologies don’t pose so many security and privacy risks, in the next few years, as their popularity increases, they could be prone to real threats.


#1. Eye Tracking


Some are saying that eye tracking in VR will be a game changer. Why? Because it improves accuracy and the user experience and facilitates dynamic focus or understanding the players’ emotions so developers can craft better VR experiences. Others are also thinking that eye tracking in VR will even increase the security in these systems, in the sense that eye scanning can be used as a biometric identifier, enabling users to safely log in using this method.


But does gaze tracking have anything else to do with VR besides the monitoring done for the purpose of playing games and other similar activities? Well, it could have. For instance, advertisements could be easily included in VR games, just like they are displayed in regular mobile games. Your interest in a certain brand would be expressed much better than through the traditional clicking on an online banner – advertisers could be able to see more accurately how you perceive an ad.


According to Gerald Zaltman, 95% of purchase decisions happen in the subconscious mind. And one of the best ways for marketers to take a peek into the unconscious of consumers is by using eye tracking technologies. This way, market researchers can literally see through their customers’ eyes.


#2. Blackmailing / Sextortion

Malicious actors may take advantage of the industry’s popularity and resort to sextortion. This means they would try to trick you into believing they have proof you’ve visited adult websites and urge you to pay them so they don’t leak the content. Sometimes, they would even attach a password you’ve been using, which was stolen from a data breach, to make the email scam look more legitimate. But don’t worry, even though it can be an unpleasant experience to receive such an email, keep in mind this is fake.


#3. Altering reality


AR glasses can display a visual indicator of the temperature of your cooking pan. They can sense if you’re low on certain kitchen supplies and order more for you. They are able to simulate an experience where you are in a movie theater when instead you are watching a film on your 42-inch TV. Owners are able to alter their looks, removing facial imperfections or show themselves as being thinner to other AR glasses users.


But as a technology becomes more popular, problems also arise. In Gewirtz’s world, adult movie vendors side-load apps onto the smart glasses platform, which allow users to make people look like someone else (for instance, a celebrity) or appear naked. Gender or race change hacks also become available. The appearance of violence and other gruesome details can also be shown. Malicious actors trick drivers into thinking a road went straight when, actually, there is a tight curve coming up, causing serious accidents.


Fortunately, these are just some imaginary scenarios for now. But how far are we from this kind of reality?


#4. Fake identities / Deepfakes


Advances in facial recognition and machine learning now allow the manipulation of voices and appearances of people, resulting in what may look like genuine footage. In short, this is what the deepfake technology is based on.


Of course, you can easily spot these as fake. But soon enough, through the power of tracking sensors found in VR systems, deepfakes could get much more convincing. Motion tracking sensors could record the movement of someone and use it against them to create digital replicas.


References:

https://heimdalsecurity.com/blog/vr-ar-security-privacy-risks/


WebXR Lighting a WebXR setting

 Because the WebXR Device API relies on other technologies—namely, WebGL and frameworks based upon it—to perform all rendering, texturing, and lighting of a scene, the same general lighting concepts apply to WebXR settings or scenes as to any other WebGL-generated display.


Lighting estimation



Lighting estimation is a technique used by augmented reality platforms to attempt to match the lighting of the virtual objects in the scene to the lighting of the real world surrounding the viewer. This involves the collection of data that may come from various sensors (including the accelerometer and compass, if available), cameras, and potentially others. Other data is collected using the Geolocation API, and then all this data is put through algorithms and machine learning engines to generate the estimated lighting information.



At present, WebXR doesn't offer support for lighting estimation. However, a specification is currently being drafted under the auspices of the W3C. You can learn all about the proposed API and a fair amount about the concept of lighting estimation in the explainer document that's included in the specification's GitHub repository.


In essence, lighting estimation collects this information about the light sources and the shape and orientation of the objects in the scene, along with information about the materials they're made of, then returns data you can use to create virtual light source objects that approximately match the real world's lighting.




References:

https://developer.mozilla.org/en-US/docs/Web/API/WebXR_Device_API/Lighting


Security Concerns on WebXR

There are a number of potential security issues involved with collecting all of this data in order to generate and apply lighting to your virtual objects using real-world data.

Of course, many AR applications make it pretty clear where the user is located. If the user is running an app called Touring the Louvre, there's a very good chance the user is located in the Musée du Louvre in Paris, France. But browsers are required to take a number of steps to make it difficult to physically locate the user without their consent.

Ambient Light Sensor API

The collection of light data using the Ambient Light Sensor API introduces various potential privacy issues.

Lighting information can leak to the web information about the user's surroundings and device usage patterns. Such information can be used to enhance user profiling and behavior analysis data.

If two or more devices access content that uses the same third-party script, that script can be used to correlate lighting information and how it changes over time to attempt to determine a spatial relationship between the devices; this could in theory indicate that the devices are in the same general area, for example.

How browsers mitigate these issues

In order to help mitigate these risks, browsers are required by the WebXR Lighting Estimation API specification to report lighting information that is fudged somewhat from the true value. There are many ways this could be done.

Spherical harmonics precision

Browsers can mitigate the risk of fingerprinting by reducing the precision of spherical harmonics. When performing real-time rendering—as is the case with any virtual or augmented reality application—spherical harmonic lighting is used to simplify and accelerate the process of generating highly realistic shadows and shading. By altering the accuracy of these functions, the browser makes the data less consistent, and, importantly, makes the data generated by two computers differ, even in the same setting.

Decoupling orientation from lighting

In an AR application that uses geolocation to determine orientation and potentially position information, avoiding having that information directly correlate to the state of the lighting is another way browsers can protect users from fingerprinting attacks. By simply ensuring that the compass direction and the light directionality aren't identical on every device that's near (or claims to be near) the user's location, the ability to find users based on the state of the lighting around them is removed.

When the browser provides details about a very bright, directional light source, that source probably represents the sun. The directionality of this bright light source combined with the time of day can be used to determine the user's geographic location without involving the Geolocation API. By ensuring that the coordinates of the AR scene don't align with compass coordinates, and by reducing the precision of the sun's light angle, the location can no longer be accurately estimated using this technique.

Temporal and spatial filtering

Consider an attack that uses a building's automated lighting system to flash the lights on and off quickly in a known pattern. Without proper precautions, the lighting estimation data could be used to detect this pattern and thus determine that a user is in a particular location. This could be done remotely, or it could be performed by an attacker who's located in the same room but wants to determine if the other person is also in the same room.

Another scenario in which lighting estimation can be used to obtain information about the user without permission: if the light sensor is close enough to the user's display to detect lighting changes caused by the contents of the display, an algorithm could be used to determine whether or not the user is watching a particular video—or even to potentially identify which of a number of videos the user is watching.

The Lighting Estimation API specification mandates that all user agents perform temporal and spatial filtering to fuzz the data in a manner that reduces its usefulness for the purpose of locating the user or performing side-channel attacks.

References: 

https://developer.mozilla.org/en-US/docs/Web/API/WebXR_Device_API/Lighting

gltf 2.0 with Blender

glTF™ (GL Transmission Format) is used for transmission and loading of 3D models in web and native applications. glTF reduces the size of 3D models and the runtime processing needed to unpack and render those models. This format is commonly used on the web, and has support in various 3D engines such as Unity3D, Unreal Engine 4, and Godot.


This importer/exporter supports the following glTF 2.0 features:


Meshes

Materials (Principled BSDF) and Shadeless (Unlit)

Textures

Cameras

Punctual lights (point, spot, and directional)

Animation (keyframe, shape key, and skinning)


Meshes


glTF’s internal structure mimics the memory buffers commonly used by graphics chips when rendering in real-time, such that assets can be delivered to desktop, web, or mobile clients and be promptly displayed with minimal processing. As a result, quads and n-gons are automatically converted to triangles when exporting to glTF. Discontinuous UVs and flat-shaded edges may result in moderately higher vertex counts in glTF compared to Blender, as such vertices are separated for export. Likewise, curves and other non-mesh data are not preserved, and must be converted to meshes prior to export.



Materials


The core material system in glTF supports a metal/rough PBR workflow with the following channels of information:


Base Color

Metallic

Roughness

Baked Ambient Occlusion

Normal Map

Emissive



Imported Materials

The glTF material system is different from Blender’s own materials. When a glTF file is imported, the add-on will construct a set of Blender nodes to replicate each glTF material as closely as possible.


The importer supports Metal/Rough PBR (core glTF), Spec/Gloss PBR (KHR_materials_pbrSpecularGlossiness) and Shadeless (KHR_materials_unlit) materials.


Exported Materials


The exporter supports Metal/Rough PBR (core glTF) and Shadeless (KHR_materials_unlit) materials. It will construct a glTF material based on the nodes it recognizes in the Blender material. The material export process handles the settings described below.


When image textures are used by materials, glTF requires that images be in PNG or JPEG format. The add-on will automatically convert images from other formats, increasing export time.


File Format Variations

The glTF specification identifies different ways the data can be stored. The importer handles all of these ways. The exporter will ask the user to select one of the following forms:



glTF Binary (.glb)

This produces a single .glb file with all mesh data, image textures, and related information packed into a single binary file.


glTF Separate (.gltf + .bin + textures)

This produces a JSON text-based .gltf file describing the overall structure, along with a .bin file containing mesh and vector data, and optionally a number of .png or .jpg files containing image textures referenced by the .gltf file.




References:

https://docs.blender.org/manual/en/2.80/addons/io_scene_gltf2.html#file-format-variations


Sunday, December 13, 2020

The Augmented Reality rise and Future of AR

If it seems to you that technology firms are racing to create augmented reality (AR)/mixed reality (MR) headsets, you’re not wrong. Recently The Information reported that Apple eyes 2022 for the release of an augmented reality headset and 2023 for glasses. According to information leaked from an internal meeting, Apple Vice President Mike Rockwell shared details about the design and features of the AR Apple headset and AR Apple glasses. Meanwhile, The Verge reported that Samsung has applied for an AR headset patent. And those are but two examples.


Most of the biggest names in Big Tech are racing to create smart glasses that we wear everywhere and that may replace our phones. Microsoft, Amazon, Google, Snap, Facebook, Apple, Magic Leap and others are all working on some form of smart glasses or headset that will change how we view the world around us. Instead of pulling a phone out of our pockets to talk to people or interact with apps, we may do these things simply by speaking to, and looking through, a set of glasses.


References:

https://medium.com/datadriveninvestor/the-augmented-reality-headset-race-and-what-it-all-means-661a534bfeb5


Augmented Reality Robotics

Mars rovers and your smartphone have the same problem: figuring out where they are without GPS.

Visual Odometry has been around for decades but is really taking off with mobile augmented reality. 

Take a look at all the references from Larry Matthies on using Visual Inertial Odometry on the Mars Exploration Rovers. Well, it should come as no surprise that his lab at NASA JPL was an early participant in Google’s Project Tango effort to give smartphones that same capability. 


Steve Goldberg, also with JPL and now full time at Google, is one of the few people who can claim to have optimized the VO pipeline for a rover on Mars, and the one powering AR on your smartphone!


The higher level technology that JPL’s rovers and Tango (now known as ARCore) use is called SLAM, or Simultaneous Localization and Mapping. More specifically, they use VSLAM and VIO, where cameras and motion sensors come together to create Visual Inertial Odometry. Just like your eyes and ears work together, robots and augmented reality devices uses cameras and motion sensors.


These virtual characters that are supposed to be the future of our digital lives still have zero understanding of the world. They rely on imperfect mobile sensors that struggle with “optically uncooperative surfaces”. AR devices can’t yet recognize, track, and predict the movement of people or animals.



The big winners are going to be those embracing ARKit and ARCore, which make heavy use of cameras. Those two platforms alone are expected to power more than a billion devices in 2018!


As the “AR Cloud” grows to include more maps and semantics about the world, you can expect to be provided product-level navigation with centimeter-level accuracy.


Google is in the unique position of offering both LiDAR mapping, as used by Cartographer, and VSLAM, underlying Project Tango / ARCore / VPS.



References:

https://medium.com/@ryanhickman/augmented-reality-robotics-bb6db40ab754

Building Social VR apps in AltspaceVR with A-Frame

A lot of ideas for virtual reality applications include a social aspect where a group of people are brought together in a virtual space to collaborate in some way.



One of the many social VR platforms is AltspaceVR. It has plenty of competition, but I like Altspace because it works on all major platforms and its third-party development infrastructure is based on JavaScript and browser rendering.


If you can render 3d in the browser using WebGL and ThreeJS, it’s easy to get this content into Altspace. Most rooms in Altspace have a dedicated “hologram” area where you can bring up any compatible web page and get it rendered as a “3d hologram” in the client.


The Altspace team has created an A-Frame component that makes it easy to render A-Frame scenes inside an Altspace room



Add the “altspace” attribute to the “<a-scene>” element in the HTML code, so it becomes “<a-scene altspace>”. This is enough to make the example render in Altspace, so click the “Change view” button in Codepen and copy the “Debug Mode” link.


Head to the Altspace account site and sign in. Click the “My Events” link, select “Start a Quick Event”, select a room (I prefer “SDK Testing Medium App”), click “Create Event” and then “Visit Now”. This should make you join your own newly created room in the Altspace client.



<a-scene altspace="usePixelScale: false; verticalAlign: bottom;" vr-mode-ui="enabled: false;">

  <a-sphere position="0 1.25 -1" radius="1.25" color="#EF2D5E"></a-sphere>

  <a-cube position="-1 0.5 1" rotation="0 45 0" width="1" height="1" depth="1" color="#4CC3D9"></a-cube>

  <a-cylinder position="1 0.75 1" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>

  <a-plane rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>

  <a-sky color="#ECECEC"></a-sky>

</a-scene>


It is recommended to go with Webpack, React and Redux as your stack; the author is working on an Altspace application using this stack.

The author's sample can be seen here: https://github.com/RSpace/agile-space



References:

https://medium.com/immersion-for-the-win/building-social-vr-apps-in-altspacevr-with-a-frame-81cb1bbc3ec4

Why Minecraft is a Big Deal for Virtual Reality

In this article, Minecraft is presented as a cool thing, good for education and children as well, as it enhances creativity. Although it does not mention that the problem with kids is addiction to the game, ending up playing all the time.


On the tech side, it is interesting that we can do this:


Build it in Minecraft; you know how to do that. Then export it with Mineways and import it into, say, A-Frame, and you have created your own easily shared WebVR content using Minecraft. Or import some of the hundreds of thousands of shared Minecraft maps to experience everything from roller coasters and mazes to a full simulation of a working CPU.



We need a metaverse for virtual reality. One that connects people and content, and allows them to socialize and build stuff together. Sure, Altspace, High Fidelity, Beloola and all the others are great initiatives, but it seems like Minecraft already offers so much of what we desire from a metaverse.


This is the download link for Mineways:


http://www.realtimerendering.com/erich/minecraft/public/mineways/



References:

https://medium.com/immersion-for-the-win/why-minecraft-is-a-big-deal-for-virtual-reality-4250b11c979b


JavaScript in 3D: an Introduction to Three.js - refreshing a bit

Two of the most important classes in Three.js are Vector3 and Box3.

Vector3

The most basic 3D class, containing three numbers x, y and z. This can be used to represent a point in 3D space or a direction and length. For example:


const vect = new THREE.Vector3(1, 1, 1);


Box3

This class represents a cuboid (3D box). Its main purpose is to get the bounding box of other objects — that is, the smallest possible cuboid that a 3D object could fit in. Every Box3 is aligned to the world x, y and z axes.



const vect = new THREE.Vector3(1, 1, 1);

const box = new THREE.Box3(vect);
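Since its main purpose is computing bounding boxes, Box3 is more often used with setFromObject; a quick sketch (mesh here stands for any object already in your scene):

const boundingBox = new THREE.Box3().setFromObject(mesh);
console.log(boundingBox.min, boundingBox.max);   // the two opposite corners of the box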


Meshes

In Three.js, the basic visual element in a scene is a Mesh. This is a 3D object made up of triangular polygons. It’s built using two objects:

a Geometry — which defines its shape,

a Material — which defines its appearance.
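Putting the two together, a mesh is created from a geometry and a material and then added to the scene (a minimal sketch, assuming a scene already exists):

const geometry = new THREE.BoxGeometry(20, 20, 20);
const material = new THREE.MeshNormalMaterial();
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);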



Geometry

Depending on your use-case, you’ll either want to define a geometry within Three.js or import one from a file.


const geometry = new THREE.BoxGeometry( 20, 20, 20 );


const geometry = new THREE.SphereGeometry( 20, 64, 64 );

const geometry = new THREE.ConeBufferGeometry( 5, 20, 32 );

const geometry = new THREE.TorusKnotGeometry(10, 1.3, 500, 6, 6, 20);



Three.js comes with 10 mesh materials, each with its own advantages and customisable properties. We’ll look into a handful of the most useful ones.


MeshNormalMaterial

Useful for: getting up and running quickly


const material = new THREE.MeshNormalMaterial();


MeshBasicMaterial

const material = new THREE.MeshBasicMaterial({ 

  wireframe: true, 

  color: 0xdaa520

});


MeshLambertMaterial

Useful for: high performance (but lower accuracy)


This is the first material which is affected by lights, so to see what we’re doing we’ll need to add some light to our scene. In the code below, we’ll add two spotlights, with a hint of yellow to create a warmer effect:


const scene = new THREE.Scene();

const frontSpot = new THREE.SpotLight(0xeeeece);

frontSpot.position.set(1000, 1000, 1000);

scene.add(frontSpot);

const frontSpot2 = new THREE.SpotLight(0xddddce);

frontSpot2.position.set(-500, -500, -500);

scene.add(frontSpot2);


MeshPhongMaterial

Useful for: medium performance and accuracy



This material offers a compromise between performance and appearance, and therefore it’s a good middle-ground for applications that need to be performant while also achieving a higher level of quality than the MeshLambertMaterial.


MeshStandardMaterial

Useful for: high accuracy (but lower performance)


Loaders

it’s possible to manually define the geometry of your meshes. However, in practice, many people will often prefer to import their geometries from 3D files. Luckily, Three.js has plenty of supported loaders, covering most of the major 3D file formats.



The basic ObjectLoader loads a JSON resource, using the JSON Object/Scene format. But most other loaders need to be imported manually:


// GLTF

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// OBJ

import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

// STL

import { STLLoader } from 'three/examples/jsm/loaders/STLLoader.js';

// FBX

import { FBXLoader } from 'three/examples/jsm/loaders/FBXLoader.js';

// 3MF

import { ThreeMFLoader } from 'three/examples/jsm/loaders/3MFLoader.js'; // exported as ThreeMFLoader, since identifiers cannot start with a digit



The recommended file format for online viewing is glTF, on the grounds that it’s ‘focused on runtime asset delivery, compact to transmit and fast to load’.


import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

import model from '../models/sample.gltf';

let loader = new GLTFLoader();

loader.load(model, function (gltf) {

  // if the model is loaded successfully, add gltf.scene to your scene here

}, undefined, function (err) {

  console.error(err);

});





references

https://medium.com/javascript-in-plain-english/javascript-in-3d-an-introduction-to-three-js-780f1e4a2e6d


How to Turn Physical Products into Realistic 3D Models for AR

Available Methods

There are a couple different options to consider when it comes to creating a 3D model from a real world object.



Option One: Photogrammetry

Using a series of photos taken of a real world object, photogrammetry software can create an accurate high-density mesh of most objects. The “mesh” is a group of triangles that define the shape of the object. Along with the mesh, it will also create texture images that define the colour of the object.



These photos also define how light interacts with that object, letting the program know how rough or smooth the object is. However, while photogrammetry can be extremely effective for some objects, it can be highly ineffective for others


Good and bad candidates are given below:


Good Candidates

rough surface

opaque

lots of visual patterns on surface (i.e. colour changes, texture, depth)


Bad Candidates

smooth surface

transparent

reflective/ shiny

featureless ( i.e. one solid colour; no visual patterns to detect)



Option Two: 3D Scanning

3D scanning is similar to photogrammetry, but uses more specialized hardware. This technology shares a lot of the pitfalls that photogrammetry does. Though it can be highly effective when it comes to accuracy, it provides a non-optimized model and texture set. This means the file size will be larger than necessary and will potentially require manual work to make it ready for use. Furthermore, it can be expensive to buy a good 3D scanner or have your object 3D scanned elsewhere.



Option Three: 3D Modeling Programs

In this option an artist starts with a blank digital space and creates the model from scratch. This can be a time consuming process and needs the skills of an experienced modeller to get right. However, the results can be visually accurate and fully optimized for purposes such as ours.

The majority of our products were modeled using a 3D modeling program called Maya. Next they were brought into Substance Painter, or Mudbox for texture painting. For the few products we saw as good candidates for photogrammetry, we used a program called RealityCapture.


Process:

Step One: Taking Reference Photos and Measurements

The first step for each product is taking good reference photos. We were lucky enough to have Magnolia send us each product, which was a huge help during the entire process.


Long focal length: it’s important to use a lens that has a long focal length. Otherwise the photo will be skewed by a perspective that makes things closer to the camera appear to be much larger. This kind of photo is not ideal to model against.



Varying views: Usually front, back, left side, right side and bottom are sufficient to create an accurate model.


These photos are then imported in the Maya scene for reference when we build the model. The goal here is accuracy. If the model does not reflect the real world proportions of the product, the AR representation becomes misleading.

Then we take careful measurements of height, length, width of each part. Sometimes it requires drawing up of an extensive diagram depending on how complex the object is.


Step Two: Modeling


After we have our measurements and scene file set up, we start with a primitive shape (i.e. a sphere, cylinder or cube) and add detail until we have an accurate representation of the product. It’s also important to remember that the mesh complexity has to stay relatively low, in order for the app to load it fast.


After the modeling portion is finished, we export two files. One with a low-density mesh, and one with a high-density mesh, and import them into Substance Painter, where we add texture.


Step Three: Painting the Textures


The high-density mesh will be used to generate smooth texture maps. Substance Painter will use these initial texture maps for the generation of various effects, like edge wear, scratches and rust.


Then we adjust our Substance Painter viewport to match the environmental lighting the models will be viewed in. We did this by using a 360° photo of the office.


Substance works a lot like Photoshop: you add detail, textures, and colour adjustments in layers. What makes a model look real is capturing the imperfections: the layers of scratches, fingerprints, and chipped paint of the real-world object. Rarely is anything in real life one colour, completely clean, or perfectly reflective.



Prepping Models for AR

Before the models can be used, they must be exported in a format supported by your 3D engine of choice. In our case, we used the Collada (DAE) format, as we were building a native Swift app in Xcode, and the textures were exported separately as JPEGs.
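For comparison, if the models were being consumed by a web-based engine rather than a native Swift app, a DAE export could be loaded with three.js and the ColladaLoader from its examples. This is only a rough sketch; the file path below is a placeholder, not one of our actual assets.

import * as THREE from 'three';
import { ColladaLoader } from 'three/examples/jsm/loaders/ColladaLoader.js';

const scene = new THREE.Scene();
const loader = new ColladaLoader();

// '/models/product.dae' is a placeholder path to an exported Collada file.
loader.load('/models/product.dae', (collada) => {
  // collada.scene is a Group containing the imported meshes and materials.
  scene.add(collada.scene);
}, undefined, (err) => {
  console.error('Failed to load DAE model:', err);
});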



An additional texture is needed for the product’s contact shadow. Without shadows, products look like they’re floating above the surface. These are generated beforehand in Maya, and saved as a texture to be displayed underneath the product mesh.





References:

https://medium.com/shopify-vr/how-we-turn-physical-products-into-realistic-3d-models-for-ar-13f9dc20d964

Saturday, December 12, 2020

How does web AR compare to native app?

The biggest struggles were centered around browser compatibility, which is still an issue to date with web-based AR experiences.


Not every mobile browser supports the Sensors API, and some devices are missing certain sensors, which was a huge issue we saw with Android devices in particular. When releasing an app through a store, it's possible to control which devices the app can be installed on, but on the web you don't have that control. Yes, it's possible to add checks within the web page, but then you serve a screen that says “Sorry, your device is not supported”, and that feels like a punch in the gut.
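As a rough sketch of the kind of in-page check mentioned above (the function name and fallback message are just illustrative, not code from the original article):

// Probe for WebXR and basic motion-sensor support before offering AR.
function canOfferAR() {
  const hasXR = 'xr' in navigator;                           // WebXR Device API
  const hasOrientation = 'DeviceOrientationEvent' in window; // basic orientation sensor events
  return hasXR || hasOrientation;
}

if (canOfferAR()) {
  // startARExperience();  // hypothetical entry point for the AR page
} else {
  document.body.textContent = 'Sorry, your device is not supported.';
}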



Native apps also have the ability to tap into the ARCamera and use it to do the heavy lifting at the OS level instead of competing with it. What I mean is that there are many ways to apply computer vision and SLAM tracking, but with ARKit and ARCore, the native integrations are bundled together and optimized with the OS.


How can WebAR become more competitive?



Currently, web browsers don't have enough access to the AR camera. The AR camera differs from the traditional camera in that it handles the augmentation at the OS level rather than on top of it. Current implementations of web-based AR require the calculations to be done on top of the OS, which adds computational overhead, limits rendering, and sometimes even causes visible lag.


A huge step towards making AR even more accessible through the web would be for the Web Standards to adopt an API for direct access to the ARCamera object.



If that abstraction could exist as a standard web API, any browser app could leverage ARKit/ARCore or whatever underlying platform exists. Once a web API exists, many different frameworks will emerge. There are a few experimental browsers that leverage ARKit/ARCore, but they require a specific JS framework.


USDZ is a good start, but it's missing a vital component: a layer that adds support for interaction. Google's efforts are still only available in the Canary version of Chrome, so until they are included in the production build, they will lag behind Apple's.
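For context, this is roughly how a USDZ model is surfaced on the web today through Apple's AR Quick Look: an anchor tag with rel="ar" wrapping a preview image (the file names here are placeholders). Once the viewer opens, there is no hook for custom interaction, which is the missing layer mentioned above.

<!-- chair.usdz and chair-preview.jpg are placeholder file names. -->
<a rel="ar" href="models/chair.usdz">
  <img src="images/chair-preview.jpg" alt="View the chair in AR">
</a>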



References:

https://medium.com/agora-io/web-vs-app-ar-edition-d9aafe988ba2

WebXR device API

Like the WebVR spec before it, the WebXR Device API is a product of the Immersive Web Community Group, which has contributors from Google, Microsoft, Mozilla, and others. The 'X' in XR is intended as a sort of algebraic variable that stands for anything in the spectrum of immersive experiences. It's available in the previously mentioned origin trial as well as through a polyfill.


The WebXR Device API does not provide image rendering features. That's up to you. Drawing is done using WebGL APIs. 

Three.js has supported WebXR since May. I've heard nothing about A-Frame.


Starting and running an app

The basic process is this:


Request an XR device.

If it's available, request an XR session. If you want the user to put their phone in a headset, it's called an immersive session and requires a user gesture to enter.

Use the session to run a render loop which provides 60 image frames per second. Draw appropriate content to the screen in each frame.

Run the render loop until the user decides to exit.

End the XR session.

Let's look at this in a little more detail and include some code. You won't be able to run an app from what I'm about to show you. But again, this is just to give a sense of it.



Request an XR device

If you're not using an immersive session you can skip advertising the functionality and getting a user gesture and go straight to requesting a session. An immersive session is one that requires a headset. A non-immersive session simply shows content on the device screen. The former is what most people think of when you refer to virtual reality or augmented reality. The latter is sometimes called a 'magic window'.





if (navigator.xr) {
  navigator.xr.requestDevice()
  .then(xrDevice => {
    // Advertise the AR/VR functionality to get a user gesture.
  })
  .catch(err => {
    if (err.name === 'NotFoundError') {
      // No XRDevices available.
      console.error('No XR devices available:', err);
    } else {
      // An error occurred while requesting an XRDevice.
      console.error('Requesting XR device failed:', err);
    }
  });
} else {
  console.log("This browser does not support the WebXR API.");
}


Request an XR session

Now that we have our device and our user gesture, it's time to get a session. To create a session, the browser needs a canvas on which to draw.





xrPresentationContext = htmlCanvasElement.getContext('xrpresent');

let sessionOptions = {
  // The immersive option is optional for non-immersive sessions; the value
  // defaults to false.
  immersive: false,
  outputContext: xrPresentationContext
};

xrDevice.requestSession(sessionOptions)
.then(xrSession => {
  // Use a WebGL context (gl) as a base layer.
  xrSession.baseLayer = new XRWebGLLayer(xrSession, gl);
  // Start the render loop.
});


Run the render loop

The code for this step takes a bit of untangling. To untangle it, I'm about to throw a bunch of words at you. If you want a peek at the final code, jump ahead to have a quick look then come back for the full explanation. There's quite a bit that you may not be able to infer.


The basic process for a render loop is this:


Request an animation frame.

Query for the position of the device.

Draw content based on the device's position.

Do work needed for the input devices.

Repeat 60 times a second until the user decides to quit.





Request a presentation frame

The word 'frame' has several meanings in a Web XR context. The first is the frame of reference which defines where the origin of the coordinate system is calculated from, and what happens to that origin when the device moves. (Does the view stay the same when the user moves or does it shift as it would in real life?)


The second type of frame is the presentation frame, represented by an XRFrame object. This object contains the information needed to render a single frame of an AR/VR scene to the device. This is a bit confusing because a presentation frame is retrieved by calling requestAnimationFrame(). This makes it compatible with window.requestAnimationFrame().


xrSession.requestFrameOfReference('eye-level')
.then(xrFrameOfRef => {
  xrSession.requestAnimationFrame(function onFrame(time, xrFrame) {
    // The time argument is for future use and not implemented at this time.
    // Process the frame.
    xrFrame.session.requestAnimationFrame(onFrame);
  });
});



Poses

Before drawing anything to the screen, you need to know where the display device is pointing and you need access to the screen. In general, the position and orientation of a thing in AR/VR is called a pose. Both viewers and input devices have a pose. (I cover input devices later.) Both viewer and input device poses are defined as a 4 by 4 matrix stored in a Float32Array in column major order. You get the viewer's pose by calling XRFrame.getDevicePose() on the current animation frame object. Always test to see if you got a pose back. If something went wrong you don't want to draw to the screen.


let pose = xrFrame.getDevicePose(xrFrameOfRef);
if (pose) {
  // Draw something to the screen.
}


Views


After checking the pose, it's time to draw something. The object you draw to is called a view (XRView). This is where the session type becomes important. Views are retrieved from the XRFrame object as an array. If you're in a non-immersive session the array has one view. If you're in an immersive session, the array has two, one for each eye.



for (let view of xrFrame.views) {
  // Draw something to the screen.
}



Below is how it looks altogether:


xrSession.requestFrameOfReference('eye-level')
.then(xrFrameOfRef => {
  xrSession.requestAnimationFrame(function onFrame(time, xrFrame) {
    // The time argument is for future use and not implemented at this time.
    let pose = xrFrame.getDevicePose(xrFrameOfRef);
    if (pose) {
      for (let view of xrFrame.views) {
        // Draw something to the screen.
      }
    }
    // Input device code will go here.
    xrFrame.session.requestAnimationFrame(onFrame);
  });
});


Ending the XR session


xrDevice.requestSession(sessionOptions)
.then(xrSession => {
  // Create a WebGL layer and initialize the render loop.
  xrSession.addEventListener('end', onSessionEnd);
});

// Restore the page to normal after immersive access has been released.
function onSessionEnd() {
  xrSession = null;

  // Ending the session stops executing callbacks passed to the XRSession's
  // requestAnimationFrame(). To continue rendering, use the window's
  // requestAnimationFrame() function.
  window.requestAnimationFrame(onDrawFrame);
}


How does interaction work?



The WebXR Device API adopts a "point and click" approach to user input. With this approach every input source has a defined pointer ray to indicate where an input device is pointing and events to indicate when something was selected. Your app draws the pointer ray and shows where it's pointed. When the user clicks the input device, events are fired—select, selectStart, and selectEnd, specifically. Your app determines what was clicked and responds appropriately.


To users, the pointer ray is just a faint line between the controller and whatever they're pointing at. But your app has to draw it. That means getting the pose of the input device and drawing a line from its location to an object in AR/VR space. That process looks roughly like this:


let inputSources = xrSession.getInputSources();
for (let xrInputSource of inputSources) {
  let inputPose = xrFrame.getInputPose(xrInputSource, xrFrameOfRef);
  if (!inputPose) {
    continue;
  }
  if (inputPose.gripMatrix) {
    // Render a virtual version of the input device
    // at the correct position and orientation.
  }
  if (inputPose.pointerMatrix) {
    // Draw a ray from the gripMatrix to the pointerMatrix.
  }
}
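Selection itself is handled with event listeners on the session. A minimal sketch (handleSelection is a hypothetical app function); the start and end of a press are registered the same way:

xrSession.addEventListener('select', xrEvent => {
  // xrEvent.inputSource identifies the controller that fired the event.
  // Work out what its pointer ray was hitting and respond to it.
  handleSelection(xrEvent.inputSource);  // hypothetical app function
});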


References:

https://developers.google.com/web/updates/2018/05/welcome-to-immersive

Using 3D models with AR.js and A-Frame

At its most basic, this is quite simple to set up.


AR.js is the perfect library to get started with Augmented Reality (AR) on the browser. 

Its integration with A-Frame is what makes it extremely simple to integrate into any AR project.


We will be able to load the following formats in AR:


glTF 2.0 and glTF

OBJ

COLLADA (DAE)

PLY

JSON

FBX


glTF, OBJ and COLLADA are already supported on A-Frame and have good documentation available on the A-Frame docs.

To include the other model formats, Don McCurdy built aframe-extras (which provides a whole lot of other A-Frame components, of which the model loaders are just one), allowing you to include the other formats such as PLY, JSON, FBX and three.js formats.


Creating the Base

We first include the latest A-Frame build that we will be using in our project.


<script src="https://aframe.io/releases/0.6.1/aframe.min.js"></script>


Continue by including AR.js, which will make our A-Frame project AR-enabled.

<script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>


We then define the body


<body style="margin: 0px; overflow: hidden;">

</body>


Once the body is defined, we create an a-frame scene and define that we would like to use arjs to create an AR scene.


<a-scene embedded arjs="sourceType: webcam;">

</a-scene>


We then add a camera to the a-scene we just created. The project which we are working on will let us use multiple markers but if you would like to use just one, use the <a-marker-camera> tag instead.


<a-entity camera></a-entity>


Finally, we add a marker to the scene so that the camera displays the 3D model when the marker is in focus.


<a-marker preset="hiro">

</a-marker>



Combining all the steps, it should look like what you see below.


<script src="https://aframe.io/releases/0.6.1/aframe.min.js"></script>
<script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>

<body style="margin: 0px; overflow: hidden;">
  <a-scene embedded arjs="sourceType: webcam;">
    <a-marker preset="hiro">
    </a-marker>
    <a-entity camera></a-entity>
  </a-scene>
</body>



Now below is how we can include various models 


Below is how to include OBJ models:


<a-entity
  obj-model="obj: url(/path/to/nameOfFile.obj);
             mtl: url(/path/to/nameOfFile.mtl)">
</a-entity>



Below is how to include glTF 2.0 and glTF Models


<a-entity
  gltf-model="url(/path/to/nameOfFile.gltf)">
</a-entity>


To include glTF 2.0 models, we need to include extra loaders:

<script src="https://rawgit.com/donmccurdy/aframe-extras/master/dist/aframe-extras.loaders.min.js"></script>



<a-entity
  gltf-model-next="src: url(/path/to/nameOfFile.gltf);">
</a-entity>
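Putting it together, the model entity goes inside the marker so that it appears when the marker is detected. A minimal sketch (the model path is a placeholder, and any of the loaders above works here):

<a-scene embedded arjs="sourceType: webcam;">
  <a-marker preset="hiro">
    <!-- /path/to/nameOfFile.gltf is a placeholder path. -->
    <a-entity gltf-model="url(/path/to/nameOfFile.gltf)" scale="0.5 0.5 0.5"></a-entity>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>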






References:

https://medium.com/@akashkuttappa/using-3d-models-with-ar-js-and-a-frame-84d462efe498

Types of Memory leaks in NodeJS

A Node application is a long-running process that is bootstrapped once until the process is killed or the server restarts. It handles all incoming requests and consumes resources until these are garbage collected by V8. Leaks are the kind of resources that keep their reference in memory and do not get garbage collected. 


The 4 Types of Memory Leaks

Global resources

Closures

Caching

Promises


Preparation

We will need the excellent Clinic.js and autocannon to debug these leaks. You can use any other load testing tool you want; almost all of them will produce the same results. Clinic.js is an awesome tool developed by NearForm. It will help us do an initial diagnosis of performance issues like memory leaks and event loop delays. So, let's install these tools first:


npm i autocannon -g

npm i clinic -g


Global Resources

This is one of the most common causes of leaks in Node. Due to the nature of JavaScript as a language, it is very easy to add to global variables and resources. If these are not cleaned over time, they keep adding up and eventually crash the application. Let’s see a very simple example. Imagine this is the application’s server.js:


const http = require("http");

const requestLogs = [];
const server = http.createServer((req, res) => {
    requestLogs.push({ url: req.url, array: new Array(10000).join("*") });
    res.end(JSON.stringify(requestLogs));
});

server.listen(3000);
console.log("Server listening to port 3000. Press Ctrl+C to stop it.");


Now, if we run:


clinic doctor --on-port 'autocannon -w 300 -c 100 -d 20 localhost:3000' -- node server.js



We are doing two things at once: load testing the server with autocannon and catching trace data to analyze with Clinic.


What we see is a steady increase of memory and delay in the event loop to serve requests. This is not only adding to the heap usage but also hampering the performance of the requests. Analyzing the code simply reveals that we are adding to the requestLogs global variable with each request and we never free it up. So it keeps growing and leaking.


We can trace the leak with Chrome’s Node Inspector by taking a heap dump when the application runs for the first time, taking another heap dump after 30 seconds of load testing, and then comparing the objects allocated between these two. 
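For reference, attaching Chrome's inspector for those heap snapshots looks roughly like this (the snapshot steps themselves are done manually in DevTools):

# Start the server with the inspector enabled (listens on port 9229 by default).
node --inspect server.js

# Then open chrome://inspect in Chrome, click "inspect" under the Node target,
# switch to the Memory tab, and take one heap snapshot before and one after
# the autocannon run so the two can be compared.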


It is the global variable requestLogs that’s causing the leak. Snapshot 2 is significantly higher in memory usage than Snapshot 1. Let’s fix that:


const http = require("http");

const server = http.createServer((req, res) => {
    const requestLogs = [];
    requestLogs.push({ url: req.url, array: new Array(10000).join("*") });
    res.end(JSON.stringify(requestLogs));
});

server.listen(3000);
console.log("Server listening to port 3000. Press Ctrl+C to stop it.");


This is one solution. If you do need to persist this data, you could add external storage like databases to store the logs. And if we run the Clinic load testing again, we see everything is normal now:
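If the logs really do need to stay in the process, another option (a sketch, not from the original article) is to bound the global buffer so it cannot grow without limit:

const MAX_LOGS = 1000;  // hypothetical cap on retained entries
const requestLogs = [];

function addLog(entry) {
    requestLogs.push(entry);
    // Drop the oldest entry once the cap is exceeded so memory stays bounded.
    if (requestLogs.length > MAX_LOGS) {
        requestLogs.shift();
    }
}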


Closures



Closures are common in JavaScript, and they can cause memory leaks that are elusive in nature. 


const http = require("http");

var theThing = null;
var replaceThing = function () {
    var originalThing = theThing;
    var unused = function () {
        if (originalThing) console.log("hi");
    };
    theThing = {
        longStr: new Array(10000).join("*"),
        someMethod: function () {
            console.log(someMessage);
        },
    };
};

const server = http.createServer((req, res) => {
    replaceThing();
    res.writeHead(200);
    res.end("Hello World");
});

server.listen(3000);
console.log("Server listening to port 3000. Press Ctrl+C to stop it.");



The memory is increasing fast, and we might wonder what is wrong with the script, since theThing is overwritten with every API call. Let's take heap dumps and see the result:


When we compare the two heap dumps, we see that someMethod is kept in memory for all its invocations and is holding onto longStr, which drives the rapid increase in memory. In the code above, the someMethod closure creates an enclosing scope that holds onto the unused variable even though it is never invoked. This prevents the garbage collector from freeing originalThing. The solution is simply nullifying originalThing at the end. We are freeing the object so that the closure scope is not retained anymore:



const http = require("http");

var theThing = null;
var replaceThing = function () {
    var originalThing = theThing;
    var unused = function () {
        if (originalThing) console.log("hi");
    };
    theThing = {
        longStr: new Array(10000).join("*"),
        someMethod: function () {
            console.log(someMessage);
        },
    };
    originalThing = null;
};

const server = http.createServer((req, res) => {
    replaceThing();
    res.writeHead(200);
    res.end("Hello World");
});

server.listen(3000);
console.log("Server listening to port 3000. Press Ctrl+C to stop it.");



Now if we run our load testing along with Clinic, we see:


No leaking of closures! Nice. We have to keep an eye on closures, as they create their own scope and retain references to outer scope variables.





References:

https://medium.com/better-programming/the-4-types-of-memory-leaks-in-node-js-and-how-to-avoid-them-with-the-help-of-clinic-js-part-1-3f0c0afda268


The future of our augmented worlds

The idea of augmenting reality brings an interesting promise and could possibly be a fundamental shift in how we interact with computers.

A common definition of Augmented Reality (AR) is of a digital overlay on top of the real world. The digital overlay may consist of 3D graphics, text, video, audio and other multimedia formats.

This rests on the idea that AR only supplements reality, but what if it becomes a reality of its own? What if it enables you to switch between multiple realities: the 'real' real, the 'augmented' real and a 'parallel/virtual' real?

We already live in a world of illusions.

AR’s evolution

We have made great strides in image recognition, machine learning, 3D graphics optimization and a whole host of other technical challenges to bring this first wave of AR within hand's reach. Most of us have seen a 3D architectural model of a building in an AR environment being showcased as the pinnacle of AR's potential. This is only the beginning.


If the first wave of AR was all about overlaying an accurate representation of digital artifacts in the physical environment, the next wave would be about these digital artifacts being contextually relevant. For example, if I am reading about frog anatomy, then a first wave AR experience would be an accurate representation of the frog and its anatomy. An evolved experience would be showing me a representation that is adapted to my persona. For a layman like me, it would show a simpler model, while for a medical student it could show a detailed one. The experience could also show me an engineer’s perspective on the frog anatomy where it uses familiar terminologies to label organs and systems. The system adapts itself to my learning style and maybe uses what I learnt about frogs when teaching me about dinosaurs.

Context would be key in the next wave. Your AR experience would be defined by the information that the system collects/refers to via social media, wearables, sensors, Internet of Things (IoT), physical and internet history. This would enable a deeper sense of immersion with our machines, our environment and other humans.

Early AR experiences were centred around 3D objects being placed on printed patterns known as markers. These were mainly used for motion capture in films. From there we moved to face recognition where our faces became the target (thanks Snapchat!). Microsoft’s Kinect enabled our bodies to become the target. A lot of recent smartphone camera-based AR removes the need for a multitude of sensors. A horizontal or vertical surface becomes the target of these AR experiences. In a matter of a few decades, we moved from patterns on paper to the environment as the target for our AR experiences. Soon these environments will become intelligent and the world will be our target.


The other side

The most common ideas around AR have been centred around visual systems in smartphones or smart glasses. They still require us to experience the augmented world through these glass windows. But that’s not how you interact with the real world. In the real world, you can open these windows and touch, hear, smell or even taste what’s on the other side. The next stage of AR evolution would be enhancing and augmenting all our senses or even discovering new ones. We have already begun doing this with voice interfaces enabling us to interact with computers much more naturally. Thereby interweaving the physical and digital so seamlessly that it almost feels like magic.


References:

https://uxdesign.cc/the-future-of-our-augmented-worlds-d43d334e0118

Friday, December 11, 2020

Building Blocks of Augmented Vision

Most early and popular ideas around AR focus on augmenting human vision and technology was developed to support these ideas. The camera plays the main role in this type of Augmented Reality (AR). A camera paired with a computer (smartphone) uses computer vision (CV) to scan its surroundings and content is superimposed on the camera view. A large number of modern AR applications readily use the smartphone’s camera to show 3D objects in real space without having to use special markers. This method is sometimes called marker-less AR. However, this was not always the standard. There are a number of techniques used to augment content on the camera view.


Fiducial markers and images

Fiducial markers are black and white patterns often printed on a plane surface. The computer vision algorithm uses these markers to scan the image to place and scale the 3D object in the camera view accordingly. Earlier AR solutions regularly relied on fiducial markers. As an alternative, images too can be used instead of fiducial markers. Fiducial markers are the most accurate mechanisms for AR content creation and are regularly used in motion capture (MOCAP) in the film industry.



3D depth-sensing

With ‘You are the controller’ as its tagline, Microsoft’s Kinect was a revolutionary device for augmented reality research. It is a 3D depth-sensing camera which recognizes and maps spatial data. 3D depth-sensing was available well before the Kinect; however, the Kinect made the technology a lot more accessible. It changed the way regular computers see and augment natural environments. Depth-sensing cameras analyze and map spatial environments to place 3D objects in the camera view. A more mainstream depth-sensing camera in recent times would be the iPhone X’s front camera.



Simultaneous localization and mapping (SLAM)

For a robot or a computer to be able to move through or augment an environment, it needs to map the environment and understand its location within it. Simultaneous localization and mapping (SLAM) is a technology that enables just that. It was originally built for robots to navigate complex terrains and is even used by Google’s self-driving car. As the name suggests, SLAM enables real-time mapping of the environment to generate a 3D map with the help of a camera and a few sensors. This 3D map can be used by the computer to place multimedia content in the environment.


Point cloud

3D depth-sensing cameras like Microsoft’s Kinect and Intel’s RealSense, as well as SLAM, generate a set of data points in space known as a point cloud. Point clouds are referenced by the computer to place content in 3D environments. Once mapped to an environment, they enable the system to remember where a 3D object is placed in an environment or even at a particular GPS location.



Machine learning + Normal camera

Earlier AR methods relied on a multitude of sensors in addition to the camera. Software libraries like OpenCV, Vuforia, ARCore, ARKit, MRKit have enabled AR on small computing devices like the smartphone with surprising accuracy. These libraries use machine learning algorithms to place 3D objects in the environment and require only a digital camera for input. The frugality of these algorithms in terms of sensor requirements have largely been responsible for the ensuing excitement around AR in recent times.





References:

https://uxplanet.org/building-blocks-for-augmented-vision-cc9b6172b461

Understanding the different types of AR devices

The main types are listed below.

Heads-up display (HUD)

Heads-up displays were mainly invented for mission-critical applications like flight controllers and weapons system dashboards.

A regular HUD contains three main components: a projector unit, a viewing glass (combiner) and a computer (symbol generator). HUDs help increase situational awareness by reducing the shift of focus for pilots. Increasingly, heads-up displays have been finding their way into new automobile designs.


Helmet mounted displays

Helmet-mounted displays, which use the same underlying principles as heads-up displays, are being used in aviation and other industries.

Holographic displays

Popularized by the Star Wars series, Minority Report and, more recently, the Iron Man series, these types of displays use light diffraction to generate three-dimensional forms of objects in real space. The fact that holographic displays do not require users to wear any gear to view them is one of their greatest advantages. These types of displays have long been in the realm of science fiction and have recently started gaining traction with products like Looking Glass and Holovect.

Smart glasses

These are glasses that augment your vision. Smart glasses are of two types:


Optical see-through

In optical see-through glasses, the user views reality directly through optical elements such as holographic waveguides and other systems that enable a graphical overlay on the real world.


Microsoft’s HoloLens, the Magic Leap One and Google Glass are recent examples of optical see-through smart glasses.


Video see-through

With these types of smart glasses, the user views reality that is first captured by one or two cameras mounted on the display. These camera views are then combined with computer-generated imagery for the user to see. The HTC Vive VR headset has an inbuilt camera which is often used for creating AR experiences on the device.


Handheld AR

Although handheld AR is a type of video see through, it deserves special mention. The rise of handheld AR is the tipping point for the technology being truly pervasive. Augmented reality libraries like ARKit, ARCore, MRKit, have enabled sophisticated computer vision algorithms to be available for anyone to use. In handheld or mobile AR, all you need is a smartphone to have access to a host of AR experiences.


A brief note on VR

Virtual Reality could be awarded the prize for the most popular oxymoron. It provides the user an immersive experience within a 3D computer-generated model or simulation. VR is generally accomplished with the use of head-mounted displays (HMDs) or fully immersive projection-based systems. Google’s introduction of the smartphone-based VR experience using Cardboard, biconvex plastic lenses, and a few other cheap components was a game changer. Anyone with $15 and a smartphone had access to VR experiences previously reserved for tech elites in industry and academia. It has been regarded as one of the most intelligent and high-impact moves that enabled wider adoption and awareness of the technology.


Despite the hype around Virtual reality, it is almost certain that Augmented reality will be the more widely adopted one. Like the smartphone, it makes technology a part of the daily life of users without obstructing reality.


References

https://uxdesign.cc/augmented-reality-device-types-a7668b15bf7a