Thursday, August 27, 2020

Docker - Memory profiling

How to know how much memory is used by Docker containers

A quick and easy utility is docker stats; it gives output like the following:


$ docker stats


CONTAINER ID        NAME                                    CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS

b95a83497c91        awesome_brattain                        0.28%               5.629MiB / 1.952GiB   0.28%               916B / 0B           147kB / 0B          9

67b2525d8ad1        foobar                                  0.00%               1.727MiB / 1.952GiB   0.09%               2.48kB / 0B         4.11MB / 0B         2

e5c383697914        test-1951.1.kay7x1lh1twk9c0oig50sd5tr   0.00%               196KiB / 1.952GiB     0.01%               71.2kB / 0B         770kB / 0B          1

4bda148efbc0        random.1.vnc8on831idyr42slu578u3cr      0.00%               1.672MiB / 1.952GiB   0.08%               110kB / 0B          578kB / 0B          2
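
If you only need a one-off snapshot, or specific columns, docker stats also supports --no-stream and Go-template formatting. A small sketch (the columns shown are just a subset of what is available):

$ docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
NAME                MEM USAGE / LIMIT     MEM %
awesome_brattain    5.629MiB / 1.952GiB   0.28%
foobar              1.727MiB / 1.952GiB   0.09%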

What is a Reverse Proxy?

A reverse proxy is a server that sits in front of web servers and forwards client (e.g. web browser) requests to those web servers. Reverse proxies are typically implemented to help increase security, performance, and reliability. In order to better understand how a reverse proxy works and the benefits it can provide, let’s first define what a proxy server is.


A reverse proxy is a server that sits in front of one or more web servers, intercepting requests from clients. This is different from a forward proxy, where the proxy sits in front of the clients. With a reverse proxy, when clients send requests to the origin server of a website, those requests are intercepted at the network edge by the reverse proxy server. The reverse proxy server will then send requests to and receive responses from the origin server.


The difference between a forward and reverse proxy is subtle but important. A simplified way to sum it up would be to say that a forward proxy sits in front of a client and ensures that no origin server ever communicates directly with that specific client. On the other hand, a reverse proxy sits in front of an origin server and ensures that no client ever communicates directly with that origin server.


Once again, let’s illustrate by naming the computers involved:


D: Any number of users’ home computers

E: This is a reverse proxy server

F: One or more origin servers



Below we outline some of the benefits of a reverse proxy:


Load balancing - A popular website that gets millions of users every day may not be able to handle all of its incoming site traffic with a single origin server. Instead, the site can be distributed among a pool of different servers, all handling requests for the same site. In this case, a reverse proxy can provide a load balancing solution which will distribute the incoming traffic evenly among the different servers to prevent any single server from becoming overloaded. In the event that a server fails completely, other servers can step up to handle the traffic (see the nginx sketch after this list).

Protection from attacks - With a reverse proxy in place, a web site or service never needs to reveal the IP address of its origin server(s). This makes it much harder for attackers to mount a targeted attack against them, such as a DDoS attack. Instead, the attackers will only be able to target the reverse proxy, such as Cloudflare's CDN, which will have tighter security and more resources to fend off a cyber attack.

Global Server Load Balancing (GSLB) - In this form of load balancing, a website can be distributed on several servers around the globe and the reverse proxy will send clients to the server that’s geographically closest to them. This decreases the distances that requests and responses need to travel, minimizing load times.

Caching - A reverse proxy can also cache content, resulting in faster performance. For example, if a user in Paris visits a reverse-proxied website with web servers in Los Angeles, the user might actually connect to a local reverse proxy server in Paris, which will then have to communicate with an origin server in L.A. The proxy server can then cache (or temporarily save) the response data. Subsequent Parisian users who browse the site will then get the locally cached version from the Parisian reverse proxy server, resulting in much faster performance.

SSL encryption - Encrypting and decrypting SSL (or TLS) communications for each client can be computationally expensive for an origin server. A reverse proxy can be configured to decrypt all incoming requests and encrypt all outgoing responses, freeing up valuable resources on the origin server.
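
To make the load-balancing benefit concrete, here is a minimal nginx reverse-proxy sketch; the upstream name and backend addresses are invented for illustration:

upstream app_servers {
    # requests are distributed across these origin servers (round-robin by default)
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
    server 10.0.0.3:8000;
}

server {
    listen 80;

    location / {
        # clients talk to nginx; nginx talks to the origin pool
        proxy_pass http://app_servers;
    }
}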

 

References:

https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/



What’s a proxy server?

A forward proxy, often called a proxy, proxy server, or web proxy, is a server that sits in front of a group of client machines. When those computers make requests to sites and services on the Internet, the proxy server intercepts those requests and then communicates with web servers on behalf of those clients, like a middleman.

For example, let’s name 3 computers involved in a typical forward proxy communication:


A: This is a user’s home computer

B: This is a forward proxy server

C: This is a website’s origin server (where the website data is stored)


In a standard Internet communication, computer A would reach out directly to computer C, with the client sending requests to the origin server and the origin server responding to the client. When a forward proxy is in place, A will instead send requests to B, which will then forward the request to C. C will then send a response to B, which will forward the response back to A.


Why would anyone add this extra middleman to their Internet activity? There are a few reasons one might want to use a forward proxy:


To avoid state or institutional browsing restrictions - Some governments, schools, and other organizations use firewalls to give their users access to a limited version of the Internet. A forward proxy can be used to get around these restrictions, as it lets the user connect to the proxy rather than directly to the sites they are visiting.

To block access to certain content - Conversely, proxies can also be set up to block a group of users from accessing certain sites. For example, a school network might be configured to connect to the web through a proxy which enables content filtering rules, refusing to forward responses from Facebook and other social media sites.

To protect their identity online - In some cases, regular Internet users simply desire increased anonymity online, but in other cases, Internet users live in places where the government can impose serious consequences to political dissidents. Criticizing the government in a web forum or on social media can lead to fines or imprisonment for these users. If one of these dissidents uses a forward proxy to connect to a website where they post politically sensitive comments, the IP address used to post the comments will be harder to trace back to the dissident. Only the IP address of the proxy server will be visible.


References:

https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/



Google Chrome Password Prompt


I am getting a popup message saying "A data breach on a site or app exposed your password."


Just to reiterate: neither Google nor Chrome has been hacked or breached. Please read below to understand what this message means for you.


Confirming this is a genuine message from Chrome. When you type your credentials into a website, Chrome will now warn you if your username and password have been compromised in a data breach on some site or app. It will suggest that you change them everywhere they were used. 

 

Google first introduced this technology early this year as the Password Checkup extension. In October it became a part of the Password Checkup in your Google Account, where you can conduct a scan of your saved passwords anytime. And now it has evolved to offer warnings as you browse the web in Chrome. 

 

You can control this feature in Chrome Settings under Sync and Google Services. For now, it is being gradually rolled out for everyone signed in to Chrome as a part of Google's Safe Browsing protections.

 

Username/email and password combination

Firstly, it does not matter on which website you see this new warning. The new message is a warning about the username/email and password combination that you just entered. That combination has been compromised in a breach of a website/app. What that actually means is you need to change your password on all websites/apps where you are using the same username/email and password combination.


Data breach source

Today, the warning does not show the source of the breach or what website the data was obtained from. It may be that Google does not store that and only stores the combinations (I suspect this to be the case.) Remember, you need to change your password on all websites/apps where you have used that same username/email and password combination anyway so where the breach came from only helps you so much.



References:

https://support.google.com/chrome/thread/23534509?hl=en

What is Green Unicorn (Gunicorn)?

Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It uses a pre-fork worker model. The Gunicorn server is broadly compatible with various web frameworks, simply implemented, light on server resources, and fairly speedy.


Installation

Here's a quick rundown on how to get started with Gunicorn. For more details read the documentation.


$ pip install gunicorn

  $ cat myapp.py

    def app(environ, start_response):

        data = b"Hello, World!\n"

        start_response("200 OK", [

            ("Content-Type", "text/plain"),

            ("Content-Length", str(len(data)))

        ])

        return iter([data])

  $ gunicorn -w 4 myapp:app

  [2014-09-10 10:22:28 +0000] [30869] [INFO] Listening at: http://127.0.0.1:8000 (30869)

  [2014-09-10 10:22:28 +0000] [30869] [INFO] Using worker: sync

  [2014-09-10 10:22:28 +0000] [30874] [INFO] Booting worker with pid: 30874

  [2014-09-10 10:22:28 +0000] [30875] [INFO] Booting worker with pid: 30875

  [2014-09-10 10:22:28 +0000] [30876] [INFO] Booting worker with pid: 30876

  [2014-09-10 10:22:28 +0000] [30877] [INFO] Booting worker with pid: 30877
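
With the workers booted, the app can be exercised from another shell (assuming the default bind address shown in the log above):

  $ curl http://127.0.0.1:8000
  Hello, World!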


Deployment

Gunicorn is a WSGI HTTP server. It is best to use Gunicorn behind an HTTP proxy server. We strongly advise you to use nginx.

Here's an example to help you get started with using nginx:

server {

    listen 80;

    server_name example.org;

    access_log  /var/log/nginx/example.log;


    location / {

        proxy_pass http://127.0.0.1:8000;

        proxy_set_header Host $host;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    }

}


Nginx is set up as a reverse proxy server in front of a Gunicorn server running on localhost port 8000.

Read the full documentation at docs.gunicorn.org



References:

https://gunicorn.org/index.html

What is Virtual Environment in Python

venv — Creation of virtual environments. The venv module provides support for creating lightweight “virtual environments” with their own site directories, optionally isolated from system site directories. Each virtual environment has its own Python binary (which matches the version of the binary that was used to create this environment) and can have its own independent set of installed Python packages in its site directories.


python3 -m venv /path/to/new/virtual/environment


Running this command creates the target directory (creating any parent directories that don’t exist already) and places a pyvenv.cfg file in it with a home key pointing to the Python installation from which the command was run (a common name for the target directory is .venv). It also creates a bin (or Scripts on Windows) subdirectory containing a copy/symlink of the Python binary/binaries (as appropriate for the platform or arguments used at environment creation time). It also creates an (initially empty) lib/pythonX.Y/site-packages subdirectory (on Windows, this is Lib\site-packages). If an existing directory is specified, it will be re-used.


Deprecated since version 3.6: pyvenv was the recommended tool for creating virtual environments for Python 3.3 and 3.4, and is deprecated in Python 3.6.


The created pyvenv.cfg file also includes the include-system-site-packages key, set to true if venv is run with the --system-site-packages option, false otherwise.


Unless the --without-pip option is given, ensurepip will be invoked to bootstrap pip into the virtual environment.


You don’t specifically need to activate an environment; activation just prepends the virtual environment’s binary directory to your path, so that “python” invokes the virtual environment’s Python interpreter and you can run installed scripts without having to use their full path. However, all scripts installed in a virtual environment should be runnable without activating it, and run with the virtual environment’s Python automatically.


You can deactivate a virtual environment by typing “deactivate” in your shell. The exact mechanism is platform-specific and is an internal implementation detail (typically a script or shell function will be used).
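
Putting these pieces together, a typical shell session looks like the sketch below; the directory name .venv and the requests package are just examples:

$ python3 -m venv .venv
$ source .venv/bin/activate        # on Windows: .venv\Scripts\activate
(.venv) $ pip install requests     # installed into .venv only
(.venv) $ deactivate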


 A virtual environment is a Python environment such that the Python interpreter, libraries and scripts installed into it are isolated from those installed in other virtual environments, and (by default) any libraries installed in a “system” Python, i.e., one which is installed as part of your operating system.

A virtual environment is a directory tree which contains Python executable files and other files which indicate that it is a virtual environment.



When a virtual environment is active (i.e., the virtual environment’s Python interpreter is running), the attributes sys.prefix and sys.exec_prefix point to the base directory of the virtual environment, whereas sys.base_prefix and sys.base_exec_prefix point to the non-virtual environment Python installation which was used to create the virtual environment. If a virtual environment is not active, then sys.prefix is the same as sys.base_prefix and sys.exec_prefix is the same as sys.base_exec_prefix (they all point to a non-virtual environment Python installation).
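
You can observe this yourself; with a virtual environment active, sys.prefix and sys.base_prefix diverge (the paths below are illustrative):

(.venv) $ python -c "import sys; print(sys.prefix); print(sys.base_prefix)"
/home/user/project/.venv
/usr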



References:

https://docs.python.org/3/library/venv.html

What is OWL - Web Ontology Language

The Web Ontology Language (OWL) is a family of knowledge representation languages for authoring ontologies. Ontologies are a formal way to describe taxonomies and classification networks, essentially defining the structure of knowledge for various domains: the nouns representing classes of objects and the verbs representing relations between the objects.

Ontologies resemble class hierarchies in object-oriented programming but there are several critical differences. Class hierarchies are meant to represent structures used in source code that evolve fairly slowly (typically monthly revisions) whereas ontologies are meant to represent information on the Internet and are expected to be evolving almost constantly. Similarly, ontologies are typically far more flexible, as they are meant to represent information on the Internet coming from all sorts of heterogeneous data sources. Class hierarchies, on the other hand, are meant to be fairly static and rely on far less diverse and more structured sources of data, such as corporate databases.


The OWL languages are characterized by formal semantics. They are built upon the World Wide Web Consortium's (W3C) XML standard for objects called the Resource Description Framework (RDF).[2] OWL and RDF have attracted significant academic, medical and commercial interest.


In October 2007,[3] a new W3C working group[4] was started to extend OWL with several new features as proposed in the OWL 1.1 member submission.[5] W3C announced the new version of OWL on 27 October 2009.[6] This new version, called OWL 2, soon found its way into semantic editors such as Protégé and semantic reasoners such as Pellet,[7] RacerPro,[8] FaCT++[9][10] and HermiT.[11]


The OWL family contains many species, serializations, syntaxes and specifications with similar names. OWL and OWL2 are used to refer to the 2004 and 2009 specifications, respectively. Full species names will be used, including specification version (for example, OWL2 EL). When referring more generally, OWL Family will be used.[12][13][14]
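
To give a flavor of what an ontology looks like, here is a minimal sketch in Turtle syntax; the ex: namespace, classes, and property are invented for illustration:

@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/ontology#> .

# classes (the "nouns") arranged in a small taxonomy
ex:Animal a owl:Class .
ex:Dog    a owl:Class ;
          rdfs:subClassOf ex:Animal .
ex:Person a owl:Class .

# a relation (a "verb") between classes
ex:hasOwner a owl:ObjectProperty ;
          rdfs:domain ex:Dog ;
          rdfs:range  ex:Person .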


References:

https://en.wikipedia.org/wiki/Web_Ontology_Language


What is BDD? - Behavior Driven Development

Behavior Driven Development (BDD) is an agile software development practice – introduced by Dan North in 2006 – that encourages collaboration between everyone involved in developing software: developers, testers, and business representatives such as product owners or business analysts.


BDD aims to create a shared understanding of how an application should behave by discovering new features based on concrete examples. Key examples are then formalized with natural language following a Given/When/Then structure. 


Gherkin is the most commonly used syntax for describing examples with Given/When/Then in plain text files, called feature files.


Gherkin scenarios can be automated to validate the expected behavior. At this point, BDD tools – such as SpecFlow – come in handy. Automated acceptance tests, however, are an optional by-product of using BDD, not the sole purpose.
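
A minimal Gherkin feature file might look like this; the scenario itself is invented for illustration:

Feature: Account withdrawal

  Scenario: Withdrawal within the available balance
    Given the account balance is 100
    When the account holder withdraws 40
    Then the remaining balance should be 60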


BDD is an agile software engineering practice that supports feature discovery and encourages collaboration among developers, testers and business participants in a software development team. BDD is facilitated through examples expressed in natural-language constructs expressing the expected system behavior, and automation validating these examples as acceptance tests.



Why BDD?

BDD describes application behavior from a user’s point of view. Overall, the main goal of BDD is to improve the collaboration between all stakeholders involved in developing software and form a shared understanding among them.


Reduced Rework / Shared Understanding: Concrete examples of expected system behavior foster a shared understanding by being specific enough for developers and testers while still making sense to business participants. Think of a BDD scenario like a bug report that is written in natural-language and posted before the system has been implemented. It describes the steps to reproduce (Given/When) and expected outcome (Then) without the symptoms (unwanted behavior) that usually triggered the bug report.


Faster Feedback: Example scenarios describe focused, granular changes to the system under development. This enables teams to evolve the system in small steps while keeping it potentially shippable, allowing for shorter release cycles and faster feedback.


Effectiveness: Since you extend the system in small increments, you can validate business value earlier and avoid unnecessary features. This prevents gold-plating and makes the overall implementation of the system more effective.


Lower Cost: Driving automated acceptance tests through test-first BDD scenarios is much cheaper than post-automating acceptance tests. Teams practicing ATDD (Acceptance Test Driven Development) use their shared understanding to develop the feature and the test automation, while teams separating development and test automation need to interpret and fine-tune scenarios multiple times. This causes extra effort and can lead to misaligned interpretations. 


Single Source of Truth: Specification through examples that is guarded with automated acceptance tests is an always-up-to-date description of the current system behavior (“Living Documentation”). This provides a single source of truth for the team, business stakeholders and important external parties like regulatory authorities or partners relying on the system specification.


User Satisfaction: By focusing on the needs of the business, you get satisfied users, which translates to customer loyalty and better business outcomes. The higher degree of test automation frees more time for manual exploratory testing and yields fewer errors in production, especially when shipping new versions at a high cadence. More frequent, high quality releases enable rapid response to evolving user needs and dynamic market pressures.


Code Quality: Using Acceptance Test Driven Development has a positive impact on code quality: it promotes emergent design that ensures loosely-coupled, highly-cohesive architecture and avoids over-engineering and technical debt. This ensures that a system stays testable and maintainable, and that it can be quickly changed to support new requirements without sacrificing stability.


References:

https://specflow.org/bdd/


Sunday, August 23, 2020

React Router

So you want to do routing with your Redux app. You can use it with React Router. Redux will be the source of truth for your data and React Router will be the source of truth for your URL. In most of the cases, it is fine to have them separate unless you need to time travel and rewind actions that trigger a URL change.

Connecting React Router with Redux App

First we will need to import <Router /> and <Route /> from React Router. Here's how to do it:

import { BrowserRouter as Router, Route } from 'react-router-dom'

In a React app, usually you would wrap <Route /> in <Router /> so that when the URL changes, <Router /> will match a branch of its routes, and render their configured components. <Route /> is used to declaratively map routes to your application's component hierarchy. You would declare in path the path used in the URL and in component the single component to be rendered when the route matches the URL.

const Root = () => (

  <Router>

    <Route path="/" component={App} />

  </Router>

)

However, in our Redux App we will still need <Provider />. <Provider /> is the higher-order component provided by React Redux that lets you bind Redux to React.

We will then import the <Provider /> from React Redux:

import { Provider } from 'react-redux'

We will wrap <Router /> in <Provider /> so that route handlers can get access to the store.


const Root = ({ store }) => (

  <Provider store={store}>

    <Router>

      <Route path="/" component={App} />

    </Router>

  </Provider>

)


Now the <App /> component will be rendered if the URL matches '/'. Additionally, we will add the optional :filter? parameter to /, because we will need it further on when we try to read the parameter :filter from the URL.


<Route path="/:filter?" component={App} />

Navigating with React Router

React Router comes with a <Link /> component that lets you navigate around your application. If you want to add some styles, react-router-dom has another special <Link /> called <NavLink />, which accepts styling props. For instance, the activeStyle property lets us apply a style on the active state.

In our example, we can wrap <NavLink /> with a new container component <FilterLink /> so as to dynamically change the URL, as sketched below.
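
A sketch of such a <FilterLink /> container, along the lines of the Redux todo example (the SHOW_ALL filter value and the activeStyle are from that example):

import React from 'react'
import { NavLink } from 'react-router-dom'

const FilterLink = ({ filter, children }) => (
  <NavLink
    to={filter === 'SHOW_ALL' ? '/' : `/${filter}`}
    activeStyle={{
      textDecoration: 'none',
      color: 'black'
    }}
  >
    {children}
  </NavLink>
)

export default FilterLink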

References:

https://redux.js.org/advanced/usage-with-react-router

Akka vs Node.js: Which is better?

Akka vs Node.js: What are the differences?


What is Akka? Build powerful concurrent & distributed applications more easily. Akka is a toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM.


What is Node.js? A platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.


Akka belongs to the "Concurrency Frameworks" category of the tech stack, while Node.js can be primarily classified under "Frameworks (Full Stack)".


"Great concurrency model" is the top reason why over 22 developers like Akka, while over 1320 developers mention "Npm" as the leading cause for choosing Node.js.


Akka and Node.js are both open source tools. It seems that Node.js with 35.5K GitHub stars and 7.78K forks on GitHub has more adoption than Akka with 9.99K GitHub stars and 3.03K GitHub forks.


According to the StackShare community, Node.js has a broader approval, being mentioned in 4055 company stacks & 3897 developer stacks; compared to Akka, which is listed in 75 company stacks and 54 developer stacks.


References:

https://stackshare.io/stackups/akka-vs-nodejs


What is NetFlow

NetFlow is a network protocol developed by Cisco for collecting IP traffic information and monitoring network traffic. By analyzing flow data, a picture of network traffic flow and volume can be built. Using a NetFlow collector and analyzer, you can see where network traffic is coming from and going to and how much traffic is being generated.

What is a NetFlow Collector?

Routers that have the NetFlow feature enabled generate NetFlow records. These records are exported from the router and collected using a NetFlow collector. The NetFlow collector then processes the data to perform traffic analysis and present it in a user-friendly format. NetFlow collectors can take the form of hardware-based collectors (probes) or software-based collectors.
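
As a rough sketch, enabling classic NetFlow export on a Cisco IOS router looks like the following; the interface name, collector address, and port are placeholders:

router(config)# interface GigabitEthernet0/1
router(config-if)# ip flow ingress
router(config-if)# exit
router(config)# ip flow-export destination 192.168.1.50 2055
router(config)# ip flow-export version 9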

Other manufacturers' standards are:

Juniper® (Jflow)

3Com/HP®, Dell®, and Netgear® (sFlow)

Huawei® (NetStream)

Alcatel-Lucent® (Cflow)

Ericsson® (Rflow)

References: 

https://www.solarwinds.com/netflow-traffic-analyzer/use-cases/what-is-netflow


Three.js Smallest Hello Cube Setup

Loading three.js:

<script type="module"

import * as THREE from './resources/threejs/r119/build/three.module.js';

</script

It's important you put type="module" in the script tag. This enables us to use the import keyword to load three.js. There are other ways to load three.js but as of r106 using modules is the recommended way. Modules have the advantage that they can easily import other modules they need. That saves us from having to manually load extra scripts they are dependent on.

Next we need a <canvas> tag, like so:

<body>

  <canvas id="c"></canvas>

</body>

We will ask three.js to draw into that canvas so we need to look it up.

<script type="module">

import * as THREE from './resources/threejs/r119/build/three.module.js';

 function main() {

  const canvas = document.querySelector('#c');

  const renderer = new THREE.WebGLRenderer({canvas});

  ...

</script>

After we look up the canvas we create a WebGLRenderer. The renderer is the thing responsible for actually taking all the data you provide and rendering it to the canvas. In the past there have been other renderers like CSSRenderer, a CanvasRenderer and in the future there may be a WebGL2Renderer or WebGPURenderer. For now there's the WebGLRenderer that uses WebGL to render 3D to the canvas.


Note there are some esoteric details here. If you don't pass a canvas into three.js it will create one for you, but then you have to add it to your document. Where to add it may change depending on your use case, and you'd have to change your code, so I find that passing a canvas to three.js feels a little more flexible. I can put the canvas anywhere and the code will find it, whereas if I had code to insert the canvas into the document I'd likely have to change that code if my use case changed.


Next up we need a camera. We'll create a PerspectiveCamera.


const fov = 75;

const aspect = 2;  // the canvas default

const near = 0.1;

const far = 5;

const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);


fov is short for field of view. In this case 75 degrees in the vertical dimension. Note that most angles in three.js are in radians but for some reason the perspective camera takes degrees.


aspect is the display aspect of the canvas. We'll go over the details in another article but by default a canvas is 300x150 pixels which makes the aspect 300/150 or 2.


near and far represent the space in front of the camera that will be rendered. Anything before that range or after that range will be clipped (not drawn).


Those 4 settings define a "frustum". A frustum is the name of a 3d shape that is like a pyramid with the tip sliced off. In other words think of the word "frustum" as another 3D shape like sphere, cube, prism, frustum.


The height of the near and far planes are determined by the field of view. The width of both planes is determined by the field of view and the aspect.


Anything inside the defined frustum will be drawn. Anything outside will not.


The camera defaults to looking down the -Z axis with +Y up. We'll put our cube at the origin so we need to move the camera back a little from the origin in order to see anything.


camera.position.z = 2;
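
To complete the smallest Hello Cube, the tutorial then creates a scene, a box geometry, and a material, makes a mesh from them, and renders; a condensed sketch of those steps:

const scene = new THREE.Scene();

const geometry = new THREE.BoxGeometry(1, 1, 1);                  // width, height, depth
const material = new THREE.MeshBasicMaterial({color: 0x44aa88});  // basic material needs no lights
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);

renderer.render(scene, camera);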



References:

https://threejsfundamentals.org/threejs/lessons/threejs-fundamentals.html

Threejs Basics Part 1

WebGL is a very low-level system that only draws points, lines, and triangles.

To do anything useful with WebGL generally requires quite a bit of code and that is where three.js comes in. It handles stuff like scenes, lights, shadows, materials, textures, 3d math, all things that you'd have to write yourself if you were to use WebGL directly.

Most browsers that support three.js are auto-updated, so most users should be able to run this code. For 3D, one of the most common first things to do is to make a 3D cube. So let's start with "Hello Cube!"

A three.js app requires you to create a bunch of objects and connect them together. The original article illustrates the structure of a small three.js app with a diagram; its main objects are described below.

There is a Renderer. This is arguably the main object of three.js. You pass a Scene and a Camera to a Renderer and it renders (draws) the portion of the 3D scene that is inside the frustum of the camera as a 2D image to a canvas.

There is a scenegraph which is a tree like structure, consisting of various objects like a Scene object, multiple Mesh objects, Light objects, Group, Object3D, and Camera objects. A Scene object defines the root of the scenegraph and contains properties like the background color and fog. These objects define a hierarchical parent/child tree like structure and represent where objects appear and how they are oriented. Children are positioned and oriented relative to their parent. For example the wheels on a car might be children of the car so that moving and orienting the car's object automatically moves the wheels.

Note that in the article's diagram the Camera is half in, half out of the scenegraph. This is to represent that in three.js, unlike the other objects, a Camera does not have to be in the scenegraph to function. Just like other objects, a Camera, as a child of some other object, will move and orient relative to its parent object.

Mesh objects represent drawing a specific Geometry with a specific Material. Both Material objects and Geometry objects can be used by multiple Mesh objects. For example, to draw 2 blue cubes in different locations we would need 2 Mesh objects to represent the position and orientation of each cube. We would only need 1 Geometry to hold the vertex data for a cube and we would only need 1 Material to specify the color blue. Both Mesh objects could reference the same Geometry object and the same Material object, as in the sketch below.
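
For example, a sketch of the two blue cubes sharing one Geometry and one Material (this assumes a scene, renderer, and camera as in the Hello Cube setup above):

const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshBasicMaterial({color: 0x4444ff});  // one shade of blue
const cube1 = new THREE.Mesh(geometry, material);
const cube2 = new THREE.Mesh(geometry, material);
cube1.position.x = -2;   // each Mesh has its own position...
cube2.position.x = 2;    // ...while sharing vertex data and color
scene.add(cube1);
scene.add(cube2);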

Geometry objects represent the vertex data of some piece of geometry like a sphere, cube, plane, dog, cat, human, tree, building, etc... Three.js provides many kinds of built in geometry primitives. You can also create custom geometry as well as load geometry from files.

Material objects represent the surface properties used to draw geometry including things like the color to use and how shiny it is. A Material can also reference one or more Texture objects which can be used, for example, to wrap an image onto the surface of a geometry.

Texture objects generally represent images either loaded from image files, generated from a canvas or rendered from another scene.

Light objects represent different kinds of lights.

References:

https://threejsfundamentals.org/threejs/lessons/threejs-fundamentals.html


Friday, August 21, 2020

Introducing vh, vw, and vmin

The vh unit stands for viewport height, vw for viewport width, and vmin represents whichever of vh or vw is the shorter length.

The values used can be somewhat confusing if you’ve not used these units before, as 1vh represents 1% of the current viewport (the content area of the browser window) height, rather than 100% as you may expect. Therefore if you want an element to be the full height of your viewport you should set it to 100vh. As you would expect, vw works exactly the same way as vh units.


If you have a widescreen monitor and your browser window is set to full screen, the width would be wider than the height. In this case the vmin unit would be the same as the vh unit.


The viewport units are dynamic rather than static, so if you resize the browser window, the computed value of the units changes too. If, for example, your browser window is 1000px wide, an element with a width of 10vw would be 100px wide. If you shrink the browser window to only 100px wide, the element would shrink with it to 10px wide.
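
A short sketch of these units in CSS (the class names are made up):

.hero {
  height: 100vh;   /* full viewport height */
  width: 50vw;     /* half the viewport width */
}

.square {
  width: 50vmin;   /* half of whichever viewport dimension is shorter */
  height: 50vmin;  /* so the element stays square at any window size */
}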



References:

https://generatedcontent.org/post/21279324555/viewportunits