Thursday, August 26, 2021

Angular - What are entryComponents

entryComponents is for components that are added dynamically using ViewContainerRef.createComponent(). Adding them to entryComponents tells the offline template compiler to compile them and create factories for them.

The components registered in route configurations are added automatically to entryComponents as well because router-outlet also uses ViewContainerRef.createComponent() to add routed components to the DOM.

The offline template compiler (OTC) only builds components that are actually used. If components aren't used in templates directly, the OTC can't know whether they need to be compiled. With entryComponents you can tell the OTC to also compile these components so they are available at runtime.


If you don't list a dynamically added component in entryComponents, you'll get an error message about a missing factory because Angular won't have created one.


A Bit of Background about entryComponent


An entry component is any component Angular loads imperatively. You can declare an entry component by bootstrapping it in an NgModule or by including it in route definitions.


@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpClientModule,
    AppRoutingModule
  ],
  providers: [],
  bootstrap: [AppComponent] // bootstrapped entry component
})
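Beyond the bootstrap case, the typical use is declaring a dynamically created component. Below is a minimal sketch for pre-Ivy Angular (View Engine), where the missing-factory error occurs; DialogComponent and HostComponent are hypothetical names, and with Ivy (Angular 9+) the entryComponents array is no longer required.


import { Component, ComponentFactoryResolver, NgModule, ViewChild, ViewContainerRef } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

// the component we want to create at runtime
@Component({ selector: 'app-dialog', template: '<p>Created dynamically</p>' })
export class DialogComponent { }

// a host component with a placeholder to create it into
@Component({ selector: 'app-host', template: '<ng-container #host></ng-container>' })
export class HostComponent {
  @ViewChild('host', { read: ViewContainerRef }) host: ViewContainerRef;

  constructor(private resolver: ComponentFactoryResolver) { }

  openDialog() {
    // without DialogComponent in entryComponents this throws "No component factory found for DialogComponent"
    const factory = this.resolver.resolveComponentFactory(DialogComponent);
    this.host.createComponent(factory);
  }
}

@NgModule({
  declarations: [HostComponent, DialogComponent],
  imports: [BrowserModule],
  entryComponents: [DialogComponent], // compiled even though it never appears in a template
  bootstrap: [HostComponent]
})
export class AppModule { }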



References:

https://stackoverflow.com/questions/39756192/what-is-entrycomponents-in-angular-ngmodule





Friday, August 13, 2021

How to hide computer name in the terminal

Open the preferences in the terminal (top right)

Then go into the shell tab

Then copy/paste the command export PS1="\W \$"; clear;

Then restart the terminal and it should work.


References:

https://apple.stackexchange.com/questions/34910/how-to-hide-computer-name-and-user-name-in-terminal-command-prompt

Wednesday, August 11, 2021

Node Server too busy npm

toobusy polls the node.js event loop and keeps track of "lag", which is how long requests wait in node's event queue to be processed. When lag crosses a threshold, toobusy tells you that you're too busy. At this point you can stop request processing early (before you spend too much time on them and compound the problem), and return a "Server Too Busy" response. This allows your server to stay responsive under extreme load and continue serving as many requests as possible.



var toobusy = require('toobusy-js'),
    express = require('express');

var app = express();

// middleware which blocks requests when we're too busy
app.use(function(req, res, next) {
  if (toobusy()) {
    res.status(503).send("I'm busy right now, sorry.");
  } else {
    next();
  }
});

app.get('/', function(req, res) {
  // processing the request requires some work!
  var i = 0;
  while (i < 1e5) i++;
  res.send("I counted to " + i);
});

var server = app.listen(3000);

process.on('SIGINT', function() {
  server.close();
  // calling .shutdown allows your process to exit normally
  toobusy.shutdown();
  process.exit();
});
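The lag threshold and polling interval can be tuned. A small sketch based on the options described in the toobusy-js README (the values here are illustrative):


var toobusy = require('toobusy-js');

// maximum event-loop lag (in ms) before toobusy() starts returning true
toobusy.maxLag(200);

// how often (in ms) to poll the event loop for lag
toobusy.interval(250);

// the current lag can also be inspected directly
console.log('current lag: ' + toobusy.lag() + 'ms');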


References 

https://github.com/STRML/node-toobusy 

Mongo DB Encryption at rest

 Encryption at rest, when used in conjunction with transport encryption and good security policies that protect relevant accounts, passwords, and encryption keys, can help ensure compliance with security and privacy standards, including HIPAA, PCI-DSS, and FERPA.


Encrypted Storage Engine

Available in MongoDB Enterprise only, and only for the WiredTiger storage engine.


MongoDB Enterprise 3.2 introduces a native encryption option for the WiredTiger storage engine. This feature allows MongoDB to encrypt data files such that only parties with the decryption key can decode and read the data.



If encryption is enabled, the default encryption mode that MongoDB Enterprise uses is the AES256-CBC (or 256-bit Advanced Encryption Standard in Cipher Block Chaining mode) via OpenSSL. AES-256 uses a symmetric key; i.e. the same key to encrypt and decrypt text. MongoDB Enterprise for Linux also supports authenticated encryption AES256-GCM (or 256-bit Advanced Encryption Standard in Galois/Counter Mode). FIPS mode encryption is also available.


AES256-GCM and Filesystem Backups


AES256-GCM requires that every process use a unique counter block value with the key.


For encrypted storage engines configured with the AES256-GCM cipher:


Restoring from Hot Backup

Starting in 4.2, if you restore from files taken via "hot" backup (i.e. the mongod is running), MongoDB can detect "dirty" keys on startup and automatically rollover the database key to avoid IV (Initialization Vector) reuse.

Restoring from Cold Backup

However, if you restore from files taken via "cold" backup (i.e. the mongod is not running), MongoDB cannot detect "dirty" keys on startup, and reuse of IV voids confidentiality and integrity guarantees.


Starting in 4.2, to avoid the reuse of the keys after restoring from a cold filesystem snapshot, MongoDB adds a new command-line option --eseDatabaseKeyRollover. When started with the --eseDatabaseKeyRollover option, the mongod instance rolls over the database keys configured with AES256-GCM cipher and exits.


In general, if using filesystem based backups for MongoDB Enterprise 4.2+, use the "hot" backup feature, if possible.

For MongoDB Enterprise versions 4.0 and earlier, if you use AES256-GCM encryption mode, do not make copies of your data files or restore from filesystem snapshots ("hot" or "cold").



The data encryption process includes:


Generating a master key.

Generating keys for each database.

Encrypting data with the database keys.

Encrypting the database keys with the master key.



The encryption occurs transparently in the storage layer; i.e. all data files are fully encrypted from a filesystem perspective, and data only exists in an unencrypted state in memory and during transmission.


Key Management


The database keys are internal to the server and are only paged to disk in an encrypted format. MongoDB never pages the master key to disk under any circumstances.



Only the master key is external to the server (i.e. kept separate from the data and the database keys), and requires external management. To manage the master key, MongoDB's encrypted storage engine supports two key management options:


Integration with a third party key management appliance via the Key Management Interoperability Protocol (KMIP). Recommended

Local key management via a keyfile.



Using a key manager meets regulatory key management guidelines, such as HIPAA, PCI-DSS, and FERPA, and is recommended over the local key management.



Your key manager must support the KMIP communication protocol.

To authenticate MongoDB to a KMIP server, you must have a valid certificate issued by the key management appliance.


Encrypt Using a New Key

To create a new key, connect mongod to the key manager by starting mongod with the following options:



mongod --enableEncryption --kmipServerName <KMIP Server HostName> \

  --kmipPort <KMIP server port> --kmipServerCAFile ca.pem \

  --kmipClientCertificateFile client.pem



When connecting to the KMIP server, the mongod verifies that the specified --kmipServerName matches the Subject Alternative Name SAN (or, if SAN is not present, the Common Name CN) in the certificate presented by the KMIP server. [1] If SAN is present, mongod does not match against the CN. If the hostname does not match the SAN (or CN), the mongod will fail to connect.


To verify that the key creation and usage were successful, check the log file. If successful, the process will log the following messages:


[initandlisten] Created KMIP key with id: <UID>

[initandlisten] Encryption key manager initialized using master key with id: <UID>



Local Key Management

Using the keyfile method does not meet most regulatory key management guidelines and requires users to securely manage their own keys.

The safe management of the keyfile is critical.


To encrypt using a keyfile, you must have a base64 encoded keyfile that contains a single 16 or 32 character string. The keyfile must only be accessible by the owner of the mongod process.


Create the base64 encoded keyfile with the 16 or 32 character string. You can generate the encoded keyfile using any method you prefer. For example,



openssl rand -base64 32 > mongodb-keyfile


Update the file permissions.

chmod 600 mongodb-keyfile



To use the key file, start mongod with the following options:

--enableEncryption,

--encryptionKeyFile <path to keyfile>,


mongod --enableEncryption --encryptionKeyFile  mongodb-keyfile



References:

https://docs.mongodb.com/manual/core/security-encryption-at-rest/


Tuesday, August 10, 2021

Setting up nginx with Node JS


NGINX is a high-performance HTTP server as well as a reverse proxy.

Unlike traditional servers, NGINX follows an event-driven, asynchronous architecture. As a result, the memory footprint is low and performance is high. If you’re running a Node.js-based web app, you should seriously consider using NGINX as a reverse proxy.



NGINX can be very efficient in serving static assets. For all other requests, it will talk to your Node.js back end and send the response to the client



To install nginx on AWS, yum can be used. 


sudo yum install nginx


Once nginx is installed, configuration can be done in the file below.


sudo vi /etc/nginx/nginx.conf


To start nginx:


sudo nginx

To reload nginx:

nginx -s reload

At this point, if you type in the IP or domain, you should be able to see the default nginx landing page.


Now it is time to forward requests to the application server. To do that, edit /etc/nginx/nginx.conf.


Below is a sample configuration for SSL and for forwarding all requests on port 80 to port 1337, where the application server is listening.


Let's Encrypt may have issued the certificate as a separate certificate and chain, while nginx takes a certificate and a key. So create a new combined certificate like this:


cat cert.pem fullchain.pem >newcert.pem



     server {
        listen       80;
        listen       443 ssl http2 default_server;
        listen       [::]:443 ssl http2 default_server;
        server_name  _;
        root         /usr/share/nginx/html;

        ssl_certificate "/etc/nginx/ssl/newcert.pem";
        ssl_certificate_key "/etc/nginx/ssl/privkey.pem";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
           proxy_pass http://localhost:1337;
           proxy_http_version 1.1;
           proxy_set_header Upgrade $http_upgrade;
           proxy_set_header Connection 'upgrade';
           proxy_set_header Host $host;
           proxy_cache_bypass $http_upgrade;
        }

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
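On the Node.js side, the upstream application just needs to listen on the port that proxy_pass points at. A minimal sketch (Express assumed; port 1337 matches the configuration above):


var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello from the Node.js app behind nginx');
});

// must match the port used in proxy_pass above
app.listen(1337);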


                                                       



references

https://www.sitepoint.com/configuring-nginx-ssl-node-js/

https://medium.com/@mertcal/install-nginx-on-ec2-instance-in-30-seconds-e714d427b01b



Angular - What are @Input and @Output

Both are used to pass data from one component to another component. In other words, you can pass different types of data from parent to child and from child to parent components.


Or

 

In a simple way, to exchange data between two components.

 


@Input is a decorator that marks a property as an input. @Input is used to define an input property, to achieve component property binding. The @Input decorator is used to pass data (property binding) from a parent to a child component. A component property should be annotated with the @Input decorator to act as an input property.


To create the parent-child relation, place the student component's selector inside the app component's template (app.component.html).

 


Open app.component.html. Inside this file, we keep an instance of the student component:


<div> <app-student></app-student> </div>



student.component.ts:

import { Component, Input, OnInit } from '@angular/core';

@Component({
  selector: 'app-student',
  templateUrl: './student.component.html',
  styleUrls: ['./student.component.css']
})
export class StudentComponent implements OnInit {
  @Input() myinputMsg: string;

  constructor() { }

  ngOnInit() {
    console.log(this.myinputMsg);
  }
}




app.component.html:

<div>
  <app-student [myinputMsg]="myInputMessage"></app-student>
</div>
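For the binding above to work, the parent component needs a myInputMessage property. A minimal sketch of app.component.ts (the property value is illustrative):


import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  // value passed down to the child via [myinputMsg]
  myInputMessage = 'I am the parent component.';
}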





@Output Decorator

 

The @Output decorator is used to pass data from a child to a parent component. @Output binds a property of a component to send data from one component to the calling component. The bound property is of the Angular EventEmitter type.

 

To transfer the data from child to parent component, we use @Output decorator.

 

Let's open the child component's .ts file (student.component.ts).

 

To use the @Output decorator, we have to import two things: Output and EventEmitter.

 

EventEmitter


Used in components with the @Output decorator to emit custom events synchronously or asynchronously, and to register handlers for those events by subscribing to an instance.


import { Component, Input, Output,EventEmitter, OnInit } from '@angular/core'; 


Now, create a variable with the @Output decorator:


@Output() myOutput:EventEmitter<string>= new EventEmitter();  


Here, in place of string, we can use other data types.

After that, create a variable to store the message that will be passed to the parent component.


outputMessage:string="I am child component."  


import { Component, Input, Output, EventEmitter, OnInit } from '@angular/core';

@Component({
  selector: 'app-student',
  templateUrl: './student.component.html',
  styleUrls: ['./student.component.css']
})
export class StudentComponent implements OnInit {
  @Input() myinputMsg: string;
  @Output() myOutput: EventEmitter<string> = new EventEmitter();
  outputMessage: string = "I am child component.";

  constructor() { }

  ngOnInit() {
    console.log(this.myinputMsg);
  }
}



student.component.html:


<button (click)="sendValues()"> Send Data </button>

And in student.component.ts:

sendValues() {
  this.myOutput.emit(this.outputMessage);
}
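On the parent side, the emitted event still needs a listener. A minimal sketch (the handler name is illustrative):

app.component.html:

<app-student [myinputMsg]="myInputMessage" (myOutput)="getChildMessage($event)"></app-student>

app.component.ts:

getChildMessage(message: string) {
  console.log(message); // logs "I am child component."
}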


References 

https://www.c-sharpcorner.com/article/input-and-output-decorator-in-angular/


Angular Component Selectors

A decorator is used to mark a class as a component in Angular, and it provides metadata that defines what kind of properties can be used by that component.


A component takes its metadata as an object, and that object contains key-value pairs like selector, styles, or styleUrls. All these properties make a component a complete reusable chunk for an Angular application.


A selector is one of the properties of the object that we use along with the component configuration.


A selector is used to identify each component uniquely into the component tree, and it also defines how the current component is represented in the HTML DOM.


When we create a new component using Angular CLI, the newly created component looks like this.


import { Component } from '@angular/core';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent  {
  name = 'This is simple component';
}


Here in app.component.ts, notice that we have one property called selector along with the unique name used to identify the app component in the HTML DOM tree once it is rendered into the browser.


 <my-app> is rendered initially because the app component is the root component for our application. If we have any child components, then they all are rendered inside the parent selector.


Basically, the selector property of the component is just a string that is used to identify the component or an element in the DOM.


By default, the selector name may have an app as a prefix at the time of component creation, but we can update it. Keep in mind that two or more component selectors must not be the same.



Selector as an Attribute



We have looked at an example of how to use a selector as an element name, but we are not limited to that. We can also use a selector as an attribute of an element, just like we do along with other HTML elements.

@Component({
  selector: '[my-app]',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})



Selector as a Class



@Component({
  selector: '.my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent  {
  name = 'Angular';
}
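With an attribute or class selector, the component attaches to an existing element rather than a custom tag. A small sketch of how each form is used in a parent template (the host elements here are illustrative):

<!-- selector: 'my-app' (element) -->
<my-app></my-app>

<!-- selector: '[my-app]' (attribute) -->
<div my-app></div>

<!-- selector: '.my-app' (class) -->
<div class="my-app"></div>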







References

https://www.pluralsight.com/guides/understanding-the-purpose-and-use-of-the-selector-in-angular

Angular - What are component spec files

The spec files are unit tests for your source files. The convention for Angular applications is to have a .spec.ts file for each .ts file. They are run using the Jasmine JavaScript test framework through the Karma test runner (https://karma-runner.github.io/) when you use the ng test command.
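A newly generated spec looks roughly like the following sketch (close to what the Angular CLI scaffolds for AppComponent; details vary by version):


import { TestBed } from '@angular/core/testing';
import { AppComponent } from './app.component';

describe('AppComponent', () => {
  beforeEach(async () => {
    await TestBed.configureTestingModule({
      declarations: [AppComponent]
    }).compileComponents();
  });

  it('should create the app', () => {
    const fixture = TestBed.createComponent(AppComponent);
    const app = fixture.componentInstance;
    expect(app).toBeTruthy();
  });
});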


Set up testing


The Angular CLI downloads and installs everything you need to test an Angular application with the Jasmine test framework.


The project you create with the CLI is immediately ready to test. Just run the ng test CLI command:

ng test


The ng test command builds the application in watch mode, and launches the Karma test runner.


The console output looks a bit like this:


10% building modules 1/1 modules 0 active

...INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9876/

...INFO [launcher]: Launching browser Chrome ...

...INFO [launcher]: Starting browser Chrome

...INFO [Chrome ...]: Connected on socket ...

Chrome ...: Executed 3 of 3 SUCCESS (0.135 secs / 0.205 secs)


The last line of the log is the most important. It shows that Karma ran three tests that all passed.


A Chrome browser also opens and displays the test output in the "Jasmine HTML Reporter".


References:

https://stackoverflow.com/questions/37502809/what-are-the-spec-ts-files-generated-by-angular-cli-for


Mongo Role Based access Control

MongoDB employs Role-Based Access Control (RBAC) to govern access to a MongoDB system. A user is granted one or more roles that determine the user's access to database resources and operations. Outside of role assignments, the user has no access to the system.

Enable Access Control

MongoDB does not enable access control by default. You can enable authorization using the --auth or the security.authorization setting. Enabling internal authentication also enables client authorization.

Once access control is enabled, users must authenticate themselves.

Roles

A role grants privileges to perform the specified actions on a resource. Each privilege is either specified explicitly in the role, inherited from another role, or both.


Privileges

A privilege consists of a specified resource and the actions permitted on the resource.

A resource is a database, collection, set of collections, or the cluster. If the resource is the cluster, the affiliated actions affect the state of the system rather than a specific database or collection. 

An action specifies the operation allowed on the resource.

Inherited Privileges

A role can include one or more existing roles in its definition, in which case the role inherits all the privileges of the included roles.

A role can inherit privileges from other roles in its database. A role created on the admin database can inherit privileges from roles in any database.

View Role's Privileges

You can view the privileges for a role by issuing the rolesInfo command with the showPrivileges and showBuiltinRoles fields both set to true.
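A quick sketch of that command, run against the database where the roles are defined:

// view every role on the current database, with full privilege details
db.runCommand({
  rolesInfo: 1,
  showPrivileges: true,
  showBuiltinRoles: true
})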

Users and Roles

You can assign roles to users during the user creation. You can also update existing users to grant or revoke roles.

A user assigned a role receives all the privileges of that role. A user can have multiple roles. By assigning to the user roles in various databases, a user created in one database can have permissions to act on other databases.

The first user created in the database should be a user administrator who has the privileges to manage other users.
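A minimal sketch of creating a user with a role and later granting an additional one (names, passwords, and databases are illustrative):

use admin
db.createUser({
  user: "reportUser",
  pwd: passwordPrompt(),   // or a plain-text password string
  roles: [ { role: "read", db: "reporting" } ]
})

// grant an additional role to the existing user
db.grantRolesToUser("reportUser", [ { role: "readWrite", db: "reporting" } ])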

Built-In Roles and User-Defined Roles

MongoDB provides built-in roles that provide a set of privileges commonly needed in a database system.

If these built-in-roles cannot provide the desired set of privileges, MongoDB provides methods to create and modify user-defined roles.

User-Defined Roles

To add a role, MongoDB provides the db.createRole() method. MongoDB also provides methods to update existing user-defined roles.

Scope

When adding a role, you create the role in a specific database. MongoDB uses the combination of the database and the role name to uniquely define a role.

Except for roles created in the admin database, a role can only include privileges that apply to its database and can only inherit from other roles in its database.

A role created in the admin database can include privileges that apply to the admin database, other databases or to the cluster resource, and can inherit from roles in other databases as well as the admin database.

Centralized Role Data

MongoDB stores all role information in the system.roles collection in the admin database

Do not access this collection directly but instead use the role management commands to view and edit custom roles.

Manage Users and Roles

If you have enabled access control for your deployment, you must authenticate as a user with the required privileges specified in each section. A user administrator with the userAdminAnyDatabase role, or the userAdmin role on the specific database, can perform most role management operations.

Create a User-Defined Role

Roles grant users access to MongoDB resources. MongoDB provides a number of built-in roles that administrators can use to control access to a MongoDB system. However, if these roles cannot describe the desired set of privileges, you can create new roles in a particular database.

Prerequisites

To create a role in a database, you must have:


the createRole action on that database resource.

the grantRole action on that database to specify privileges for the new role as well as to specify roles to inherit from.

Built-in roles userAdmin and userAdminAnyDatabase provide createRole and grantRole actions on their respective resources.


To create a role with authenticationRestrictions specified, you must have the setAuthenticationRestriction action on the database resource on which the role is created.



Create a Role to Manage Current Operations


The following example creates a role named manageOpRole, which provides only the privileges to run both db.currentOp() and db.killOp().


Step 1: Login 


mongosh --port 27017 -u myUserAdmin -p 'abc123' --authenticationDatabase 'admin'


The myUserAdmin user has privileges to create roles in the admin database as well as in other databases.


Step 2: Create a new role to manage current operations.


manageOpRole has privileges that act on multiple databases as well as the cluster resource. As such, you must create the role in the admin database.


use admin
db.createRole(
   {
     role: "manageOpRole",
     privileges: [
       { resource: { cluster: true }, actions: [ "killop", "inprog" ] },
       { resource: { db: "", collection: "" }, actions: [ "killCursors" ] }
     ],
     roles: []
   }
)



Create a Role to Drop the system.views Collection in Any Database


The following example creates a role named dropSystemViewsAnyDatabase that provides only the privileges to drop the system.views collection in any database.

Step 1: Connect to MongoDB with the appropriate privileges.


mongosh --port 27017 -u myUserAdmin -p 'abc123' --authenticationDatabase 'admin'


Step 2: Create a new role to drop the system.views collection in any database.



Create a new role to drop the system.views collection in any database.

For the role, specify a privilege that consists of:


an actions array that contains the dropCollection action, and

a resource document that specifies an empty string ("") for the database and the string "system.views" for the collection. See Specify Collections Across Databases as Resource for more information.


use admin
db.createRole(
   {
     role: "dropSystemViewsAnyDatabase",
     privileges: [
       {
         actions: [ "dropCollection" ],
         resource: { db: "", collection: "system.views" }
       }
     ],
     roles: []
   }
)





references:

https://docs.mongodb.com/manual/core/authorization/

Monday, August 9, 2021

What is MongoDB SCRAM

Salted Challenge Response Authentication Mechanism (SCRAM) is the default authentication mechanism for MongoDB. SCRAM is based on the IETF RFC 5802 standard that defines best practices for implementation of challenge-response mechanisms for authenticating users with passwords.

Using SCRAM, MongoDB verifies the supplied user credentials against the user's name, password and authentication database. The authentication database is the database where the user was created, and together with the user's name, serves to identify the user.

MongoDB's implementation of SCRAM provides:

A tunable work factor (i.e. the iteration count),

Per-user random salts, and

Authentication of the server to the client as well as the client to the server.

SCRAM Mechanisms

MongoDB supports the following SCRAM mechanisms:

SCRAM-SHA-1: Uses the SHA-1 hashing function. To modify the iteration count for SCRAM-SHA-1, see scramIterationCount.

SCRAM-SHA-256: Uses the SHA-256 hashing function and requires featureCompatibilityVersion (fcv) set to 4.0. To modify the iteration count for SCRAM-SHA-256, see scramSHA256IterationCount.

When creating or updating a SCRAM user, you can indicate the specific SCRAM mechanism as well as indicate whether the server or the client digests the password. When using SCRAM-SHA-256, MongoDB requires server-side password hashing, i.e. the server digests the password. For details, see db.createUser() and db.updateUser().
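A small sketch of pinning a user to a specific mechanism at creation time (user, password, and database names are illustrative):

use admin
db.createUser({
  user: "appUser",
  pwd: passwordPrompt(),
  roles: [ { role: "readWrite", db: "app" } ],
  mechanisms: [ "SCRAM-SHA-256" ]   // restrict this user to SCRAM-SHA-256
})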

Driver Support

To use SCRAM, you must upgrade your driver if your current driver version does not support SCRAM.

References:

https://docs.mongodb.com/manual/core/security-scram/#std-label-authentication-scram

MongoDB Authentication

Authentication is the process of verifying the identity of a client. When access control, i.e. authorization, is enabled, MongoDB requires all clients to authenticate themselves in order to determine their access.

Authentication Methods

To authenticate as a user, you must provide a username, password, and the authentication database associated with that user.


To authenticate using mongosh, either:


Use the mongosh command-line authentication options (--username, --password, and --authenticationDatabase) when connecting to the mongod or mongos instance, or

Connect first to the mongod or mongos instance, and then run the authenticate command or the db.auth() method against the authentication database.
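Both approaches look roughly like this (host, user, and database names are illustrative):

// 1) authenticate while connecting:
//    mongosh --port 27017 -u myUserAdmin -p --authenticationDatabase admin

// 2) connect first, then authenticate against the authentication database:
use admin
db.auth("myUserAdmin", passwordPrompt())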


Authentication Mechanisms


MongoDB supports a number of authentication mechanisms that clients can use to verify their identity. These mechanisms allow MongoDB to integrate into your existing authentication system.


MongoDB supports multiple authentication mechanisms:


SCRAM (Default)

x.509 Certificate Authentication.


In addition to supporting the aforementioned mechanisms, MongoDB Enterprise also supports the following mechanisms:


LDAP proxy authentication, and

Kerberos authentication.


Internal Authentication

In addition to verifying the identity of a client, MongoDB can require members of replica sets and sharded clusters to authenticate their membership to their respective replica set or sharded cluster. See Internal/Membership Authentication for more information.


Authentication on Sharded Clusters



References

https://docs.mongodb.com/manual/core/authentication/


MongoDB Security Checklist

Pre-production Checklist/Considerations

Enable Access Control and Enforce Authentication

Enable access control and specify the authentication mechanism. You can use MongoDB's SCRAM or x.509 authentication mechanism or integrate with your existing Kerberos/LDAP infrastructure. Authentication requires that all clients and servers provide valid credentials before they can connect to the system.

Configure Role-Based Access Control

Create a user administrator first, then create additional users. Create a unique MongoDB user for each person/application that accesses the system.

Follow the principle of least privilege. Create roles that define the exact access rights required by a set of users. Then create users and assign them only the roles they need to perform their operations. A user can be a person or a client application.

A user can have privileges across different databases. If a user requires privileges on multiple databases, create a single user with roles that grant applicable database privileges instead of creating the user multiple times in different databases.
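A minimal sketch of those first steps in the mongo shell (user names and roles are illustrative):

// create the user administrator first, in the admin database
use admin
db.createUser({
  user: "siteUserAdmin",
  pwd: passwordPrompt(),
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})

// then create one user per person/application, with only the roles it needs
db.createUser({
  user: "orderService",
  pwd: passwordPrompt(),
  roles: [ { role: "readWrite", db: "orders" } ]
})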

Encrypt Communication (TLS/SSL)

Configure MongoDB to use TLS/SSL for all incoming and outgoing connections. Use TLS/SSL to encrypt communication between mongod and mongos components of a MongoDB deployment as well as between all applications and MongoDB.

Starting in version 4.0, MongoDB uses the native TLS/SSL OS libraries.

Encrypt and Protect Data

Starting with MongoDB Enterprise 3.2, you can encrypt data in the storage layer with the WiredTiger storage engine's native Encryption at Rest.

If you are not using WiredTiger's encryption at rest, MongoDB data should be encrypted on each host using file-system, device, or physical encryption (e.g. dm-crypt). Protect MongoDB data using file-system permissions. MongoDB data includes data files, configuration files, auditing logs, and key files.


Collect logs to a central log store. These logs contain DB authentication attempts including source IP address.


Limit Network Exposure


Ensure that MongoDB runs in a trusted network environment and configure firewall or security groups to control inbound and outbound traffic for your MongoDB instances.


Disable direct SSH root access.


Allow only trusted clients to access the network interfaces and ports on which MongoDB instances are available.



Audit System Activity


Track access and changes to database configurations and data. MongoDB Enterprise includes a system auditing facility that can record system events (e.g. user operations, connection events) on a MongoDB instance. These audit records permit forensic analysis and allow administrators to verify proper controls. You can set up filters to record specific events, such as authentication events.


Run MongoDB with a Dedicated User

Run MongoDB processes with a dedicated operating system user account. Ensure that the account has permissions to access data but no unnecessary permissions.



Run MongoDB with Secure Configuration Options

MongoDB supports the execution of JavaScript code for certain server-side operations: mapReduce, $where, $accumulator, and $function. If you do not use these operations, disable server-side scripting by using the --noscripting option on the command line.


Keep input validation enabled. MongoDB enables input validation by default through the net.wireObjectCheck setting. This ensures that all documents stored by the mongod instance are valid BSON.


Request a Security Technical Implementation Guide (where applicable)

The Security Technical Implementation Guide (STIG) contains security guidelines for deployments within the United States Department of Defense. MongoDB Inc. provides its STIG, upon request, for situations where it is required. Please request a copy for more information.


Consider Security Standards Compliance


For applications requiring HIPAA or PCI-DSS compliance, please refer to the MongoDB Security Reference Architecture to learn more about how you can use the key security capabilities to build compliant application infrastructure.


Periodic/Ongoing Production Checks

Periodically check for MongoDB product CVEs and upgrade your products.

Consult the MongoDB end of life dates and upgrade your MongoDB installation. In general, try to stay on the latest version.

Ensure that your information security management system policies and procedures extend to your MongoDB installation, including performing the following:

Periodically apply patches to your machine and review guidelines.

Review policy/procedure changes, especially changes to your network rules to prevent inadvertent MongoDB exposure to the Internet.

Review MongoDB database users and periodically rotate them.


References:

https://docs.mongodb.com/manual/administration/security-checklist/

Installing Mongo on Linux2

To verify which Linux distribution you are running, run the following command on the command line:


grep ^NAME  /etc/*release


This returned the following:


/etc/os-release:NAME="Amazon Linux"



The following settings had to be added to /etc/yum.repos.d/mongodb-org-5.0.repo:


[mongodb-org-5.0]

name=MongoDB Repository

baseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/5.0/x86_64/

gpgcheck=1

enabled=1

gpgkey=https://www.mongodb.org/static/pgp/server-5.0.asc



And then running sudo yum install -y mongodb-org installed the MongoDB packages.


By default, a MongoDB instance stores:

its data files in /var/lib/mongo

its log files in /var/log/mongodb


To run and manage your mongod process, you will be using your operating system's built-in init system. Recent versions of Linux tend to use systemd (which uses the systemctl command), while older versions of Linux tend to use System V init (which uses the service command).


ps --no-headers -o comm 1


The command above shows which init system your platform uses. To start mongod:


sudo systemctl start mongod


This shows whether it is running 


sudo systemctl status mongod



To enable mongod at system boot, run the following:


sudo systemctl enable mongod


To stop mongo


sudo systemctl stop mongod


To restart mongo


sudo systemctl restart mongod



References:

https://docs.mongodb.com/manual/tutorial/install-mongodb-on-amazon/

Setting up nginx on AWS Ec2 Linux 2 AMI

Trying to run sudo yum install nginx 


Gave the below error 


sudo yum install nginx

Failed to set locale, defaulting to C

Loaded plugins: extras_suggestions, langpacks, priorities, update-motd

amzn2-core                                                                                                 | 3.7 kB  00:00:00     

No package nginx available.

Error: Nothing to do



nginx is available in Amazon Linux Extra topic "nginx1"


To use, run

# sudo amazon-linux-extras install nginx1


The /etc/nginx/ folder showed the installation. However, running nginx was giving a "not found" error.



Learn more at

https://aws.amazon.com/amazon-linux-2/faqs/#Amazon_Linux_Extras



sudo amazon-linux-extras install nginx1



I'd personally use Amazon's own repo.


The version provided by the Amazon repo is relatively old (1.12.2 at the time of writing). To see what versions the Amazon repo has access to, run:


amazon-linux-extras list | grep nginx



Alternative way to install that could be easier (has a fairly recent version of Nginx):


Following the steps below got nginx working well:


 sudo amazon-linux-extras list | grep nginx

 38  nginx1=latest            disabled      [ =stable ]


$ sudo amazon-linux-extras enable nginx1

 38  nginx1=latest            enabled      [ =stable ]

        

Now you can install:

$ sudo yum clean metadata

$ sudo yum -y install nginx

    

$ nginx -v

nginx version: nginx/1.16.1



References:

https://stackoverflow.com/questions/57784287/how-to-install-nginx-on-aws-ec2-linux-2

Additional Security Options with EC2

Security is like an onion - its all about layers, stinky ogre-like layers.

By allowing SSH connections from everywhere you've removed one layer of protection and are now depending solely on the SSH key, which is thought to be secure at this time, but in the future a flaw could be discovered reducing or removing that layer.

And when there are no more layers, you have nothing left.

A quick layer is to install fail2ban or similar. These daemons monitor your auth.log file and, as SSH connections fail, their IPs are added to an iptables chain for a while. This reduces the number of times a client can attempt connections every hour/day. I end up blacklisting bad sources indefinitely - but hosts that have to hang SSH out listening promiscuously might still get 3000 failed root login attempts a day. Most are from China, with Eastern Europe and Russia close behind.

If you have static source IPs then including them in your Security Group policy is good, and this means the rest of the world can't connect. Downside, what if you can't come from an authorised IP for some reason, like your ISP is dynamic or your link is down?

A reasonable solution is to run a VPN server on your instance, listening to all source IPs, and then once the tunnel is up, connect over the tunnel via SSH. Sure, it's not perfect protection, but it's one more layer in your shield of ablative armour... OpenVPN is a good candidate.

You can also leverage AWS's "Client VPN" solution, which is a managed OpenVPN providing access to your VPC. No personal experience of this sorry.

Other (admittedly thin) layers are to move SSH to a different port. This doesn't really do much other than reducing the script-kiddy probes that default to port 22/tcp. Anyone trying hard will scan all ports and find your SSH server on 2222/tcp or 31337/tcp or whatever.

If possible, you can investigate IPv6 ssh only, again it merely limits the exposure without adding any real security. The number of unsolicited SSH connections on IPv6 is currently way lower than IPv4, but still non-zero.

References:

https://serverfault.com/questions/1029247/is-it-safe-to-allow-inbound-0-0-0-0-0-on-ec2-security-group


HTTPS Strict Transport Security

Strict Transport Security (STS) is an opt-in security enhancement that forces usage of HTTPS instead of HTTP (in modern browsers, at least).






lusca is open-source under the Apache license

npm install lusca --save


Then in the middleware config object in config/http.js:


// ...

  // maxAge ==> Number of seconds strict transport security will stay in effect.

  strictTransportSecurity: require('lusca').hsts({ maxAge: 31536000 })

  // ...




References:

https://sailsjs.com/documentation/concepts/security/strict-transport-security

XSS protection in Sails

Cross-site scripting (XSS) is a type of attack in which a malicious agent manages to inject client-side JavaScript into your website, so that it runs in the trusted environment of your users' browsers.


Protecting against XSS attacks

The cleanest way to prevent XSS attacks is to escape untrusted data at the point of injection. That means at the point where it's actually being injected into the HTML.



When exposing view locals to client-side JavaScript...


Use the exposeLocalsToBrowser partial to safely expose some or all of your view locals to client-side JavaScript:
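In an EJS view this looks roughly like the following sketch (the exposed locals end up on window.SAILS_LOCALS, as used further below; the view path and local names are illustrative):

<!-- e.g. in views/pages/dashboard.ejs -->
<%- exposeLocalsToBrowser() %>
<script>
  // exposed locals are now available to client-side code, e.g.:
  console.log(window.SAILS_LOCALS.me.username);
</script>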




On the client


A lot of XSS prevention is about what you do in your client-side code. Here are a few examples:


When injecting data into a client-side JST template...



Use <%- %> to HTML-encode data:

<div data-template-id="welcome-box">

  <h3 is="welcome-msg">Hello <%- me.username %>!</h3>

</div>


When modifying the DOM with client-side JavaScript...


Use something like $(...).text() to HTML-encode data:


var $welcomeMsg = $('#signup').find('[is="welcome-msg"]');

$welcomeMsg.text('Hello, '+window.SAILS_LOCALS.me.username+'!');


// Avoid using `$(...).html()` to inject untrusted data.

// Even if you know an XSS is not possible under particular circumstances,

// accidental escaping issues can cause really, really annoying client-side bugs.




references:

https://sailsjs.com/documentation/concepts/security/xss

Sails Socket hijacking

Unfortunately, cross-site request forgery attacks are not limited to the HTTP protocol. WebSocket hijacking (sometimes known as CSWSH) is a commonly overlooked vulnerability in most realtime applications. Fortunately, since Sails treats both HTTP and WebSocket requests as first-class citizens, its built-in CSRF protection and configurable CORS rulesets apply to both protocols.



You can prepare your Sails app against CSWSH attacks by enabling the built-in protection in config/security.js and ensuring that a _csrf token is sent with all relevant incoming socket requests. Additionally, if you're planning on allowing sockets to connect to your Sails app cross-origin (i.e. from a different domain, subdomain, or port) you'll want to configure your CORS settings accordingly. You can also define the authorization setting in config/sockets.js as a custom function which allows or denies the initial socket connection based on your needs.
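A minimal sketch of the relevant settings (Sails v1 file layout assumed; the CORS block is only needed for cross-origin sockets, and the origin shown is illustrative):

// config/security.js
module.exports.security = {

  // require a valid _csrf token on relevant incoming requests, including socket requests
  csrf: true,

  // only needed if sockets connect from another origin
  cors: {
    allowOrigins: ['https://example.com']
  }

};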




CSWSH prevention is only a concern in scenarios where people use the same client application to connect sockets to multiple web services (e.g. cookies in a browser like Google Chrome can be used to connect a socket to Chase.com from both Chase.com and Horrible-Hacker-Site.com.) 


references:

https://sailsjs.com/documentation/concepts/security/socket-hijacking

Sails JS Content - security Policy

Content Security Policy (CSP) is a W3C specification for instructing the client browser as to which location and/or which type of resources are allowed to be loaded. This spec uses "directives" to define loading behaviors for target resource types. Directives can be specified using HTTP response headers or HTML <meta> tags.



Using lusca


lusca is open-source under the Apache license


# In your sails app

npm install lusca --save --save-exact



Then add csp in config/http.js:


// ...


  csp: require('lusca').csp({

    policy: {

      'default-src': '*'

    }

  }),


  // ...


  order: [

    // ...

    'csp',

    // ...

  ]


Supported directives


default-src Loading policy for all resources type in case a resource type dedicated directive is not defined (fallback)


script-src Defines which scripts the protected resource can execute

object-src Defines from where the protected resource can load plugins

style-src Defines which styles (CSS) the user applies to the protected resource

img-src Defines from where the protected resource can load images

media-src Defines from where the protected resource can load video and audio

frame-src Defines from where the protected resource can embed frames

font-src Defines from where the protected resource can load fonts

connect-src Defines which URIs the protected resource can load using script interfaces

form-action Defines which URIs can be used as the action of HTML form elements

sandbox Specifies an HTML sandbox policy that the user agent applies to the protected resource

script-nonce Defines script execution by requiring the presence of the specified nonce on script elements

plugin-types Defines the set of plugins that can be invoked by the protected resource by limiting the types of resources that can be embedded

reflected-xss Instructs a user agent to activate or deactivate any heuristics used to filter or block reflected cross-site scripting attacks, equivalent to the effects of the non-standard X-XSS-Protection header

report-uri Specifies a URI to which the user agent sends reports about policy violation


Browser compatibility


Different CSP response headers are supported by different browsers. For example, Content-Security-Policy is the W3C standard, but various versions of Chrome, Firefox, and IE use X-Content-Security-Policy or X-WebKit-CSP. For the latest information on browser support, see OWasp.




References:

https://sailsjs.com/documentation/concepts/security/content-security-policy

P3P on Sails

P3P stands for the "Platform for Privacy Preferences" and is a browser/web standard designed to facilitate better consumer web privacy control. Currently (as of 2014), out of all the major browsers, only Internet Explorer supports it. P3P most often comes into play when dealing with legacy applications.


Many modern organizations are willfully ignoring P3P. Here's what Facebook has to say on the subject:


The organization that established P3P, the World Wide Web Consortium, suspended its work on this standard several years ago because most modern web browsers don't fully support P3P. As a result, the P3P standard is now out of date and doesn't reflect technologies that are currently in use on the web, so most websites currently don't have P3P policies.


Supporting P3P with Sails


All of that aside, sometimes you have to support P3P anyways.


Fortunately, a few different modules exist that bring P3P support to Express and Sails by enabling the relevant P3P headers. To use one of these modules for handling P3P headers, install it from npm using the directions below, then open config/http.js in your project and configure it as a custom middleware. To do that, define your P3P middleware as "p3p", and add the string "p3p" to your middleware.order array wherever you'd like it to run in the middleware chain (a good place to put it might be right before cookieParser):


// .....
module.exports.http = {

  middleware: {

    p3p: require('p3p')(require('p3p').recommended), // <==== set up the custom middleware here and name it "p3p"

    order: [
      'startRequestTimer',
      'p3p', // <============ configure the order of our "p3p" custom middleware here
      'cookieParser',
      'session',
      'bodyParser',
      'handleBodyParserError',
      'compress',
      'methodOverride',
      'poweredBy',
      '$custom',
      'router',
      'www',
      'favicon',
      '404',
      '500'
    ],
    // .....
  }
};


Using node-p3p


node-p3p is open-source under the MIT license.


npm install p3p --save


Then in the middleware config object in config/http.js:


// ...

  // node-p3p provides a recommended compact privacy policy out of the box

  p3p: require('p3p')(require('p3p').recommended)

  // ...


Using lusca

lusca is open-source under the Apache license



# In your sails app

npm install lusca --save


// ...

  // "ABCDEF" ==> The compact privacy policy to use.

  p3p: require('lusca').p3p('ABCDEF')

  // ...




References:

https://sailsjs.com/documentation/concepts/security/p-3-p