Saturday, December 29, 2018

Android : launch app from browser automatically if installed


<script type="text/javascript">
    // Schedule a fallback redirect to the store, then immediately try the app's
    // custom URL scheme; if the app opens, the page is left before the fallback fires.
    setTimeout(function () { window.location = "https://itunes.apple.com/appdir"; }, 25);
    window.location = "myapp://";
    // For iOS:     https://itunes.apple.com/appdir/ ...
    // For Android: https://play.google.com/store/apps/ ...
</script>

references:
http://findnerd.com/list/view/How-to-Detect-if-an-app-is-installed-in-iOS-or-Android-Using-Javascript-and-Deep-Linking/4287/

Android : Deep linking

If the Android app is to be launched from a URL link, the activity should first be declared with an intent filter:

<activity
    android:name="com.example.android.GizmosActivity"
    android:label="@string/title_gizmos" >

    <!-- Accepts URIs that begin with "http://www.example.com/gizmos" -->
    <intent-filter android:label="@string/filter_view_http_gizmos">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="http"
              android:host="www.example.com"
              android:pathPrefix="/gizmos" />
    </intent-filter>

    <!-- Accepts URIs that begin with "example://gizmos" -->
    <intent-filter android:label="@string/filter_view_example_gizmos">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="example"
              android:host="gizmos" />
    </intent-filter>

</activity>




Now, in the app, just handle it:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);

    Intent intent = getIntent();
    String action = intent.getAction();
    Uri data = intent.getData();
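    // e.g. for the deep link "example://gizmos", data.getScheme() returns "example"
    // and data.getHost() returns "gizmos"; route to the right screen based on these.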
}

In order to quickly test it, use the adb command below:

$ adb shell am start
        -W -a android.intent.action.VIEW
        -d "example://gizmos" com.example.android


references:
https://developer.android.com/training/app-links/deep-linking

Android : Provide an Easy share option

All we have to do is just the below:

public static void shareOnSocialMedia(Activity parentActivity, String subject, String body){
        Intent myIntent = new Intent(Intent.ACTION_SEND);
        myIntent.setType("text/plain");
        String shareBody = body;
        String shareSub = subject;
        myIntent.putExtra(Intent.EXTRA_SUBJECT, shareSub);
        myIntent.putExtra(Intent.EXTRA_TEXT, shareBody);
        parentActivity.startActivity(Intent.createChooser(myIntent, "Share using"));
    }

references
https://developer.android.com/training/sharing/shareaction

Thursday, December 27, 2018

iOS Swift file upload

open func uploadFile(fileName: String, contentType: String, fileData: Data?, parameters: [String: String], headerParams: [String: String], completionHandler: @escaping ([String: Any]) -> Void) {
       
        var request  = URLRequest(url: URL(string: A2Config.getBaseURL() + "/uploadFile")!)
        request.httpMethod = "POST"
        let boundary = "Boundary-\(UUID().uuidString)"
        request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")
        for (key,valueForKey) in  headerParams
        {
            request.setValue(valueForKey, forHTTPHeaderField: key)
        }
       
        let body = NSMutableData()
        let boundaryPrefix = "--\(boundary)\r\n"
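        // Note: appendString(_:) used below is assumed to be a small NSMutableData
        // extension that appends the string's UTF-8 bytes (it is not part of Foundation).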
       
        for (key, value) in parameters {
            body.appendString(boundaryPrefix)
            body.appendString("Content-Disposition: form-data; name=\"\(key)\"\r\n\r\n")
            body.appendString("\(value)\r\n")
        }
       
        body.appendString(boundaryPrefix)
        body.appendString("Content-Disposition: form-data; name=\"uploadFile\"; filename=\"\(fileName)\"\r\n")
        body.appendString("Content-Type: \(contentType)\r\n\r\n")
        body.append(fileData!)
        body.appendString("\r\n")
        body.appendString("--".appending(boundary.appending("--")))
        request.httpBody = body as Data
        let task = URLSession.shared.dataTask(with: request) { data, response, error in
            guard let data = data, error == nil else {                                                 // check for fundamental networking error
                print("error=\(String(describing: error))")
                return
            }
           
            if let httpStatus = response as? HTTPURLResponse, httpStatus.statusCode != 200 {           // check for http errors
                print("statusCode should be 200, but is \(httpStatus.statusCode)")
                print("response = \(String(describing: response))")
                completionHandler(["status" : "error","errMsg":"Error completing request, please re-login status :\(httpStatus)"])
                return
            }
           
            let responseString = String(data: data, encoding: .utf8)
            print("responseString = \(String(describing: responseString))")
            do
            {
                let jsonData = try JSONSerialization.jsonObject(with: data, options: .allowFragments) as? [String: Any]
                guard jsonData != nil else {
                    completionHandler(["status" : "error"])
                    return
                }
                completionHandler(["status" : "success","data":jsonData!])
            }
            catch let err {
                print("met with an error \(err)")
                completionHandler(["status" : "error"])
            }
           
        }
        task.resume()
       
    }

Android: Place Picker to pick address from map

The package needed is:
implementation 'com.google.android.gms:play-services-places:16.0.0'

As given in the references section, if the intent is just to show the Android UI to pick the address, then an API key is probably not required.

Just have the below code

int PLACE_PICKER_REQUEST = 1;
PlacePicker.IntentBuilder builder = new PlacePicker.IntentBuilder();
startActivityForResult(builder.build(this), PLACE_PICKER_REQUEST);



protected void onActivityResult(int requestCode, int resultCode, Intent data) {
  if (requestCode == PLACE_PICKER_REQUEST) {
    if (resultCode == RESULT_OK) {
        Place place = PlacePicker.getPlace(data, this);
        String toastMsg = String.format("Place: %s", place.getName());
        Toast.makeText(this, toastMsg, Toast.LENGTH_LONG).show();
    }
  }
}



references:
https://developers.google.com/places/android-sdk/placepicker

Firebase SDK update from beta to 1.0 or 2.0

Version 1.0.0 of the Firebase SDK for Cloud Functions introduces some important changes in the API. The primary change, a replacement of event.data format with data and context parameters, affects all asynchronous (non-HTTP) functions. The updated SDK can also be used with firebase-functions-test, a brand new unit testing companion SDK. See Unit Testing Functions for more information.

exports.dbWrite = functions.database.ref('/path/with/{id}').onWrite((data, context) => {
  const authVar = context.auth; // Auth information for the user.
  const authType = context.authType; // Permissions level for the user.
  const pathId = context.params.id; // The ID in the Path.
  const eventId = context.eventId; // A unique event ID.
  const timestamp = context.timestamp; // The timestamp at which the event happened.
  const eventType = context.eventType; // The type of the event that triggered this function.
  const resource = context.resource; // The resource which triggered the event.
  // ...
});

Before (<= v0.9.1)

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

Now (>= v1.0.0)

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

functions.config().firebase removed

functions.config().firebase has been removed. If you'd like to access the config values from your Firebase project, use process.env.FIREBASE_CONFIG instead:

let firebaseConfig = JSON.parse(process.env.FIREBASE_CONFIG);
/* {  databaseURL: 'https://databaseName.firebaseio.com',
       storageBucket: 'projectId.appspot.com',
       projectId: 'projectId' }
*/




references:
https://firebase.google.com/docs/functions/beta-v1-diff#realtime-database

Wednesday, December 26, 2018

NodeJS : How to read file content line by line without loading entire content into memory

The link in the references section gives an idea of how to do this. Basically, it utilises the FileReader
api and reads a 256-character buffer at a time.
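
As a quick alternative sketch (not taken from the gist), Node's built-in readline module can also stream a file line by line without loading the whole content into memory; the file name below is just a placeholder:

const fs = require('fs');
const readline = require('readline');

const rl = readline.createInterface({
  input: fs.createReadStream('large-file.txt'),   // placeholder file name
  crlfDelay: Infinity                             // treat \r\n as a single line break
});

rl.on('line', (line) => {
  console.log('Line: ' + line);                   // process each line here
});

rl.on('close', () => {
  console.log('Finished reading the file.');
});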

references:
https://gist.github.com/peteroupc/b79a42fffe07c2a87c28

Javascript : Replace occurrences of string

The important thing to note is that the replace function takes a regex argument.

var re = /apples/gi;
var str = 'Apples are round, and apples are juicy.';
var newstr = str.replace(re, 'oranges');
console.log(newstr);  // oranges are round, and oranges are juicy.
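
If the word to replace comes from a variable, the pattern can be built with the RegExp constructor instead (a small sketch; any regex special characters in the variable would need escaping first):

var word = 'apples';
var re2 = new RegExp(word, 'gi');
console.log(str.replace(re2, 'oranges'));  // oranges are round, and oranges are juicy.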


references:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace

Firebase : is class supported in cloud functions?

Based on my tests, both of the below work awesomely with cloud functions

class Hero {
    constructor(name, level) {
        this.name = name;
        this.level = level;
    }

    // Adding a method to the constructor
    greet() {
        return `${this.name} says hello.`;
    }
}

and

function Hero(name, level) {
    this.name = name;
    this.level = level;
}

// Adding a method to the constructor
Hero.prototype.greet = function() {
    return `${this.name} says hello.`;
}
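
For instance, either form can be used inside a hypothetical HTTPS function (this assumes firebase-functions is already required as `functions` in index.js):

// Illustrative only: respond to a request using the Hero class/constructor above.
exports.greet = functions.https.onRequest((req, res) => {
  const hero = new Hero('Varys', 1);       // illustrative values
  res.status(200).send(hero.greet());      // responds with "Varys says hello."
});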


references:
https://www.digitalocean.com/community/tutorials/understanding-classes-in-javascript#classes-are-functions

Firebase : File upload using formidable

Below is the code for uploading using formidable. However, this did not really work for me; I probably need to dig
more into this. (The busboy approach in the next post, which reads req.rawBody, is the route documented for Cloud Functions.)

exports.uploadFile = functions.https.onRequest((req, res) => {
   var form = new formidable.IncomingForm();
   return new Promise((resolve, reject) => {
     form.parse(req, function(err, fields, files) {
       var file = files.fileToUpload;
       if(!file){
         reject("no file to upload, please choose a file.");
         return;
       }
       console.info("about to upload file as a json: " + file.type);
       var filePath = file.path;
       console.log('File path: ' + filePath);

       var bucket = gcs.bucket('bucket-name');
       return bucket.upload(filePath, {
           destination: file.name
       }).then(() => {
         resolve();  // Whole thing completed successfully.
       }).catch((err) => {
         reject('Failed to upload: ' + JSON.stringify(err));
       });
     });
   }).then(() => {
     res.status(200).send('Yay!');
     return null
   }).catch(err => {
     console.error('Error while parsing form: ' + err);
     res.status(500).send('Error while parsing form: ' + err);
   });
 });

references:
https://stackoverflow.com/questions/45098305/firebase-cloud-function-for-file-upload

Firebase : Cloud functions write to local storage from multi part

busboy is the npm package that can be used for this purpose. Below is the code snippet for it.

exports.uploadFile = functions.https.onRequest((req, res) => {
  if (req.method === 'POST') {
    const busboy = new Busboy({headers: req.headers});
    const tmpdir = os.tmpdir();

    // This object will accumulate all the fields, keyed by their name
    const fields = {};

    // This object will accumulate all the uploaded files, keyed by their name.
    const uploads = {};

    // This code will process each non-file field in the form.
    busboy.on('field', (fieldname, val) => {
      // TODO(developer): Process submitted field values here
      console.log(`Processed field ${fieldname}: ${val}.`);
      fields[fieldname] = val;
    });

    let fileWrites = [];

    // This code will process each file uploaded.
    busboy.on('file', (fieldname, file, filename) => {
      // Note: os.tmpdir() points to an in-memory file system on GCF
      // Thus, any files in it must fit in the instance's memory.
      console.log(`Processed file ${filename}`);
      const filepath = path.join(tmpdir, filename);
      uploads[fieldname] = filepath;
      console.log(`Processed full file path ${filepath}`);

      const writeStream = fs.createWriteStream(filepath);
      file.pipe(writeStream);

      // File was processed by Busboy; wait for it to be written to disk.
      const promise = new Promise((resolve, reject) => {
        file.on('end', () => {
          writeStream.end();
        });
        writeStream.on('finish', resolve);
        writeStream.on('error', reject);
      });
      fileWrites.push(promise);
    });

    // Triggered once all uploaded files are processed by Busboy.
    // We still need to wait for the disk writes (saves) to complete.
    busboy.on('finish', () => {
      Promise.all(fileWrites).then(() => {
        // TODO(developer): Process saved files here
        for (const name in uploads) {
          const file = uploads[name];
          fs.unlinkSync(file);
        }
        res.send();
      });
    });

    busboy.end(req.rawBody);
  } else {
    // Return a "method not allowed" error
    res.status(405).end();
  }
});

reference:
https://cloud.google.com/functions/docs/writing/http#multipart_data_and_file_uploads

Firebase : Cloud functions with https trigger

Just running the quick start example seemed pretty easy. I had to do the below in sequence:

1. firebase login
2. firebase use --add from the app directory
3. npm install -g firebase-tools (I already had this installed, so skipped it)
4. cd functions && npm install; cd ..
5. firebase deploy

Once done, below was the output, and hitting the URL from a browser showed the date.

mylap-MacBook-Pro:time-server mylap$ firebase deploy
⚠  functions: package.json indicates an outdated version of firebase-functions.
 Please upgrade using npm install --save firebase-functions@latest in your functions directory.

=== Deploying to 'mylap-8cb35'...

i  deploying functions
i  functions: ensuring necessary APIs are enabled...
✔  functions: all necessary APIs are enabled
i  functions: preparing functions directory for uploading...
i  functions: packaged functions (52.42 KB) for uploading
✔  functions: functions folder uploaded successfully
i  functions: creating Node.js 6 function date(us-central1)...
✔  functions[date(us-central1)]: Successful create operation.
Function URL (date): https://us-central1-mylap-8cb35.cloudfunctions.net/date

✔  Deploy complete!

Project Console: https://console.firebase.google.com/project/mylap-8cb35/overview
mylap-MacBook-Pro:time-server mylap$
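
For context, the quickstart's HTTPS-triggered function is essentially of the following shape (a minimal sketch, not the exact sample code):

const functions = require('firebase-functions');

// Respond to an HTTPS request with the current server date.
exports.date = functions.https.onRequest((req, res) => {
  res.status(200).send(new Date().toString());
});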


references:
https://github.com/firebase/functions-samples/tree/master/quickstarts/time-server


Friday, December 21, 2018

Javascript : Setup Firebase cloud messaging app

The FCM JavaScript API lets you receive notification messages in web apps running in browsers that support the Push API.
You can send messages to JavaScript clients using the HTTP and XMPP app server protocols, as described in Send Messages: https://firebase.google.com/docs/cloud-messaging/send-message

An important note: The FCM SDK is supported only in pages served over HTTPS. This is due to its use of service workers, which are available only on HTTPS sites. Need a provider? Firebase Hosting is an easy way to get free HTTPS hosting on your own domain.

There are three main steps in enabling JavaScript to receive push notifications:

1. Add firebase to the app
2. configure web credentials with FCM
3. add logic to access registration tokens

Adding the FCM to the firebase app is pretty straightforward.

The FCM Web interface uses Web credentials called "Voluntary Application Server Identification," or "VAPID" keys, to authorize send requests to supported web push services. To subscribe your app to push notifications, you need to associate a pair of keys with your Firebase project. You can either generate a new key pair or import your existing key pair through the Firebase Console.


Below are the steps to generate keypair

1. Open the Cloud Messaging tab of the Firebase console Settings pane and scroll to the Web configuration section.
2. In the Web Push certificates tab, click Generate Key Pair. The console displays a notice that the key pair was generated, and displays the public key string and date added.

The next step is to configure the browser to receive push messages.

We need to add a web app manifest that specifies gcm_sender_id. This is a hardcoded value indicating that FCM is authorised to send messages to this app.
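
The manifest only needs this one entry; the value below is the fixed, browser-facing sender ID documented for FCM (it is the same for every project, and is not your project's own sender ID):

{
  "gcm_sender_id": "103953800507"
}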


The sample project given by Google for messaging is good enough to understand this:

1. First we need to request permission.
2. Get the Instance ID token (IID) and send it to the server.
3. Now push a message from the server.

function requestPermission() {
    console.log('Requesting permission...');
    // [START request_permission]
    messaging.requestPermission().then(function() {
      console.log('Notification permission granted.');
      // TODO(developer): Retrieve an Instance ID token for use with FCM.
      // [START_EXCLUDE]
      // In many cases once an app has been granted notification permission, it
      // should update its UI reflecting this.
      resetUI();
      // [END_EXCLUDE]
    }).catch(function(err) {
      console.log('Unable to get permission to notify.', err);
    });
    // [END request_permission]
  }


messaging.getToken().then(function(currentToken) {
      if (currentToken) {
        sendTokenToServer(currentToken);
        updateUIForPushEnabled(currentToken);
      } else {
        // Show permission request.
        console.log('No Instance ID token available. Request permission to generate one.');
        // Show permission UI.
        updateUIForPushPermissionRequired();
        setTokenSentToServer(false);
      }
    }).catch(function(err) {
      console.log('An error occurred while retrieving token. ', err);
      showToken('Error retrieving Instance ID token. ', err);
      setTokenSentToServer(false);
    });

function deleteToken() {
    // Delete Instance ID token.
    // [START delete_token]
    messaging.getToken().then(function(currentToken) {
      messaging.deleteToken(currentToken).then(function() {
        console.log('Token deleted.');
        setTokenSentToServer(false);
        // [START_EXCLUDE]
        // Once token is deleted update UI.
        resetUI();
        // [END_EXCLUDE]
      }).catch(function(err) {
        console.log('Unable to delete token. ', err);
      });
      // [END delete_token]
    }).catch(function(err) {
      console.log('Error retrieving Instance ID token. ', err);
      showToken('Error retrieving Instance ID token. ', err);
    });

  }
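
To observe an incoming message while the page is in focus, the SDK also exposes an onMessage callback (not shown in the quickstart snippets above; messages received while the page is in the background are handled by the service worker instead):

messaging.onMessage(function(payload) {
  console.log('Message received while the page is in focus: ', payload);
});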

Now, to send the message, below is what needs to be done:

curl -X POST -H "Authorization: key=3MGdZowXUqtwFW58QeINgjvMIIWKcpu8mXJbFgudZe-FOLVRY3jaeQ8yn8B_uYfC_ClQA1Tv5Oas7QIxC8PISJbB2R79DwRpoDXKbRkbGh0CJN7Ydau-jLtkQG-7PR1w" -H "Content-Type: application/json" -d '{
  "notification": {
    "title": "Portugal vs. Denmark",
    "body": "5 to 1",
    "icon": "firebase-logo.png",
    "click_action": "http://localhost:8081"
  },
  "to": "-7Hii2UuMYiovu2MKEsiFCcqaC2gXIKlPVN1zrmMR7yca153Qw5tgoR1_gnrmMH-fucXLX9a6X6WEnh8fBaRzN7MHnwqzfDmShB_NgzQBu2sEpIPuVrL4MrcDZCf"
}' "https://fcm.googleapis.com/fcm/send"

references:
https://github.com/firebase/quickstart-js/blob/405fdde76b292f361e568e553de5d2b696c62b50/messaging/README.md

Web : Does cloud messaging work in incognito mode?

I was trying to run the example of push message from the incognito mode of Chrome. But it was always giving the below error message

code: "messaging/permission-blocked"
message: "Messaging: The required permissions were not granted and blocked instead. (messaging/permission-blocked)."
stack: "FirebaseError: Messaging: The required permissions were not granted and blocked instead. (messaging/permission-blocked).
    at d (https://www.gstatic.com/firebasejs/3.6.2/firebase-messaging.js:30:200)"

Apparently, this is because notifications are blocked in incognito mode.

references:
https://documentation.onesignal.com/docs/web-push-setup

NodeJS : What is Async Waterfall?


The async library for Node and client side is a pretty nifty library which takes out some of the pain of doing things asynchronously. This example is for the server side using Node.

According to the repo, this is what the waterfall does - "Runs an array of functions in series, each passing their results to the next in the array. However, if any of the functions pass an error to the callback, the next function is not executed and the main callback is immediately called with the error."

var create = function (req, res) {
    async.waterfall([
        _function1(req),
        _function2,
        _function3
    ], function (error, success) {
        if (error) { alert('Something is wrong!'); }
        return alert('Done!');
    });
};
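
For this to work, _function1(req) is called immediately and must itself return a task function, while the later tasks receive the previous results followed by a callback. A hedged sketch of the shapes these hypothetical functions would take:

var _function1 = function (req) {
    return function (callback) {
        callback(null, req.body);             // pass the first result down the chain
    };
};

var _function2 = function (resultFrom1, callback) {
    callback(null, resultFrom1, 'extra');     // results flow on to _function3
};

var _function3 = function (resultFrom1, extra, callback) {
    callback(null, 'done');                   // the final result reaches the main callback
};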

references:
https://coderwall.com/p/zpjrra/async-waterfall-in-nodejs

Android: Newer versions of Android Studio and only two drawable directory - drawable and drawable-v21

For starters, it looks like this started in Android Studio 1.4. I am in 1.5 right now. It seems that Android is moving in the direction of no longer needing you to create your own density folders (i.e. mdpi, hdpi, etc.) for drawables (mipmaps is different, so please don't confuse that with what I am talking about). As of Android Studio 1.4, it will take the SVGs you put in the regular drawable folder (as in not the v21 folder), convert them to PNGs, and place them in auto-generated density folders for you during the build sequence (so Gradle does this for you, essentially) for all versions older than API 21. For 21 and up, SVG is supported differently, which is a whole other topic. But this essentially makes SVG support backwards compatible all the way to API 1!!!

HOWEVER, there is a BIG catch. This SVG conversion is not always as successful as you might hope. It only supports a subset of SVG files, so depending on how you save it (i.e. what settings are applied when saving), it may not render properly. Even commonly used settings, such as gradient and pattern fills, local IRI references, and transformations are NOT supported (yet). If you are working with SVG files that you didn't generate, you will likely have problems importing them. If you or someone you work with directly generates them, you may have to experiment with how you save the files, and you should test builds often on older versions of Android to make sure it turned out as expected.

To import SVGs into Android Studio 1.4+, follow these simple steps:

Right-click on the res/drawable folder
Select "New"
Select "Vector Asset"
At this point, you can select a "Material Icon", which works really well, and there are a bunch of beautiful "free" icons you can select from. For indie developers, without icon design support, this is nice!
OR - you can select "Local SVG File"
Then choose an SVG from either option with the "choose" option. WARNING: This is where it could possibly go wrong, if the SVG you import isn't saved properly.
Hit "Next"
Verify it is saving in the right place, and then Click "Finish"
At this point, it is reference-able with: android:icon="@drawable/ic_imagename" (using your image name instead of ic_imagename, of course)

references:
https://stackoverflow.com/questions/34343611/newer-versions-of-android-studio-and-only-two-drawable-directory-drawable-and

Thursday, December 20, 2018

What is a Web app manifest?

The web app manifest is a simple JSON file that tells the browser about your web application and how it should behave when 'installed' on the user's mobile device or desktop. Having a manifest is required by Chrome to show the Add to Home Screen prompt.

A typical manifest file includes information about the app name, icons it should use, the start_url it should start at when launched, and more.

To tell the browser about the manifest, we need to add a link tag, for example <link rel="manifest" href="/manifest.json">, to all the pages that encompass the web app.

Below are the important manifest properties

short_name and/or name => the name the device uses on the home screen

icons

When a user adds your site to their home screen, you can define a set of icons for the browser to use. These icons are used in places like the home screen, app launcher, task switcher, splash screen, etc.

It's an array, like below:
"icons": [
  {
    "src": "/images/icons-192.png",
    "type": "image/png",
    "sizes": "192x192"
  },
  {
    "src": "/images/icons-512.png",
    "type": "image/png",
    "sizes": "512x512"
  }
]


start_url : The start_url tells the browser where your application should start when it is launched, and prevents the app from starting on whatever page the user was on when they added your app to their home screen.

background_color : The background_color property is used on the splash screen when the application is first launched.

display: You can customize what browser UI is shown when your app is launched. For example, you can hide the address bar and browser chrome. Or games may want to go completely full screen.

fullscreen : Opens the web application without any browser UI and takes up the entirety of the available display area.
standalone : Opens the web app to look and feel like a standalone native app. The app runs in its own window, separate from the browser, and hides standard browser UI elements like the URL bar, etc.
minimal-ui : Not supported by Chrome
This mode is similar to fullscreen, but provides the user with some means to access a minimal set of UI elements for controlling navigation (i.e., back, forward, reload, etc).

orientation : You can enforce a specific orientation, which is advantageous for apps that work in only one orientation, such as games. Use this selectively. Users prefer selecting the orientation.

scope : The scope defines the set of URLs that the browser considers to be within your app, and is used to decide when the user has left the app, and should be bounced back out to a browser tab. The scope controls the URL structure that encompasses all the entry and exit points in your web app. Your start_url must reside within the scope.

"scope": "/maps/"

A few other tips:

If you don't include a scope in your manifest, then the default implied scope is the directory that your web app manifest is served from.
The scope attribute can be a relative path (../), or any higher level path (/) which would allow for an increase in coverage of navigations in your web app.
The start_url must be in the scope.
The start_url is relative to the path defined in the scope attribute.
A start_url starting with / will always be the root of the origin.


theme_color : The theme_color sets the color of the tool bar, and may be reflected in the app's preview in task switchers.

It may take a bit of time before the app gets its first paint cycle; until then, Chrome will automatically create the splash screen from the manifest properties, including:

name
background_color
icons

The background_color should be the same color as the load page, to provide a smooth transition from the splash screen to your app.
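
Putting these properties together, a minimal manifest.json might look like the below (the values are only illustrative):

{
  "name": "My Web App",
  "short_name": "MyApp",
  "start_url": "/maps/?source=pwa",
  "scope": "/maps/",
  "display": "standalone",
  "orientation": "portrait",
  "background_color": "#3367D6",
  "theme_color": "#3367D6",
  "icons": [
    {
      "src": "/images/icons-192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "/images/icons-512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ]
}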

references:
https://developers.google.com/web/fundamentals/web-app-manifest/

Javascript : Adding Firebase to project

If the plan is to use Node.js in privileged environments such as servers or serverless backends like Cloud Functions (as opposed to clients for end-user access like a Node.js desktop or IoT device), you should instead follow the instructions for setting up the Admin SDK.

Below are the steps involved in adding Firebase to the web project:

1. Create a firebase project in the console.
2. From the project overview page in the Firebase console, click Add Firebase to your web app. If your project already has an app, select Add App from the project overview page.

The snippet contains initialization information to configure the Firebase JavaScript SDK to use Authentication, Cloud Storage, the Realtime Database, and Cloud Firestore. You can reduce the amount of code that your app uses by only including the features that you need.
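
In essence, the generated snippet boils down to a call to firebase.initializeApp with your project's configuration values (the placeholders below are illustrative, not real values):

// Values come from the snippet shown in the Firebase console.
var config = {
  apiKey: "<api-key>",
  authDomain: "<project-id>.firebaseapp.com",
  databaseURL: "https://<project-id>.firebaseio.com",
  projectId: "<project-id>",
  storageBucket: "<project-id>.appspot.com",
  messagingSenderId: "<sender-id>"
};
firebase.initializeApp(config);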

From the CDN, include only the individual components that you need (always include firebase-app first).

Now it can be run from a local server like below:

$ npm install -g firebase-tools
$ firebase init    # Generate a firebase.json (REQUIRED)
$ firebase serve   # Start development server

references:
https://firebase.google.com/docs/web/setup

Tuesday, December 18, 2018

Javascript : How to remove unwanted characters



var desired = stringToReplace.replace(/[^\w\s]/gi, '')
As was mentioned in the comments it's easier to do this as a whitelist - replace the characters which aren't in your safelist.

The caret (^) character is the negation of the set [...], gi says global and case-insensitive (the latter is a bit redundant but I wanted to mention it), and the safelist in this example is digits, word characters, underscores (\w) and whitespace (\s).

references:
https://stackoverflow.com/questions/4374822/remove-all-special-characters-with-regexp

Firebase : Can the push notification be sent to any string?

Yes, provided the topic string matches the character range [a-zA-Z0-9-_.~%]+; so if @ is present, it is not going to be a valid topic.

Topic provided to sendToTopic() must be a string which matches the format "/topics/[a-zA-Z0-9-_.~%]+". at FirebaseMessagingError.Error (native) at FirebaseMessagingError.FirebaseError [as constructor] (/user_code/node_modules/firebase-admin/lib/utils/error.js:39:28) at FirebaseMessagingError.PrefixedFirebaseError [as constructor] (/user_code/node_modules/firebase-admin/lib/utils/error.js:85:28) at new FirebaseMessagingError (/user_code/node_modules/firebase-admin/lib/utils/error.js:241:16) at Messaging.validateTopic (/user_code/node_modules/firebase-admin/lib/messaging/messaging.js:642:19) at /user_code/node_modules/firebase-admin/lib/messaging/messaging.js:328:19 at process._tickDomainCallback (internal/process/next_tick.js:135:7)
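
A quick way to turn an arbitrary string (an email address, for instance) into a valid topic name is to strip the disallowed characters first; a small sketch:

// Remove every character outside the allowed topic range before calling sendToTopic().
function toValidTopic(raw) {
  return raw.replace(/[^a-zA-Z0-9\-_.~%]/g, '');
}

// toValidTopic('user@example.com') === 'userexample.com'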

references:
https://stackoverflow.com/questions/4374822/remove-all-special-characters-with-regexp

Background Restricted Apps (Android P or newer)

Starting Jan 2019, FCM will not deliver messages to apps which were put into background restriction by the user (such as via: Setting -> Apps and Notification -> [appname] -> Battery). Once your app is removed from background restriction, new messages to the app will be delivered as before. In order to prevent lost messages and other background restriction impacts, make sure to avoid bad behaviors listed by the Android vitals effort. These behaviors could lead to the Android device recommending to the user that your app be background restricted. Your app can check if it is background restricted using: isBackgroundRestricted().

references:
https://developer.android.com/topic/performance/vitals/
https://firebase.google.com/docs/cloud-messaging/android/topic-messaging

Sunday, December 16, 2018

XCUITesting : Using Assertions with Objective C and Swift

XCTest assertions that perform equality tests are divided between those that compare objects and those that compare nonobjects. For example, XCTAssertEqualObjects tests equality between two expressions that resolve to an object type while XCTAssertEqual tests equality between two expressions that resolve to the value of a scalar type.

This difference is marked in the XCTest assertions listing by including “this test is for scalars” in the description. Marking assertions with “scalar” this way informs you of the basic distinction, but it is not an exact description of which expression types are compatible.

For Objective-C, assertions marked for scalar types can be used with the equality comparison operators: ==, !=, <=, <, >= and >. If the expression resolves to any C type, struct, or array that can be compared with these operators, it is considered scalar.

For Swift, assertions marked for scalars can be used to compare any expression type that conforms to the Equatable protocol (for all of the "equal" or "not equal" assertions) and the Comparable protocol for "greater than" or "less than" assertions. In addition, assertions marked for scalars have overrides for [T] and for [K:V], where T, K and V conform to the Equatable or Comparable protocols. For example, arrays of an equatable type are compatible with XCTAssertEqual(_:_:_:file:line:), and dictionaries whose keys and values are both comparable types are compatible with XCTAssertLessThan(_:_:_:file:line:).

Using XCTest assertions in our test also differs between Objective-C and Swift because of how language differ in treating data types and implicit conversions.

For Objective-C, the use of implicit conversions in the XCTest implementation allows the comparisons to operate independent of the expressions’ data types, and no check is made of the input data types.

For Swift, implicit conversions are not allowed because Swift is stricter about type safety; both parameters to a comparison must be of the same type. Type mismatches are flagged at compile time and in the source editor.

references:
https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/testing_with_xcode/chapters/04-writing_tests.html#//apple_ref/doc/uid/TP40014132-CH4-DontLinkElementID_5

XCUITesting: Writing UI Tests, Tests with Swift

Creating UI tests with XCTest is an extension of the same programming model as creating unit tests. The differences in workflow and implementation are focused around using UI recording and the XCTest UI testing APIs, described in User Interface Testing.


The difference in Swift with respect to Objective-C is that Swift has an access control model, which prevents an external entity from accessing anything declared as internal in an app or framework.

1) When you set the Enable Testability build setting to Yes, which is true by default for test builds in new projects, Xcode includes the -enable-testing flag during compilation. This makes the Swift entities declared in the compiled module eligible for a higher level of access.

2) When you add the @testable attribute to an import statement for a module compiled with testing enabled, you activate the elevated access for that module in that scope. Classes and class members marked as internal or public behave as if they were marked open. Other entities marked as internal act as if they were declared public.


Below is how to do it. With this solution in place, your Swift app code’s internal functions are fully accessible to your test classes and methods. The access granted for @testable imports ensures that other, non-testing clients do not violate Swift’s access control rules even when compiling for testing. Further, because your release build does not enable testability, regular consumers of your module (for example if you distribute a framework) can’t gain access to internal entities this way.


import XCTest

// Importing AppKit because of NSApplication
import AppKit

// Importing MySwiftApp because of AppDelegate
@testable import MySwiftApp

class MySwiftAppTests: XCTestCase {
    func testExample() {
        let appDelegate = NSApplication.sharedApplication().delegate as! AppDelegate
        appDelegate.foo()
    }
}


references:
https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/testing_with_xcode/chapters/04-writing_tests.html#//apple_ref/doc/uid/TP40014132-CH4-SW8

XCUITesting : Flow of Test execution



In the default case, when you run tests, XCTest finds all the test classes and, for each class, runs all of its test methods.

There are options available to change specifically what tests XCTest runs. You can disable tests using the test navigator or by editing the scheme. You can also run just one test or a subset of tests in a group using the Run buttons in either the test navigator or the source editor gutter.


For each class, testing starts by running the class setup method. For each test method, a new instance of the class is allocated and its instance setup method executed. After that it runs the test method, and after that the instance teardown method. This sequence repeats for all the test methods in the class. After the last test method teardown in the class has been run, Xcode executes the class teardown method and moves on to the next class. This sequence repeats until all the test methods in all test classes have been run.


One question that came to mind: what if we need some state from a previous test method's execution? Probably, we would need to call that method from the method under test.

references:
https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/testing_with_xcode/chapters/04-writing_tests.html#//apple_ref/doc/uid/TP40014132-CH4-SW33

Saturday, December 15, 2018

Android: Error:Program type already present: android.arch.lifecycle.LiveData



com.firebaseui:firebase-ui-firestore:3.1.0 depends on android.arch.lifecycle:extensions:1.0.0-beta1. Switching to version 3.2.2 fixes the issue by using the Lifecycle 1.1 libraries that Support Library 27.1.0 are built upon. - Issue Tracker

For me, removing the firebase-ui dependencies solved the issue since I wasn't even using the library in the first place.
Alternatively, the error went away after upgrading firebase-ui to 4.3.0.

references:
https://github.com/firebase/FirebaseUI-Android#compatibility-with-firebase--google-play-services-libraries

Friday, December 14, 2018

Android : Gradle, how to give arguments?

While building a project, I met with an issue which eventually said to run with the --stacktrace, --info, or --debug option to get more details.

The detail of the error on app building was as below. To pass these arguments to Gradle, one option is to run the build from the command line with the flag appended, for example ./gradlew assembleDebug --stacktrace (in Android Studio, the same flags can also be added under Preferences > Build, Execution, Deployment > Compiler > Command-line Options).

Error:Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':app:transformClassesWithInstantRunForDebug'.
> java.lang.NoClassDefFoundError (no error message)

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 57s

Android: Changes for Location updates in Oreo

1. Background apps will receive location updates only a few times each hour (the location update interval may be adjusted in the future based on system impact and feedback from developers).
2. Foreground apps are not affected by these limits
3. These background limits apply to all apps running on an O device, regardless of the target SDK.
4. Apps targeting O are further subject to limits on services started in the background. For these reasons, apps targeting O should not use PendingIntent.getService() when requesting location updates. Instead they should use PendingIntent.getBroadcast().

Below are the alternative to consider

1. Use foreground updates
Request updates in the foreground. This means requesting and removing updates as part of the activity lifecycle (request in onResume() and remove in onPause()). Apps running in the foreground are not subject to any location limits on O devices.

2. Use a foreground service. Request updates using a foreground service. This involves displaying a non-dismissible, persistent notification to users. While this may make sense for some use cases, developers should be thoughtful about using foreground services and about what messaging the user should see.

3. Use geofencing : If the use case relies on the device entering, dwelling in, or exiting a particular area of interest, this API provides a performant way to get these notifications. This approach is more efficient than checking location at regular intervals.

Below are some more tips when requesting location.

1. Use batched location updates.

references:

Android: Fixing Duplicate Class Errors

When building a project, the issue that came up was like below

* What went wrong:
Execution failed for task ':app:transformDexArchiveWithExternalLibsDexMergerForDebug'.
Program type already present: android.arch.lifecycle.ViewModel

Based on the details given below in the reference, the issue can occur in below cases

This error typically occurs due to one of the following circumstances:

1. A binary dependency includes a library that your app also includes as a direct dependency.
For example, your app declares a direct dependency on Library A and Library B, but Library A already includes Library B in its binary.
To resolve this issue, remove Library B as a direct dependency.

2. Your app has a local binary dependency and a remote binary dependency on the same library.
To resolve this issue, remove one of the binary dependencies.

Fix conflicts between classpaths

When Gradle resolves the compile classpath, it first resolves the runtime classpath and uses the result to determine what versions of dependencies should be added to the compile classpath. In other words, the runtime classpath determines the required version numbers for identical dependencies on downstream classpaths.

Your app's runtime classpath also determines the version numbers that Gradle requires for matching dependencies in the runtime classpath for the app's test APK. The hierarchy of classpaths is described in figure 1.

Conflict with dependency 'com.example.library:some-lib:2.0' in project 'my-library'.
Resolved versions for runtime classpath (1.0) and compile classpath (2.0) differ.

You might see this conflict when, for example, your app includes a version of a dependency using the implementation dependency configuration and a library module includes a different version of the dependency using the runtimeOnly configuration. To resolve this issue, do one of the following:

Include the desired version of the dependency as an api dependency to your library module. That is, only your library module declares the dependency, but the app module will also have access to its API, transitively.

Alternatively, you can declare the dependency in both modules, but you should make sure that each module uses the same version of the dependency. Consider configuring project-wide properties to ensure versions of each dependency remain consistent throughout your project.


references:
https://developer.android.com/studio/build/dependencies#duplicate_classes

Android : JaCoCo for Code coverage


JaCoCo should provide the standard technology for code coverage analysis in Java VM based environments. The focus is providing a lightweight, flexible and well documented library for integration with various build and development tools.

There are several open source coverage technologies for Java available. While implementing the Eclipse plug-in EclEmma the observation was that none of them are really designed for integration. Most of them are specifically fit to a particular tool (Ant tasks, command line, IDE plug-in) and do not offer a documented API that allows embedding in different contexts. Two of the best and widely used available open source tools are EMMA and Cobertura. Both tools are not actively maintained by the original authors any more and do not support the current Java versions. Due to the lack of regression tests maintenance and feature additions is difficult.

Therefore we started the JaCoCo project to provide a new standard technology for code coverage analysis in Java VM based environments. The focus is providing a lightweight, flexible and well documented library for integration with various build and development tools. Ant tasks, a Maven plug-in and the EclEmma Eclipse plug-in are provided as reference usage scenarios. Also many other tool vendors and Open Source projects have integrated JaCoCo into their tools.

In Android Studio, this option is available when running tests with coverage enabled (Run with Coverage).

references:
https://www.jacoco.org/jacoco/trunk/doc/mission.html

XCUITesting : User Interface Testing iOS



UI testing gives you the ability to find and interact with the UI of your app in order to validate the properties and state of the UI elements.
UI testing also has UI recording, which lets the developer generate code that exercises the app's UI the same way a developer would.

UI tests rest upon two core technologies. XCTest framework and Accessibility.

XCTest is the framework that provides UI testing capabilities. The developer needs to create a UI test target, and also create UI test classes and UI test methods as part of the project. You use XCTest assertions to validate that expected outcomes are true. Developers also get continuous integration via Xcode Server and xcodebuild. XCTest is fully compatible with both Objective-C and Swift.

Accessibility technology is used because it exposes rich semantic data about the UI that UI tests can use to find and interact with elements.

When we create a UI Test target, Xcode creates a default UI test group and implementation file for us with an example test template. When we create a UI test target, we specify the app that the test will address.

iOS devices need to be enabled for development and connected to a trusted host. macOS needs permission granted to a special Xcode Helper app (prompted automatically on first use).

When UI testing happens, the test code runs as a separate process, synthesising events that the UI in our app responds to.

Below are the three APIs the XCUITesting is dependent on.

XCUIApplication
XCUIElement
XCUIElementQuery

Below is the sequence for UI recording:

    Using the test navigator, create a UI testing target.

    In the template file that is created, place the cursor into the test function.

    Start UI recording.

    The app launches and runs. Exercise the app doing a sequence of UI actions. Xcode captures the actions into source in the body of the function.

    When done with the actions you want to test, stop UI recording.

    Add XCTest assertions to the source.


API tests can have both functional and performance aspects, and so can UI tests. UI tests operate at the surface space of the app and tend to integrate a lot of low level functionality into what the user sees as presentation and response.

UI tests fundamentally operate on the level of events and responses.

    Query to find an element.
    Know the expected behavior of the element as reference.
    Tap or click the element to elicit a response.
    Measure that the response matches or does not match the expected for a pass/fail result.

UI tests have both functional and performance aspects, just like unit tests.

The general pattern of a UI test for correctness is as follows:
    Use an XCUIElementQuery to find an XCUIElement.
    Synthesize an event and send it to the XCUIElement.
    Use an assertion to compare the state of the XCUIElement against an expected reference state.

To construct a UI test for performance, wrap a repeatable UI sequence of steps into the measureBlock structure


References:
https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/testing_with_xcode/chapters/09-ui_testing.html

XCUITesting - Test Basics



The more you can divide up the behavior of your app into components, the more effectively you can test that the behavior of your code meets the reference standards in all particulars as your project grows and changes. For a large project with many components, you’ll need to run a large number of tests to test the project thoroughly. Tests should be designed to run quickly, when possible, but some tests are necessarily large and execute more slowly. Small, fast running tests can be run often and used when there is a failure in order to help diagnose and fix problems easily.

Tests designed for the components of a project are the basis of test-driven development, which is a style of writing code in which you write test logic before writing the code to be tested. This development approach lets you codify requirements and edge cases for your code before you implement it. After writing the tests, you develop your algorithms with the aim of passing the tests. After your code passes the tests, you have a foundation upon which you can make improvements to your code, with confidence that any changes to the expected behavior (which would result in bugs in your product) are identified the next time you run the tests.

Various types of testing are:

1. Performance testing
2. User Interface testing
3. App and Library Tests

Below are the various types of Testing

Performance Testing

1. Tests of components can be either functional in nature or measure performance. XCTest provides API to measure time-based performance, enabling you to track performance improvements and regressions in a similar way to functional compliance and regressions.

2. To provide a success or failure result when measuring performance, a test must have a baseline to evaluate against. A baseline is a combination of the average time performance in ten runs of the test method with a measure of the standard deviation of each run. Tests that drop below the time baseline or that vary too much from run to run are reported as failures.

User Interface Testing

The functional and performance testing discussed so far is generally referred to as unit testing, where a “unit” is the component of functionality that you have decided upon with respect to granularity and level. Unit testing is primarily concerned with forming good components that behave as expected and interact with other components as expected. From the design perspective, unit testing approaches your development project from its inside, vetting that the components fulfill your intent.


App and Library Tests

Xcode offers two types of unit test contexts: app tests and library tests.

    App tests. App tests check the correct behavior of code in your app, such as the example of the calculator app’s arithmetic operations.

    Library tests. Library tests check the correct behavior of code in dynamic libraries and frameworks independent of their use in an app’s runtime. With library tests you construct unit tests that exercise the components of a library.
Testing your projects using these test contexts as appropriate helps you maintain expected and anticipated behavior as your code evolves over time.

Below is some of the guidelines when writing the ui tests.

When you start to create tests, keep the following ideas in mind:

    When creating unit tests, focus on testing the most basic foundations of your code, the Model classes and methods, which interact with the Controller.

    A high-level block diagram of your app most likely has Model, View, and Controller classes—it is a familiar design pattern to anyone who has been working with Cocoa and Cocoa Touch. As you write tests to cover all of your Model classes, you’ll have the certainty of knowing that the base of your app is well tested before you work your way up to writing tests for the Controller classes—which start to touch other more complex parts of your app, for example, a connection to the network with a database connected behind a web service.

    As an alternative starting point, if you are authoring a framework or library, you may want to start with the surface of your API. From there, you can work your way in to the internal classes.

    When creating UI tests, start by considering the most common workflows. Think of what the user does when getting started using the app and what UI is exercised immediately in that process. Using the UI recording feature is a great way to capture a sequence of user actions into a UI test method that can be expanded upon to implement tests for correctness and/or performance.

    UI tests of this type tend to start with a relatively coarse-grained focus and might cut across several subsystems; they can return a lot of information that can be hard to analyze at first. As you work with your UI test suite, you can refine the testing granularity and focus UI tests to reflect specific subsystem behaviors more clearly.

References:
https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/testing_with_xcode/chapters/04-writing_tests.html#//apple_ref/doc/uid/TP40014132-CH4-SW8

XCUITesting - Writing Test classes and Methods



Below is how a test can be organised. Tests are grouped into test bundles, and test bundles can contain multiple test classes. Test classes can mainly be used for segregating tests into related groups, up to the developer, either for functional or organisational purposes.

All test classes are subclasses of XCTestCase.



References:
https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/testing_with_xcode/chapters/04-writing_tests.html#//apple_ref/doc/uid/TP40014132-CH4-SW1

XCUITesting - Running Tests and Viewing Results

Tests can be run singly or as a group. When hovering over a test function, a run button appears; to run a group, a run button appears for the group as a whole.

To run the current active scheme

Product > Test. Runs the currently active scheme. The keyboard shortcut is Command-U.

Product > Build for > Testing and Product > Perform Action > Test without Building. These two commands can be used to build the test bundle products and run the tests independent of one another. These are convenience commands to shortcut the build and test processes. They're most useful when changing code to check for warnings and errors in the build process, and for speeding up testing when you know the build is up to date.

Product > Perform Action > Test . This dynamic menu item senses the current test method in which the editing insertion point is positioned when you’re editing a test method and allows you to run that test with a keyboard shortcut


For a performance test, click the value in the Time column to obtain a detailed report on the performance result. You can see the aggregate performance of the test as well as values for each of the ten runs by clicking the individual test run buttons. The Edit button enables you to set or modify the test baseline and the max standard deviation allowed in the indication of pass or fail.

Using the Logs panel, you can view the associated failure description string and other summary output. By opening the disclosure triangles, you can drill down to all the details of the test run

The debug console displays comprehensive information about the test run in a textual format. It’s the same information as shown by the log navigator, but if you have been actively engaged in debugging, any output from the debugging session also appears there.

Using Xcode Server and continuous integration requires a scheme to be set to Shared using the checkbox in the Manage Schemes sheet, and checked into a source repository along with your project and source code. This marks the scheme as shared and available for use by bots with Xcode Server.


App tests run in the context of your app, allowing you to create tests which combine behavior that comes from the different classes, libraries/frameworks, and functional aspects of your app. Library tests exercise the classes and methods within a library or framework, independent of your app, to validate that they behave as the library’s specification requires.




References:
https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/testing_with_xcode/chapters/05-running_tests.html#//apple_ref/doc/uid/TP40014132-CH5-SW1

Wednesday, December 12, 2018

GCP : Cloud Data Flow & Apache Beam SDK

Cloud Dataflow supports fast, simplified pipeline development via expressive Java and Python APIs in the Apache Beam SDK, which provides a rich set of windowing and session analysis primitives as well as an ecosystem of source and sink connectors. Plus, Beam’s unique, unified development model lets you reuse more code across streaming and batch pipelines.

Below are some concepts of Apache Beam:

Unified : Use a single programming model for both batch and streaming use cases.
Portable : Execute pipelines on multiple execution environments.
Extensible : Write and share new SDKs, IO connectors, and transformation libraries.



references:
https://cloud.google.com/dataflow/

GCP Cloud Data Flow

Simplified stream and batch data processing, with equal reliability and expressiveness.
Cloud Dataflow is a fully managed service for transforming and enriching data in stream (real time)
and batch (historical) modes with equal reliability and expressiveness; no more complex workarounds or compromises are needed.
And with its serverless approach to resource provisioning and management, one has access to virtually
limitless capacity to solve the biggest data processing challenges, while paying only for what is used.

Cloud Dataflow unlocks transformational use cases across industries, including:


Clickstream, Point-of-Sale, and segmentation analysis in retail
Fraud detection in financial services
Personalized user experience in gaming
IoT analytics in manufacturing, healthcare, and logistics


references:
https://cloud.google.com/dataflow/

GCP: What is cloud data studio

Currently in beta, Google Data Studio allows you to create branded reports with data visualizations to share with your clients. ... Google Data Studio is part of the Google Analytics 360 Suite — the high-end (i.e., pricey) Google Analytics Enterprise package

Data Studio, part of Google Marketing Platform and closely integrated with Google Cloud, allows you to easily access data from Google Analytics, Google Ads, Display & Video 360, Search Ads 360, YouTube Analytics, Google Sheets, Google BigQuery and over 500 more data sources, both Google and non-Google, to visualize and interactively explore data. It allows you to easily share your insights with anyone in your organization. And beyond just sharing, Data Studio offers seamless real-time collaboration with others—whether you’re sitting across the room, or across the world.


references
https://datastudio.google.com/u/0/navigation/reporting

GCP : Cloud Big Table

Cloud Bigtable is a low-latency, massively scalable NoSQL database.

It has the following properties:

- Consistent sub-10 ms latency
- Replication provides high availability, higher durability, and resilience in the face of zonal failures.
- Ideal for Ad Tech, Fintech and IoT.
- Storage engine for machine learning applications
- Easy integration with open source big data solutions

Below are some of the benefits of Cloud Bigtable:

1. Fast and performant
Use Cloud Bigtable as the storage engine for large-scale, low-latency applications as well as throughput-intensive data processing and analytics.

2. Seamless scaling and replication
Provision and scale to hundreds of petabytes and smoothly handle millions of operations per second. Changes to the deployment configuration are immediate, so there is no downtime during reconfiguration. Replication adds high availability for live serving apps and workload isolation for serving vs. analytics.

3. Simple and integrated
Cloud Bigtable integrates easily with popular big data tools like Hadoop, Cloud Dataflow, and Cloud Dataproc. Also, Cloud Bigtable supports the industry-standard HBase API, which makes it easy for development teams to get started (a short HBase example follows this list).

4. It's a fully managed database
It is fully managed by Google, leaving application developers free to focus on application logic.
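
To illustrate the HBase API point, here is a rough sketch that writes and reads one cell, assuming the bigtable-hbase client library is on the classpath and that the table and its cf1 column family already exist; the project, instance, and table names are placeholders.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import com.google.cloud.bigtable.hbase.BigtableConfiguration;

public class BigtableHBaseSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder project and instance IDs.
        try (Connection connection =
                     BigtableConfiguration.connect("my-gcp-project", "my-bigtable-instance")) {
            Table table = connection.getTable(TableName.valueOf("my-table"));

            // Write one cell.
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("greeting"), Bytes.toBytes("hello"));
            table.put(put);

            // Read it back.
            Result result = table.get(new Get(Bytes.toBytes("row-1")));
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("greeting"))));
        }
    }
}

Because this is plain HBase client code, an existing HBase application can usually be pointed at Cloud Bigtable by swapping the connection setup rather than rewriting the data access layer.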

references:
https://cloud.google.com/bigtable/

HTML, CSS : How to implement loading buttons

The simplest way, roughly as in the referenced w3schools example, is a button element with a Font Awesome spinner icon inside it, styled with plain CSS:

<button class="buttonload">
  <i class="fa fa-spinner fa-spin"></i> Loading
</button>

/* Style buttons */
.buttonload {
  background-color: #4CAF50; /* Green background */
  border: none; /* Remove borders */
  color: white; /* White text */
  padding: 12px 16px; /* Some padding */
  font-size: 16px; /* Set a font size */
}

Another option is to use the library

https://loading.io/button/

It's quite nice and easy. The library animates the button while it has the running CSS class, so all that is needed is a small helper that toggles that class; call it with add = true when a request starts and add = false when it finishes.



function animateDivForLoading(buttonId, add) {
    // Toggle the 'running' class that drives the loading.io button animation.
    var element = document.getElementById(buttonId);
    if (element) {
        if (add) {
            element.classList.add('running');
        } else {
            element.classList.remove('running');
        }
    }
}

references:
https://www.w3schools.com/howto/howto_css_loading_buttons.asp
https://loading.io/button/

GCP: Cloud Pub/sub

Cloud Pub/Sub is a simple, reliable, scalable foundation for stream analytics and event-driven computing systems.
As part of Google Cloud's stream analytics solution, the service ingests event streams and delivers them to Cloud Dataflow for processing and to BigQuery for analysis as a data warehousing solution. Relying on the Cloud Pub/Sub service for delivery of event data frees you to focus on transforming your business and data systems with applications such as:

Real-time personalization in gaming
Fast reporting, targeting and optimization in advertising and media
Processing device data for healthcare, manufacturing, oil and gas, and logistics
Syndicating market-related data streams for financial services

Build multi-cloud and hybrid applications on an open architecture.
Syndicate data across projects and applications running on other clouds, or between cloud and on-premises apps. Cloud Pub/Sub easily fits in your existing environment via efficient client libraries for multiple languages, open REST/HTTP and gRPC service APIs, and an open source Apache Kafka connector.
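
As a small illustration of those client libraries, below is a rough sketch of publishing a message with the google-cloud-pubsub Java client; the project and topic names are placeholders and application-default credentials are assumed.

import com.google.api.core.ApiFuture;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;

public class PublisherSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder topic resource name.
        Publisher publisher =
                Publisher.newBuilder("projects/my-gcp-project/topics/my-topic").build();
        try {
            PubsubMessage message = PubsubMessage.newBuilder()
                    .setData(ByteString.copyFromUtf8("hello pub/sub"))
                    .build();
            // publish() is asynchronous and returns a future with the server-assigned message ID.
            ApiFuture<String> messageId = publisher.publish(message);
            System.out.println("Published message " + messageId.get());
        } finally {
            publisher.shutdown();
        }
    }
}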

Scale responsively and automatically
Scale to hundreds of millions of messages per second and pay only for the resources you use. There are no partitions or local instances to manage, reducing operational overhead. Data is automatically and intelligently distributed across data centers over our unique, high-speed private network.

Bring reliability and security tools to real-time apps
Use Cloud Pub/Sub to simplify scalable, distributed systems. All published data is synchronously replicated across availability zones to ensure that messages are available to consumers for processing as soon as they are ready. Fine-grained access controls allow for sophisticated cross-team and organizational data sharing. And end-to-end encryption adds security to your pipelines.

Below is how Cloud Pub/Sub typically fits into a pipeline:

- Applications, devices, and databases ingest data into Cloud Pub/Sub.
- The events are then processed in Cloud Dataflow.
- For analysis, they are fed into BigQuery, Cloud Machine Learning, and Cloud Bigtable.
- BigQuery feeds Data Studio and third-party tools for data warehousing and reporting.

Cloud Machine Learning can be used for predictive analytics, while Cloud Bigtable is used for caching and serving.
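
On the consuming side, a rough sketch of a pull subscriber with the same Java client library is shown below; the subscription name is a placeholder and the subscription is assumed to already exist on the topic.

import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.PubsubMessage;

public class SubscriberSketch {
    public static void main(String[] args) throws Exception {
        // Callback invoked for each message; ack() tells Pub/Sub the message was handled.
        MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
            System.out.println("Received: " + message.getData().toStringUtf8());
            consumer.ack();
        };

        // Placeholder subscription resource name.
        Subscriber subscriber = Subscriber
                .newBuilder("projects/my-gcp-project/subscriptions/my-subscription", receiver)
                .build();
        subscriber.startAsync().awaitRunning();
        Thread.sleep(60_000); // listen for a minute in this sketch
        subscriber.stopAsync().awaitTerminated();
    }
}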

references:
https://cloud.google.com/pubsub

Monday, December 10, 2018

GCP : StackDriver Monitoring



Monitoring collects metrics, events, and metadata from Google Cloud Platform, Amazon Web Services (AWS), hosted uptime probes, application instrumentation, and a variety of common application components including Cassandra, Nginx, Apache Web Server, Elasticsearch and many others. Stackdriver ingests that data and generates insights via dashboards, charts, and alerts.

Access Control

Monitoring controls access to monitoring data in Workspaces using Cloud Identity and Access Management (Cloud IAM) roles and permissions.

In general, each REST method in an API has an associated permission, and you must have the permission to use the corresponding method. Permissions are not granted directly to users; permissions are instead granted indirectly through roles, which group multiple permissions to make managing them easier.

Roles for common combinations of permissions are predefined for you, but it is also possible to create your own combinations of permissions by creating Cloud IAM custom roles.

Below are the common predefined roles

roles/monitoring.viewer => Gives you read-only access to the Stackdriver Monitoring console and API
roles/monitoring.editor => Gives you read-write access to the Stackdriver Monitoring console and API, and lets you write monitoring data to a Workspace
roles/monitoring.admin  => Gives you full access to all Monitoring features
roles/monitoring.metricWriter => Permits writing monitoring data to a Workspace; does not permit access to the Stackdriver Monitoring console. Intended for service accounts (see the sketch at the end of this section).

Alert Policies
roles/monitoring.alertPolicyViewer => Gives you read-only access to alerting policies
roles/monitoring.alertPolicyEditor => Gives you read-write access to alerting policies

Notification Channels
roles/monitoring.notificationChannelViewer => Gives you read-only access to notification channels
roles/monitoring.notificationChannelEditor => Gives you read-write access to notification channels

If you have the basic Google Cloud Platform roles, they map to the following Monitoring access:

roles/viewer => Gives read-only access to the Stackdriver Monitoring console and the API
roles/editor => Gives read-write access to the Stackdriver Monitoring console and the API
roles/owner  => Gives full access to the Stackdriver Monitoring console and the API

Custom Permissions
You can also create your own custom roles that contain specific lists of permissions.
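
To make the roles/monitoring.metricWriter role concrete, below is a rough sketch of writing one point of a custom metric with the google-cloud-monitoring Java client; the project ID and metric type are placeholders, and it assumes application-default credentials for a service account that holds that role.

import com.google.api.Metric;
import com.google.api.MonitoredResource;
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.CreateTimeSeriesRequest;
import com.google.monitoring.v3.Point;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.monitoring.v3.TypedValue;
import com.google.protobuf.util.Timestamps;

public class CustomMetricSketch {
    public static void main(String[] args) throws Exception {
        ProjectName project = ProjectName.of("my-gcp-project"); // placeholder project ID

        try (MetricServiceClient client = MetricServiceClient.create()) {
            // A single data point stamped with the current time.
            Point point = Point.newBuilder()
                    .setInterval(TimeInterval.newBuilder()
                            .setEndTime(Timestamps.fromMillis(System.currentTimeMillis()))
                            .build())
                    .setValue(TypedValue.newBuilder().setDoubleValue(42.0).build())
                    .build();

            TimeSeries series = TimeSeries.newBuilder()
                    .setMetric(Metric.newBuilder()
                            .setType("custom.googleapis.com/orders/checkout_latency") // placeholder
                            .build())
                    .setResource(MonitoredResource.newBuilder()
                            .setType("global")
                            .putLabels("project_id", "my-gcp-project")
                            .build())
                    .addPoints(point)
                    .build();

            client.createTimeSeries(CreateTimeSeriesRequest.newBuilder()
                    .setName(project.toString())
                    .addTimeSeries(series)
                    .build());
        }
    }
}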




References:
https://cloud.google.com/monitoring/docs/

GCP: Stack Driver Logging

Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services (AWS).
Stackdriver Logging is a fully managed service that performs at scale and can ingest application and system log data from thousands of VMs.

Stackdriver Logging is a fully integrated solution that works seamlessly with Stackdriver Monitoring, Trace, Error Reporting, and Debugger.
The integration allows you to navigate between incidents, charts, traces, errors, and logs. This helps you quickly find the root cause of issues in your system and applications

Stackdriver Logging is built to scale: it sustains sub-second ingestion latency at terabytes-per-second throughput.
Stackdriver Logging is a fully managed solution that takes away the overhead of deploying or managing a cluster.

Below are the main features for Stack Driver Logging

Custom Logs / Ingestion API : Stackdriver Logging has a public API which can be used to write any custom log, from any source, into the service (see the sketch after this list).
AWS Integration / Agent : Stackdriver Logging uses a Google-customized and packaged Fluentd agent that can be installed on any AWS or Cloud Platform VM to ingest log data from Cloud Platform instances (for example, Compute Engine, Managed VMs, or Containers) as well as AWS EC2 instances.
Logs Retention : Allows you to retain the logs in Stackdriver Logging for 30 days, and gives you a one-click configuration tool to archive data for a longer period in Google Cloud Storage.
Logs Search : A powerful interface to search, slice and dice, and browse log data.
Logs Based Metrics : Stackdriver Logging allows you to create metrics from log data which appear seamlessly in Stackdriver Monitoring, where you can visualize these metrics and create dashboards.
Logs Alerting : Integration with Stackdriver Monitoring allows you to set alerts on log events, including the logs-based metrics you have defined.
Advanced Analytics with BigQuery : Export data in real time to BigQuery with one-click configuration for advanced analytics and SQL-like querying.
Archive with Cloud Storage : Export log data to Google Cloud Storage to archive so you can store data for longer periods of time in a cost-effective manner.
Stream Logs with Cloud Pub/Sub : Stream your logging data via Cloud Pub/Sub with a third-party solution or a custom endpoint of your choice.
Third-party Integrations: Stackdriver Logging supports easy integration with Splunk and other partners.
Audit Logging : The Stackdriver Logs Viewer, APIs, and the gcloud CLI can be used to access audit logs that capture all the admin and data access events within the Google Cloud Platform.
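
As a rough sketch of the ingestion API, the snippet below writes a single custom log entry with the google-cloud-logging Java client; the log name is a placeholder and application-default credentials are assumed.

import java.util.Collections;

import com.google.cloud.MonitoredResource;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;
import com.google.cloud.logging.Severity;

public class CustomLogSketch {
    public static void main(String[] args) throws Exception {
        try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
            LogEntry entry = LogEntry.newBuilder(StringPayload.of("order service started"))
                    .setSeverity(Severity.INFO)
                    .setLogName("my-custom-log") // placeholder log name
                    .setResource(MonitoredResource.newBuilder("global").build())
                    .build();
            logging.write(Collections.singleton(entry));
        }
    }
}

Once written, the entry shows up in the Logs Viewer under the chosen log name and can feed logs-based metrics, exports, and alerts like any other log.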

Pricing: the first 50 GiB per month is free; beyond that, logs are charged at $0.50 per GiB.


References:
https://cloud.google.com/logging/

Saturday, December 8, 2018

Android : How to show user image on the marker, zoom to fit all the markers

Below is the code to do this!

public void updateMapWithUserLocations(){
        // users, locations, and images are assumed to be fields populated elsewhere in this class.
        final ArrayList<Marker> markers = new ArrayList<>();
        targets = new ArrayList<>();

        for (int i = 0; i < users.length; i++){
            final LatLng location = locations[i];
            final String imageURL = images[i];
            Marker userMarker = mMap.addMarker(new MarkerOptions()
                    .position(location)
                    .icon(BitmapDescriptorFactory.fromBitmap(BitmapFactory.decodeResource(getResources(), R.drawable.location_icon)))
                    .title("test")
                    .snippet("test address")
            );
            markers.add(userMarker);
            PicassoTarget pTarget = new PicassoTarget(userMarker);
            targets.add(pTarget);
            Picasso.with(this)
                    .load(imageURL)
                    .resize(100,100)
                    .centerCrop()
                    .transform(new BubbleTransformation(5))
                    .into(pTarget);

        }
        fitAllMarkersInZoom(mMap, markers);
    }

    class PicassoTarget implements Target{
        private Marker marker;

        PicassoTarget(Marker marker){
            this.marker = marker;
        }

        @Override
        public void onBitmapLoaded(Bitmap bitmap, Picasso.LoadedFrom from) {
            this.marker.setIcon(BitmapDescriptorFactory.fromBitmap(bitmap));
        }

        @Override
        public void onBitmapFailed(Drawable errorDrawable) {

        }

        @Override
        public void onPrepareLoad(Drawable placeHolderDrawable) {

        }
    }


    private void fitAllMarkersInZoom(GoogleMap googleMap, ArrayList<Marker> markers){

        try{
            LatLngBounds.Builder builder = new LatLngBounds.Builder();
            for (Marker marker : markers) {
                builder.include(marker.getPosition());
            }
            final LatLngBounds bounds = builder.build();

            try {
                mMap.animateCamera(CameraUpdateFactory.newLatLngBounds(bounds, 50));
            } catch (IllegalStateException e) {
                // layout not yet initialized
                final View mapView = getFragmentManager()
                        .findFragmentById(R.id.map).getView();
                if (mapView.getViewTreeObserver().isAlive()) {
                    mapView.getViewTreeObserver().addOnGlobalLayoutListener(
                            new ViewTreeObserver.OnGlobalLayoutListener() {
                                @SuppressWarnings("deprecation")
                                @SuppressLint("NewApi")
                                // We check which build version we are using.
                                @Override
                                public void onGlobalLayout() {
                                    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.JELLY_BEAN) {
                                        mapView.getViewTreeObserver()
                                                .removeGlobalOnLayoutListener(this);
                                    } else {
                                        mapView.getViewTreeObserver()
                                                .removeOnGlobalLayoutListener(this);
                                    }
                                    mMap.animateCamera(CameraUpdateFactory.newLatLngBounds(bounds, 50));
                                }
                            });
                }
            }
        }
        catch (Exception ex){
            ex.printStackTrace();
        }
    }

GCP : What is Cloud IAM


Cloud Identity & Access Management (Cloud IAM) lets administrators authorize who can take action on specific resources, giving you full control and visibility to manage cloud resources centrally. For established enterprises with complex organizational structures, hundreds of workgroups, and potentially many more projects, Cloud IAM provides a unified view into security policy across your entire organization, with built-in auditing to ease compliance processes.

Leverage Cloud Identity, Google Cloud’s built-in managed identity to easily create or sync user accounts across applications and projects. Cloud Identity makes it easy to provision and manage users and groups, set up single sign-on, and configure multi-factor authentication directly from the Google Admin Console. With Cloud Identity you get access to the GCP Organization, which enables you to centrally manage projects via the Cloud Resource Manager.

Cloud IAM provides the right tools to manage resource permissions with minimum fuss and high automation. Map job functions within your company to groups and roles. Users get access only to what they need to get the job done, and admins can easily grant default permissions to entire groups of users.


Cloud IAM enables you to grant access to cloud resources at fine-grained levels, well beyond project-level access.

Create more granular access control policies to resources based on attributes like device security status, IP address, resource type, and date/time. These policies help ensure that the appropriate security controls are in place when granting access to cloud resources.

A full audit trail history of permissions authorization, removal, and delegation gets surfaced automatically for your admins. Cloud IAM lets you focus on business policies around your resources and makes compliance easy.

Control resource permissions using a variety of options: graphically from the Cloud Platform console, programmatically via Cloud IAM methods, or using the gcloud command line interface.
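
As a rough illustration of the programmatic option, the sketch below uses the google-cloud-resourcemanager Java client to read a project's IAM policy and add a member to the viewer role; the project ID and email address are placeholders, and the caller's application-default credentials are assumed to be allowed to get and set the project policy.

import com.google.cloud.Identity;
import com.google.cloud.Policy;
import com.google.cloud.Role;
import com.google.cloud.resourcemanager.ResourceManager;
import com.google.cloud.resourcemanager.ResourceManagerOptions;

public class IamPolicySketch {
    public static void main(String[] args) {
        ResourceManager resourceManager = ResourceManagerOptions.getDefaultInstance().getService();

        // Read the current policy for a placeholder project and print its role bindings.
        Policy policy = resourceManager.getPolicy("my-gcp-project");
        System.out.println("Current bindings: " + policy.getBindings());

        // Grant the viewer role to a placeholder user and write the policy back.
        Policy updated = policy.toBuilder()
                .addIdentity(Role.of("roles/viewer"), Identity.user("alice@example.com"))
                .build();
        resourceManager.replacePolicy("my-gcp-project", updated);
    }
}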


references:
https://cloud.google.com/iam/

Friday, December 7, 2018

What is Google Play Protect?


Google Play Protect is Google's built-in malware protection for Android. Backed by the strength of Google's machine learning algorithms, it is always improving in real time.

Google Play Protect continuously works to keep your device, data and apps safe. It automatically scans your device and makes sure that you have the latest in mobile security, so you can rest easy.

If you’ve misplaced your device, Find My Device has your back. You can locate it by signing into your Google account, or even call it directly from your browser. Lock your device remotely or display a message on the lock screen, so if someone finds it they know who to contact. Plus, if you’re convinced that it’s lost for good you can delete all your data.

All Android apps undergo rigorous security testing before appearing in the Google Play Store. We vet every app and developer in Google Play, and suspend those who violate our policies. Then, Play Protect scans billions of apps daily to make sure that everything remains spot on. That way, no matter where you download an app from, you know it’s been checked by Google Play Protect.

With Safe Browsing protection in Chrome, you can browse with confidence. If you visit a site that's acting out of line, you'll be warned and taken back to safety.

references:
https://www.android.com/play-protect/

Thursday, December 6, 2018

GCP : What is Cloud Interconnect

Cloud Interconnect provides low latency, highly available connections that enable you to reliably transfer data between your on-premises and VPC networks. Also, Cloud Interconnect connections provide RFC 1918 communication, which means internal (private) IP addresses are directly accessible from both networks.

Cloud Interconnect offers two options for extending your on-premises network. Google Cloud Interconnect - Dedicated (Dedicated Interconnect) provides a direct physical connection between your on-premises network and Google’s network. Google Cloud Interconnect - Partner (Partner Interconnect) provides connectivity between your on-premises and GCP VPC networks through a supported service provider.

Benefits of Cloud Interconnect

Traffic between your on-premises network and your VPC network doesn't traverse the public Internet. Traffic traverses a dedicated connection or goes through a service provider with a dedicated connection. By bypassing the public Internet, your traffic takes fewer hops, so there are fewer points of failure where it might get dropped or disrupted.

Your VPC network's internal (RFC 1918) IP addresses are directly accessible from your on-premises network. You don't need to use a NAT device or VPN tunnel to reach internal IP addresses. Currently, you can't use Cloud Interconnect to access external (non-RFC 1918) IP addresses; instead, you must use a separate connection, such as Carrier Peering.

You can scale your connection to Google based on your needs. For Dedicated Interconnect, connection capacity is delivered over one or more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect). For Partner Interconnect, connection capacity for each VLAN can range from 50 Mbps to 10 Gbps.

The cost of egress traffic from your VPC network to your on-premises network is reduced. Cloud Interconnect is generally the least expensive method if you have a high-volume of traffic to and from Google’s network.

Considerations

If you don't require the low latency and high availability of Cloud Interconnect, consider using Cloud VPN to set up IPsec VPN tunnels between your networks. IPsec VPN tunnels encrypt data by using industry standard IPsec protocols as traffic traverses the public Internet.

An IPsec VPN tunnel doesn't require the overhead or costs that are associated with a direct, private connection. Cloud VPN only requires a VPN device in your on-premises network.



References:
https://cloud.google.com/interconnect/docs/concepts/overview

Git : How to sync a forked repository with its original

Open the forked Git repository me/foobar
Click on Compare:



You will get the notification:

    There isn't anything to compare.
    someone:master is up to date with all commits from me:master. Try switching the base for your comparison.

Click on switching the base on this page:






Then you get to see all the commits made to someone/foobar after the day you forked it.
Click on Create pull request:





Give the pull request a title and maybe a description and click Create pull request.

On the next page, scroll to the bottom of the page and click Merge pull request and Confirm merge.

Your Git repository me/foobar will be updated.


References:
https://stackoverflow.com/questions/20984802/how-can-i-keep-my-fork-in-sync-without-adding-a-separate-remote/21131381#21131381