Thursday, May 31, 2018

What is a NAL Unit and Coded Video Sequence in Video RTP?

A set of NAL units in a specified form is referred to as an access unit. The decoding of each access unit results in one decoded picture. Each access unit contains a set of VCL NAL units that together compose a primary coded picture. It may also be prefixed with an access unit delimiter to aid in locating the start of the access unit. Some supplemental enhancement information containing data such as picture timing information may also precede the primary coded picture.

The primary coded picture consists of a set of VCL NAL units consisting of slices or slice data partitions that represent the samples of the video picture. Following the primary coded picture may be some additional VCL NAL units that contain redundant representations of areas of the same video picture. These are referred to as redundant coded pictures, and are available for use by a decoder in recovering from loss or corruption of the data in the primary coded pictures. Decoders are not required to decode redundant coded pictures if they are present.

Finally, if the coded picture is the last picture of a coded video sequence (a sequence of pictures that is independently decodable and uses only one sequence parameter set), an end of sequence NAL unit may be present to indicate the end of the sequence; and if the coded picture is the last coded picture in the entire NAL unit stream, an end of stream NAL unit may be present to indicate that the stream is ending.



A coded video sequence consists of a series of access units that are sequential in the NAL unit stream and use only one sequence parameter set. Each coded video sequence can be decoded independently of any other coded video sequence, given the necessary parameter set information, which may be conveyed in-band or out-of-band. At the beginning of a coded video sequence is an instantaneous decoding refresh (IDR) access unit. An IDR access unit contains an intra picture, which is a coded picture that can be decoded without decoding any previous pictures in the NAL unit stream, and the presence of an IDR access unit indicates that no subsequent picture in the stream will require reference to pictures prior to the intra picture it contains in order to be decoded. A NAL unit stream may contain one or more coded video sequences.

references:
https://en.wikipedia.org/wiki/Network_Abstraction_Layer#VCL_and_Non-VCL_NAL_Units

What is the purpose of ConnectivityManager?

This class answers queries about the state of network connectivity and is also responsible for notifying apps when the network state changes. It is used to:

Monitor network connections (Wi-Fi, GPRS, UMTS, etc.)
Send broadcast intents when network connectivity changes
Attempt to "fail over" to another network when connectivity to a network is lost

Provide an API that allows applications to query the coarse-grained or fine-grained state of the available networks
Provide an API that allows applications to request and select networks for their data traffic
void requestNetwork (NetworkRequest request, 
                ConnectivityManager.NetworkCallback networkCallback)
This API is the main attraction of this class.
Request a network to satisfy a set of NetworkCapabilities.

This NetworkRequest will live until released via unregisterNetworkCallback(NetworkCallback) or the calling application exits. A version of the method which takes a timeout is requestNetwork(NetworkRequest, NetworkCallback, int). Status of the request can be followed by listening to the various callbacks described in ConnectivityManager.NetworkCallback. The Network can be used to direct traffic to the network. 

Once the requested network is available, we can call bindProcessToNetwork in order to route the traffic via that network. This is especially useful when the WiFi access point is a captive portal.
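
Below is a minimal sketch of this flow (names like context are placeholders; classes are from android.net; bindProcessToNetwork assumes API level 23+):

final ConnectivityManager cm =
        (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);

NetworkRequest request = new NetworkRequest.Builder()
        .addTransportType(NetworkCapabilities.TRANSPORT_WIFI)
        .addCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET)
        .build();

// Requires the CHANGE_NETWORK_STATE permission.
cm.requestNetwork(request, new ConnectivityManager.NetworkCallback() {
    @Override
    public void onAvailable(Network network) {
        // Route this process's traffic via the newly available network,
        // e.g. a Wi-Fi access point that is a captive portal.
        cm.bindProcessToNetwork(network);
    }
});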


References:

https://developer.android.com/reference/android/net/ConnectivityManager#requestNetwork(android.net.NetworkRequest,%20android.net.ConnectivityManager.NetworkCallback)



https://android-developers.googleblog.com/2016/07/connecting-your-app-to-wi-fi-device.html

What is NAL (Network Abstraction Layer) in H.264 video coding?

The NAL is a part of the H.264/AVC and HEVC video coding standards. The main goal of the NAL is the provision of a “network-friendly” video representation addressing “conversational” (e.g., video telephony) and non-conversational applications. The NAL has achieved significant improvement in application flexibility relative to prior video coding standards.

Why is this needed? Over the evolution of telecommunication, many things have changed: transmission media have diversified (xDSL, UMTS, etc.), and video telecommunication applications have expanded from ISDN and T1/E1 services to embrace PSTN, mobile wireless networks, and LAN/Internet network delivery. To address this flexibility and customizability, the design includes a NAL that formats the Video Coding Layer (VCL) representation of the video and provides header information in a manner appropriate for conveyance by a variety of transport layers or storage media.

The NAL is designed to provide “network friendliness,” enabling simple and effective customization of the use of the VCL for a broad variety of systems. The NAL facilitates the ability to map VCL data to transport layers such as:

RTP/IP for any kind of real-time wireline and wireless Internet services
File formats, e.g. ISO MP4, for storage services
H.32X for wireline and wireless communications
MPEG-2 systems for broadcasting services

Now, what is a NAL unit?
The coded video data is organized into NAL units, each of which is effectively a packet that contains an integer number of bytes. The first byte of each H.264/AVC NAL unit is a header byte that contains an indication of the type of data in the NAL unit. For HEVC the header was extended to two bytes. All remaining bytes contain the payload data. The NAL unit structure definition specifies a generic format for use in both packet-oriented and bitstream-oriented transport systems, and a series of NAL units generated by an encoder is referred to as a NAL unit stream.
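
As a small illustrative sketch (mine, not from the referenced article), extracting the fields of the one-byte H.264/AVC NAL unit header in Java looks like this:

int header = nalUnit[0] & 0xFF;              // first byte of the NAL unit
int forbiddenZeroBit = (header >> 7) & 0x01; // must be 0 in a valid stream
int nalRefIdc = (header >> 5) & 0x03;        // reference importance (0 = discardable)
int nalUnitType = header & 0x1F;             // e.g. 5 = IDR slice, 7 = SPS, 8 = PPS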

References:
https://en.wikipedia.org/wiki/Network_Abstraction_Layer#VCL_and_Non-VCL_NAL_Units

SIP “sendonly” and “recvonly” in a bit more detail.

There are 4 media flow direction attributes:

1. sendrecv
2. recvonly
3. sendonly
4. inactive

These attributes are interpreted from the sender's perspective. The flow attributes present in the SDP portion of the SIP messaging are used to support the call hold/resume scenario, which is explained in RFC 3264 section 6.1, while remaining backwards compatible with the call hold feature of RFC 2543 section B5.

sendrecv - Used to establish a 2-way media stream.
recvonly - The SIP endpoint would only receive (listen mode) and not send media.
sendonly - The SIP endpoint would only send and not receive media.
inactive -  The SIP endpoint would neither send nor receive media.
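
For illustration, here is a minimal, hypothetical SDP fragment pair for putting a call on hold (addresses and ports are made up):

Offer, in the re-INVITE from the party initiating hold:
m=audio 49170 RTP/AVP 0
c=IN IP4 192.0.2.10
a=sendonly

Answer, in the 200 OK from the held party:
m=audio 3456 RTP/AVP 0
c=IN IP4 192.0.2.20
a=recvonly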

The Dialogic reference below includes a call flow for the hold/resume example that shows the use of these attributes (figure not reproduced here).

Below are a few points to note in this call flow:

1. 183 Session In Progress is given by the Media Gateway.
2. ACK is transferred between the IMG and GW-B and GW-A.
3. The 200 OK with a=recvonly, in response to the reINVITE with a=sendonly, is sent by the IMG.
4. With the above, one-way media flows only up to the IMG.
5. When a reINVITE is sent with c=0.0.0.0 and a=inactive, the 200 OK contains a=inactive.
6. Support for the media flow attribute is limited to the reINVITE method only. If a gateway sends a different method, such as UPDATE, with media flow attributes set for call hold or resume, the IMG will honor the UPDATE method but will not put the call on hold or resume it. (This sentence is as-is from Dialogic.)
7. Absence of a flow attribute in the Re-INVITE shall default to sendrecv.
8. Flow attributes within an INVITE message will be ignored.
9. RTP as well as RTCP will be stopped when media is put "On Hold".

references:
https://www.dialogic.com/webhelp/IMG1010/10.5.3/WebHelp/media_flow_attribute.htm

What is ChaCha20-Poly1305 cipher?

Observed this cipher when debugging a TLS issue; digging a bit into it, below seems to be the gist of it.

Below are the main processes in the TLS connection establishment

1. Key establishment (typically a Diffie-Hellman variant or RSA)
2. Authentication (the certificate type)
3. Confidentiality (a symmetric cipher)
4. Integrity (a hash function)

There are two types of ciphers typically used to encrypt data with TLS: block ciphers and stream ciphers. In a block cipher, the data is broken up into chunks of a fixed size and each block is encrypted. In a stream cipher, the data is encrypted one byte at a time. Both types of ciphers have their advantages: block ciphers are generally fast in hardware and somewhat slow in software, while stream ciphers often have fast software implementations.

AES is a fine cipher to use on most modern computers. Intel processors since Westmere in 2010 come with AES hardware support that makes AES operations effectively free. This makes it an ideal cipher choice for both our servers and for web visitors using modern desktop and laptop computers. It’s not ideal for older computers and mobile devices. Phones and tablets don’t typically have cryptographic hardware for AES and are therefore required to use software implementations of ciphers. The AES-GCM cipher can be particularly costly when implemented in software. This is less than optimal on devices where every processor cycle can cost you precious battery life. A low-cost stream cipher would be ideal for these mobile devices, but the only option (RC4) is no longer secure.

In order to provide a battery-friendly alternative to AES for mobile devices, several engineers from Google set out to find and implement a fast and secure stream cipher to add to TLS. Their choice — ChaCha20-Poly1305 — was included in Chrome 31 in November 2013, and in Chrome for Android and iOS at the end of April 2014.

references:
https://blog.cloudflare.com/do-the-chacha-better-mobile-performance-with-cryptography/
https://crypto.stackexchange.com/questions/34455/whats-the-appeal-of-using-chacha20-instead-of-aes

Tuesday, May 29, 2018

Nginx - setting up SSL AWS

Setting up an SSL cert in nginx is pretty easy. Assuming one has the certificate file and private key, below is how to do it.

login to the terminal

navigate to the nginx folder

cd /etc/nginx

copy the cert files to ssl folder at /etc/nginx/ssl

now navigate inside sites-enabled
cd /etc/nginx/sites-enabled

now open the default file in vi

have the below

server {
    listen 80;
    listen 443 ssl;
    server_name myserver.com;
    keepalive_timeout 70;
    ssl_certificate /etc/nginx/ssl/myserver.crt;
    ssl_certificate_key /etc/nginx/ssl/myserver.key;

    # SSLv3 is broken (POODLE), so stick to TLS
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5:!3DES;  # or HIGH:!aNULL:!MD5
    ssl_prefer_server_ciphers on;

    # force https redirects
    if ($scheme = http) {
        return 301 https://$server_name$request_uri;
    }

    location / {
        proxy_pass http://localhost:1337;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}


now kill and start the server
ps aux | grep nginx
sudo killall nginx
sudo nginx

That's all!

references:
https://www.nginx.com/blog/setting-up-nginx/

HTTP redirect codes - a summary

301: Permanent redirect. Clients making subsequent requests for this resource should use the new URI. Clients should not follow the redirect automatically for POST/PUT/DELETE requests.

302: Redirect for undefined reason. Clients making subsequent requests for this resource should not use the new URI. Clients should not follow the redirect automatically for POST/PUT/DELETE requests.

303: Redirect for undefined reason. Typically, 'Operation has completed, continue elsewhere.' Clients making subsequent requests for this resource should not use the new URI. Clients should follow the redirect for POST/PUT/DELETE requests, but use GET for the follow-up request.

307: Temporary redirect. Resource may return to this location at a later point. Clients making subsequent requests for this resource should use the old URI. Clients should not follow the redirect automatically for POST/PUT/DELETE requests.

304: A web server sends an HTTP 304 in response to a conditional validation request, indicating that the client's copy of a resource is still valid and that the resource in question was Not Modified since the client cached its copy. Conditional validation enables clients to ensure that they have the latest resources without the performance overhead of the server re-sending all of its resources every time they are used. A browser client sends a conditional validation request when it has a cached copy of a target resource but isn't sure if that cached resource is the latest version. You can identify conditional requests in Fiddler by looking at the headers using the Headers Inspector.

When making a conditional request, the client provides the server the Last-Modified date of its copy using the If-Modified-Since header, and provides the cached copy’s ETag identifier using the If-None-Match header:
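
For illustration, a conditional request and its 304 response could look like the below (all values are made up):

GET /styles/site.css HTTP/1.1
Host: example.com
If-Modified-Since: Tue, 01 May 2018 10:00:00 GMT
If-None-Match: "33a64df5"

HTTP/1.1 304 Not Modified
ETag: "33a64df5"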

From these readings, the recommendation is to avoid 302 if you have the choice. Many clients do not follow the spec when they encounter a 302. For temporary redirects, you should use either 303 or 307, depending on what type of behavior you want on non-GET requests. Prefer 307 to 303 unless you need the alternate behavior on POST/PUT/DELETE.

references:
https://www.telerik.com/blogs/understanding-http-304-responses

Sunday, May 27, 2018

Type of DNS records

Address (IPv4 A) Record: These are used to translate domain names into IP addresses.

AAAA (IPv6) Record: The IPv6 Address Record maps a name to an IPv6 address. Addresses in IPv6 Address Records are 128 bits long, giving a much larger address space, while those in IPv4 Address Records are 32 bits long.

Mail Exchanger (MX) Record: An MX Record identifies the email server(s) responsible for a domain name. When sending an email to user@xyz.com, the sending email server first looks up the MX Record for xyz.com to see which email server actually handles email for xyz.com (this could be mail.xyz.com or someone else's email server like mail.isp.com). Then it looks up the A Record for the email server to connect to its IP address.
An MX Record has a Preference number, indicating the order in which the email server should be used. Email servers will attempt to deliver email to the server with the lowest preference number first, and if unsuccessful continue with the next lowest and so on.

Canonical Name (CNAME) Record: CNAME Records are domain name aliases. Often computers on the Internet have multiple functions such as web server, FTP server, chat server, etc. To mask this, CNAME Records can be used to give a single computer multiple names (aliases).
Sometimes companies register multiple domain names for their brand names but still wish to maintain a single website. In such cases, a CNAME Record may be used to forward traffic to their actual website.
Example:
www.abc.in could be CNAME to www.abc.com.
The most popular use of the CNAME Record, is to provide access to a Web Server using both the standard www.yourdomainname.com and yourdomainname.com (without the www). This is usually done by adding a CNAME Record for the www name pointing to the short name [while creating an A Record for the shorter name (without www)].
CNAME Records can also be used when a computer or service needs to be renamed, to temporarily allow access through both the old and new name.

Name Server (NS) Record: NS Records identify the DNS servers responsible (authoritative) for a Zone. A Zone should contain one NS Record for each of its own DNS servers (primary and secondary). This is mostly used for Zone Transfer purposes (notify). These NS Records have the same name as the Zone in which they are located.
The most important function of the NS Record is Delegation. Delegation implies that part of a domain is delegated to other DNS servers.
You can also delegate sub-domains of your own domain name (such as subdomain.yourdomainname.com) to other DNS servers. An NS Record identifies the name of a DNS server, not the IP Address. Because of this, it is important that an A Record for the referenced DNS server exists, otherwise there may not be any way to find that DNS server and communicate with it.
If an NS Record delegates a sub-domain (subdomain.yourdomainname.com)to a DNS Server with a name in that sub-domain (ns1.subdomain.yourdomainname.com), an A Record for that server (ns1.subdomain.yourdomainname.com) must exist in the Parent Zone (yourdomainname.com). This A Record is referred to as a Glue Record, because it doesn't really belong in the Parent Zone, but is necessary to locate the DNS Server for the delegated sub-domain.

Text (TXT) Records: TXT Records provide the ability to associate some text with a domain or a sub-domain. This text is meant strictly to provide information and has no functionality as such. A TXT Record can store up to 255 characters of free-form text. This Record is generally used to convey information about the zone. Multiple TXT Records are permitted but their order is not necessarily retained.
Example:
You may add a TXT Record for yourdomainname.com with the value as This is my email server. Here, if anybody was checking the TXT Records of yourdomainname.com, would notice the above text appearing in the TXT Record.
TXT Record can be used to implement the following:
Sender Policy Framework (SPF): Sender Policy Framework is an extension to the Simple Mail Transfer Protocol (SMTP). SPF allows software to identify and reject forged addresses in the SMTP Mail From (Return-Path). SPF allows the owner of a domain to specify their mail sending policy, e.g. which mail servers they use to send mail from their domain name. The technology requires two sides to work in tandem:
The domain owner publishes this information in an TXT Record in the domain's DNS zone, and when someone else's email server receives a message claiming to come from that domain, then
the receiving server can check whether the message complies with the domain's stated policy. If, for example, the message comes from an unknown server, it can be considered a fake.
DomainKeys: DomainKeys is an email authentication system (developed at Yahoo!) designed to verify the authenticity of the email sender and the message integrity (i.e., the message was not altered during transit). The DomainKeys specification has adopted aspects of Identified Internet Mail to create an enhanced protocol called DomainKeys Identified Mail (DKIM).


Service (SRV) Record: An SRV or Service Record is a category of data in the DNS specifying information on available services. When looking up a service, a client must first look up the SRV Record for the service to see which server actually handles it. Then it looks up the Address Record for that server to connect to its IP address.
The SRV Record has a priority field similar to an MX Record's priority value. Clients always use the SRV Record with the lowest priority value first, and only fall back to other SRV Records if the connection with this Record's host fails. If a service has multiple SRV records with the same priority value, clients use the weight field to determine which host to use. The weight value is relevant only in relation to other weight values for the service, and only among SRV Records with the same priority value.
Newer Internet Protocols such as SIP (Session Initiation Protocol) and XMPP (Extensible Messaging and Presence Protocol) often require SRV support from clients.
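
For illustration, a hypothetical SRV Record for a SIP service in zone-file form; the four values after SRV are priority, weight, port, and target host:

_sip._tcp.example.com. 3600 IN SRV 10 60 5060 sipserver.example.com.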


Start of Authority (SOA) Record: Each Zone contains a single SOA Record, which holds the following values for the Zone:
Name of Primary DNS Server: The domain name of the Primary DNS server for the Zone. The Zone should contain a matching NS Record.
Mailbox of the Responsible Person: The email address of the person responsible for maintenance of the Zone.
Serial Number: Used by the Secondary DNS servers to check if the Zone has changed. If the Serial Number is higher than what the Secondary server has, a Zone Transfer will be initiated. This number is automatically increased by our DNS servers when changes to the Zone or its Records are made.
Refresh Interval: How often the Secondary DNS servers should check if changes are made to the zone.
Retry Interval: How often the Secondary DNS server should retry checking, if changes are made if the first refresh fails.
Expire Interval: How long the Zone will be valid after a refresh. Secondary servers will discard the Zone if no refresh could be made within this interval.
Minimum (Default) TTL: Used as the default TTL for new Records created within the zone. Also used by other DNS servers to cache negative responses (such as Record does not exist, etc.).

references
https://manage.bigrock.in/kb/servlet/KBServlet/faq471.html

Mongo DB Querying arrays

for exact match, we need to use the below
db.inventory.find( { tags: ["red", "blank"] } )

below finds arrays that contain both elements, regardless of order or of other elements present, i.e. not an exact match.
db.inventory.find( { tags: { $all: ["red", "blank"] } } )

below queries any array that contains at least "red"
db.inventory.find( { tags: "red" } )

below queries an array which has value > 25
db.inventory.find( { dim_cm: { $gt: 25 } } )

below queries for dim_cm arrays where the conditions can be satisfied by any combination of elements: one element can satisfy > 15 and another can satisfy < 20, or a single element can satisfy both.
db.inventory.find( { dim_cm: { $gt: 15, $lt: 20 } } )
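
in contrast, if a single array element must satisfy both conditions, $elemMatch can be used (this complements the query above):

db.inventory.find( { dim_cm: { $elemMatch: { $gt: 15, $lt: 20 } } } )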

below queries on the element at index 1 (the second element) of the array.
db.inventory.find( { "dim_cm.1": { $gt: 25 } } )

references:
https://docs.mongodb.com/manual/tutorial/query-arrays/

Saturday, May 26, 2018

Android Preferences Screen - Few quick notes:

First of all, Android gives a good boilerplate activity for showing the settings; it's very well architected.
Below are some quick tips on certain aspects of it.

- With the boilerplate code, it comes with a list of options and, when one is clicked, shows the fragment associated with it.
If we just need to show only one fragment, have the below line:

@Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setupActionBar();
        getFragmentManager().beginTransaction().replace(android.R.id.content, new NotificationPreferenceFragment()).commit();
    }

The last line forces the system to not show the default list of options, instead show the NotificationPreferenceFragment directly

For showing a CheckBox preference, the below few lines will do:


public static final String KEY_PREF_NEW_MSG_NOTIFICATIONS = "notifications_new_message";
//note that the key should match the key specified in the xml file




   
<CheckBoxPreference
        android:defaultValue="true"
        android:key="notifications_new_message"
        android:title="@string/pref_title_new_message_notifications" />


Now we need to bind the UI with the stored values. In order to do that, the below bindPreferenceSummaryToValue code will help.

 @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            addPreferencesFromResource(R.xml.pref_notification);
            setHasOptionsMenu(true);

            // Bind the summaries of EditText/List/Dialog/Ringtone preferences
            // to their values. When their values change, their summaries are
            // updated to reflect the new value, per the Android Design
            // guidelines.
            bindPreferenceSummaryToValue(findPreference(KEY_PREF_NEW_MSG_NOTIFICATIONS));
        }
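
For reference, the boilerplate's bindPreferenceSummaryToValue helper looks roughly like the sketch below (a sketch from memory, not verbatim; note that for a CheckBoxPreference the stored value is a boolean, so a getBoolean lookup would be needed instead of getString):

    private static void bindPreferenceSummaryToValue(Preference preference) {
        // Watch for value changes.
        preference.setOnPreferenceChangeListener(sBindPreferenceSummaryToValueListener);

        // Trigger the listener immediately with the preference's current value,
        // so the summary reflects the stored setting right away.
        sBindPreferenceSummaryToValueListener.onPreferenceChange(preference,
                PreferenceManager.getDefaultSharedPreferences(preference.getContext())
                        .getString(preference.getKey(), ""));
    }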


references:
https://developer.android.com/guide/topics/ui/settings

Android module level build file

Configuring these build settings allows you to provide custom packaging options, such as additional build types and product flavors, and to override settings in the main/AndroidManifest.xml file or the top-level build.gradle file.

/**
 * The first line in the build configuration applies the Android plugin for
 * Gradle to this build and makes the android block available to specify
 * Android-specific build options.
 */

apply plugin: 'com.android.application'

/**
 * The android block is where you configure all your Android-specific
 * build options.
 */

android {

  /**
   * compileSdkVersion specifies the Android API level Gradle should use to
   * compile your app. This means your app can use the API features included in
   * this API level and lower.
   */

  compileSdkVersion 26

  /**
   * buildToolsVersion specifies the version of the SDK build tools, command-line
   * utilities, and compiler that Gradle should use to build your app. You need to
   * download the build tools using the SDK Manager.
   *
   * If you're using Android plugin 3.0.0 or higher, this property is optional—
   * the plugin uses a recommended version of the build tools by default.
   */

  buildToolsVersion "27.0.3"

  /**
   * The defaultConfig block encapsulates default settings and entries for all
   * build variants, and can override some attributes in main/AndroidManifest.xml
   * dynamically from the build system. You can configure product flavors to override
   * these values for different versions of your app.
   */

  defaultConfig {

    /**
     * applicationId uniquely identifies the package for publishing.
     * However, your source code should still reference the package name
     * defined by the package attribute in the main/AndroidManifest.xml file.
     */

    applicationId 'com.example.myapp'

    // Defines the minimum API level required to run the app.
    minSdkVersion 15

    // Specifies the API level used to test the app.
    targetSdkVersion 26

    // Defines the version number of your app.
    versionCode 1

    // Defines a user-friendly version name for your app.
    versionName "1.0"
  }
/**
   * The buildTypes block is where you can configure multiple build types.
   * By default, the build system defines two build types: debug and release. The
   * debug build type is not explicitly shown in the default build configuration,
   * but it includes debugging tools and is signed with the debug key. The release
   * build type applies Proguard settings and is not signed by default.
   */

  buildTypes {

    /**
     * By default, Android Studio configures the release build type to enable code
     * shrinking, using minifyEnabled, and specifies the Proguard settings file.
     */

    release {
        minifyEnabled true // Enables code shrinking for the release build type.
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    }
  }


/**
   * The productFlavors block is where you can configure multiple product
   * flavors. This allows you to create different versions of your app that can
   * override the defaultConfig block with their own settings. Product flavors are
   * optional, and the build system does not create them by default. This example
   * creates a free and paid product flavor. Each product flavor then specifies
   * its own application ID, so that they can exist on the Google Play Store, or
   * an Android device, simultaneously.
   *
   * If you're using Android plugin 3.0.0 or higher, you need to also declare
   * and assign each flavor to a flavor dimension. To learn more, read the
   * migration guide.
   */

  productFlavors {
    free {
      applicationId 'com.example.myapp.free'
    }

    paid {
      applicationId 'com.example.myapp.paid'
    }
  }

  /**
   * The splits block is where you can configure different APK builds that
   * each contain only code and resources for a supported screen density or
   * ABI. You'll also need to configure your build so that each APK has a
   * different versionCode.
   */

  splits {
    // Settings to build multiple APKs based on screen density.
    density {

      // Enable or disable building multiple APKs.
      enable false

      // Exclude these densities when building multiple APKs.
      exclude "ldpi", "tvdpi", "xxxhdpi", "400dpi", "560dpi"
    }
  }
}

/**
 * The dependencies block in the module-level build configuration file
 * only specifies dependencies required to build the module itself.
 *
 * If you're using Android plugin 3.0.0 or higher, you should
 * use the new dependency configurations, which help you improve build speeds by
 * restricting which dependencies leak their APIs to other modules.
 */

dependencies {
    compile project(":lib")
    compile 'com.android.support:appcompat-v7:27.1.1'
    compile fileTree(dir: 'libs', include: ['*.jar'])
}

references:
https://developer.android.com/studio/build/#settings-file

Friday, May 25, 2018

Android Top-Level Build File, Project Wide properties

The Gradle settings file, located in the root of the project directory, tells Gradle which modules it should include when building your app.

for e.g. include ':app', ':appnotifications', ':appmessaging', ':applib'
The top-level build.gradle file, located in the root project directory, defines build configurations that apply to all modules in your project.

By default, the top-level build file uses the buildscript block to define the Gradle repositories and dependencies that are common to all modules in the project.
Below is from Google documentation

/**
 * The buildscript block is where you configure the repositories and
 * dependencies for Gradle itself—meaning, you should not include dependencies
 * for your modules here. For example, this block includes the Android plugin for
 * Gradle as a dependency because it provides the additional instructions Gradle
 * needs to build Android app modules.
 */

buildscript {

    /**
     * The repositories block configures the repositories Gradle uses to
     * search or download the dependencies. Gradle pre-configures support for remote
     * repositories such as JCenter, Maven Central, and Ivy. You can also use local
     * repositories or define your own remote repositories. The code below defines
     * JCenter as the repository Gradle should use to look for its dependencies.
     *
     * New projects created using Android Studio 3.0 and higher also include
     * Google's Maven repository.
     */

    repositories {
        google()
        jcenter()
    }

    /**
     * The dependencies block configures the dependencies Gradle needs to use
     * to build your project. The following line adds Android plugin for Gradle
     * version 3.1.0 as a classpath dependency.
     */

    dependencies {
        classpath 'com.android.tools.build:gradle:3.1.0'
    }
}

/**
 * The allprojects block is where you configure the repositories and
 * dependencies used by all modules in your project, such as third-party plugins
 * or libraries. However, you should configure module-specific dependencies in
 * each module-level build.gradle file. For new projects, Android Studio
 * includes JCenter and Google's Maven repository by default, but it does not
 * configure any dependencies (unless you select a template that requires some).
 */

allprojects {
   repositories {
       google()
       jcenter()
   }
}

Configure project-wide properties

buildscript {...}

allprojects {...}

// This block encapsulates custom properties and makes them available to all
// modules in the project.
ext {
    // The following are only a few examples of the types of properties you can define.
    compileSdkVersion = 26
    // You can also create properties to specify versions for dependencies.
    // Having consistent versions between modules can avoid conflicts with behavior.
    supportLibVersion = "27.1.1"
    ...
}

To use these variables from the individual modules, the below can be done:

android {
  // Use the following syntax to access properties you defined at the project level:
  // rootProject.ext.property_name
  compileSdkVersion rootProject.ext.compileSdkVersion
  ...
}
...
dependencies {
    compile "com.android.support:appcompat-v7:${rootProject.ext.supportLibVersion}"
    ...
}

This is a good note from Google documentation

Note: Although Gradle allows you to define project-wide properties at the module level, you should avoid doing so because it causes the modules that share those properties to be coupled. Module coupling makes it more difficult to later export a module as a stand-alone project and effectively prevents Gradle from utilizing parallel project execution to speed up multi-module builds.

references:
https://developer.android.com/studio/build/#settings-file

Wednesday, May 23, 2018

Why does a phone number have a 0 prefix?

This 0 comes from the standard national formatting for a country, as 0 is the national dialling prefix for almost all countries. In India, a call from the native dialler works with or without the national dialling prefix, and this is true for most countries.

Below given the spec from android, 0 comes with NATIONAL format.
/**
 * INTERNATIONAL and NATIONAL formats are consistent with the definition in ITU-T Recommendation
 * E123. For example, the number of the Google Switzerland office will be written as
 * "+41 44 668 1800" in INTERNATIONAL format, and as "044 668 1800" in NATIONAL format.
 * E164 format is as per INTERNATIONAL format but with no formatting applied, e.g.
 * "+41446681800". RFC3966 is as per INTERNATIONAL format, but with all spaces and other
 * separating symbols replaced with a hyphen, and with any phone number extension appended with
 * ";ext=". It also will have a prefix of "tel:" added, e.g. "tel:+41-44-668-1800".
 *
 * Note: If you are considering storing the number in a neutral format, you are highly advised to
 * use the PhoneNumber class.
 */
public enum PhoneNumberFormat {
  E164,
  INTERNATIONAL,
  NATIONAL,
  RFC3966
}
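
For illustration, formatting a number with the libphonenumber library (assuming it is on the classpath) shows that the 0 appears only in NATIONAL format:

PhoneNumberUtil util = PhoneNumberUtil.getInstance();
// parse() throws the checked NumberParseException, omitted here for brevity.
Phonenumber.PhoneNumber number = util.parse("+41446681800", "CH");
System.out.println(util.format(number, PhoneNumberUtil.PhoneNumberFormat.NATIONAL));      // 044 668 1800
System.out.println(util.format(number, PhoneNumberUtil.PhoneNumberFormat.INTERNATIONAL)); // +41 44 668 1800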

Friday, May 18, 2018

Secure domain that is with BigRock while the resource is on AWS

1. The SSL certificate to be installed on AWS EC2.
2. Need to buy the certificate from BigRock, and for doing this, need to provide a CSR to BigRock.
3. Once this is done, BigRock will send the certificate and certificate chain, which need to be uploaded onto the instance; then enable mod_ssl and modify httpd.conf.
4. In BigRock, select the domain and add an A record with the instance EIP. This will point the domain to the instance.

The digicert link in the reference section has helped a bit. Following the document, the first step was to

1. Create a CSR using openssl. The command goes as below; this can be run on the AWS instance's CLI.

openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr

2. The above will ask several questions. When prompted, type your organizational information, beginning with your geographic information.

3. Open the .csr file that you created with a text editor.

4. Copy the text, including the -----BEGIN NEW CERTIFICATE REQUEST----- and -----END NEW CERTIFICATE REQUEST----- tags, and paste it into the BigRock order form.

On the BigRock console, it was as below (screenshots not reproduced here).

The Manage DNS option listed the domains; to the domain, the CSR that was generated was added, and it gave a confirmation (screenshot not reproduced here).

references:
https://www.digicert.com/csr-creation-ssl-installation-aws-openssl.htm#create-csr
https://stackoverflow.com/questions/37291810/how-do-i-turn-my-http-website-to-https

What is called a parameter set?

A parameter set contains information that is expected to rarely change and that applies to the decoding of a large number of VCL NAL units. There are two types of parameter sets:

1. Sequence parameter set (SPS): applies to a series of consecutive coded video pictures, called a coded video sequence.
2. Picture parameter set (PPS): applies to the decoding of one or more individual pictures within a coded video sequence.

The parameter set mechanism avoids repeating infrequently changing information in each transmission. Each VCL NAL unit contains an identifier that refers to the content of the relevant picture parameter set, and each picture parameter set contains an identifier that refers to the content of the relevant sequence parameter set. In this manner, a small amount of data can be used to refer to a larger amount of information without repeating that information with each NAL unit. Sequence and picture parameter sets can be sent well ahead of the NAL units they correspond to, either in-band or out-of-band.

References:
https://en.wikipedia.org/wiki/Network_Abstraction_Layer#VCL_and_Non-VCL_NAL_Units

What is ACK and NACK in RTCP feedback

ACK Mode
Used for point-to-point communications. Unicast address should be used in the session description.
From the RFC,

For unidirectional as well as bi-directional communication between  two parties, 2.5% of the RTP session bandwidth is available for RTCP  traffic from the receivers including feedback.  For a 64-kbit/s stream this yields 1,600 bit/s for RTCP.  If we assume an average of 96 bytes (=768 bits) per RTCP packet, a receiver can report 2 events per second back to the sender.  If acknowledgements for 10 events are collected in each FB message, then 20 events can be acknowledged per  second.  At 256 kbit/s, 8 events could be reported per second; thus, the ACKs may be sent in a finer granularity (e.g., only combining three ACKs per FB message).

From 1 Mbit/s upwards, a receiver would be able to acknowledge each individual frame (not packet!) in a 30-fps video stream.

ACK strategies MUST be defined to work properly with these bandwidth limitations.  An indication whether or not ACKs are allowed for a session and, if so, which ACK strategy should be used, MAY be conveyed by out-of-band mechanisms, e.g., media-specific attributes in a session description using SDP.

NACK Mode
Used for group sizes larger than 2; it is not strictly restricted to such groups and can also be used for point-to-point communications.

In simple terms, the number of events to report per second may be derived from the packet loss rate and the sender's rate of transmitting packets. From these two values, the allowable group sizes for the immediate feedback mode can be calculated.

From the RFC,

Let N be the average number of events to be reported per interval   by a receiver, B the RTCP bandwidth fraction for this particular receiver, and R the average RTCP packet size, then the receiver operates in Immediate Feedback mode as long as N<=B*T/R.

And this below is an example from RFC

Example: If a 256-kbit/s video with 30 fps is transmitted through a network with an MTU size of some 1,500 bytes, then, in most cases, each frame would fit in into one packet leading to a packet rate of  30 packets per second.  If 5% packet loss occurs in the network  (equally distributed, no inter-dependence between receivers), then each receiver will, on average, have to report 3 packets lost each two seconds.  Assuming a single sender and more than three receivers, this yields 3.75% of the RTCP bandwidth allocated to the receivers and thus 9.6 kbit/s.  Assuming further a size of 120 bytes for the average compound RTCP packet allows 10 RTCP packets to be sent per second or 20 in two seconds.  If every receiver needs to report three  lost packets per two seconds, this yields a maximum group size of 6-7 receivers if all loss events are reported.  The rules for  transmission of Early RTCP packets should provide sufficient flexibility for most of this reporting to occur in a timely fashion.


References:
https://tools.ietf.org/html/rfc4585

Android app bundles

This is a new upload format that includes all of the app's compiled code and resources but defers APK generation and signing to Google Play.

Why is this used and how is it helpful?
Today, with a plain APK, resources are packaged for all device models and form factors, which increases the download size for every device. With app bundles, Google's new serving model, called Dynamic Delivery, uses the app bundle to generate and serve optimized APKs for each user's device configuration.

The fundamental component of Dynamic Delivery is the split APK mechanism available on Android 5.0 (API level 21) and higher. A split APK is similar to a regular APK; however, the Android system is able to treat multiple installed split APKs as a single app. With split APKs, Google Play can break up a large app into smaller, discrete packages that are installed on a user's device as required.

Below are the three types of split APKs:

Base APK:
This is the first APK that is installed when downloading the app. This APK contains code and resources that all other split APKs can access, and provides the basic functionality of the app.

Configuration APKs: contain native libraries and resources for a specific screen density, CPU architecture, or language. For most projects, Google Play generates the configuration APKs for the developer from the code and resources included in the app bundle.

Dynamic feature APKs:
Contain code and resources that are not required at the initial launch of the app.

To note, when using this feature, Google Play generates the split APKs and delivers the appropriate ones to the device when the app is downloaded. On devices running KitKat (4.4) or below, a single APK is delivered instead.

The bundletool utility that ships with the Android SDK can be used for verifying the app bundle.
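
For example, building and installing an APK set from a bundle goes roughly like the below (a sketch; exact invocation depends on the bundletool version):

$ java -jar bundletool.jar build-apks --bundle=myapp.aab --output=myapp.apks
$ java -jar bundletool.jar install-apks --apks=myapp.apks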

Bundled apps also aim to support the instant apps feature, which lets the user get an instant view of the app without installing it.

References:
https://developer.android.com/guide/app-bundle/

The Android build process, Gradle & Android Plugin for Gradle.

Gradle for Android is a build toolkit to automate and manage the build process while allowing custom build configurations to be defined. Each build configuration can define its own code and resources while reusing the parts common to all versions of the app. The Android plugin for Gradle works with the build toolkit to provide processes and configurable settings that are specific to building and testing Android apps. Below is the overall process of building an Android app module:

1. Compilers convert the source code into DEX files.
2. The APK packager combines the DEX files and compiled resources into a single APK.
3. The APK packager signs the APK.
4. The APK packager uses the zipalign tool to optimize the app to use less memory when running on a device.

Below are the aspects into the build that can be modified when building with Gradle and the Android plugin for gradle.

1) Build types: e.g. release or debug.
2) Product flavors: different flavors such as paid or free. Each flavor can use separate resources and code while reusing the ones that are common across all versions of the app.
3) Build variants: the cross product of build type and product flavor; this is the configuration Gradle uses to build the app. Build variants are not configured explicitly; instead, they are formed as a result of defining the build types and product flavors.
4) Manifest entries: one can specify values for some properties of the manifest file in the build variant configuration. These build values override the existing values in the manifest file. This is useful for building multiple APKs of a module with different application names, min SDK versions, or target SDK versions.
5) Dependencies: the build system manages dependencies from the local file system and from remote repositories.
6) Signing: enables specifying signing settings in the build configuration. The build system signs the debug build with a default key and certificate using known credentials to avoid a password prompt at build time, and signs the release version using the release signing configuration specified in the build.
7) Proguard: the build system enables specifying different Proguard rules for each build variant, and can run Proguard to shrink and obfuscate class files during the build process.
8) Multiple APK support: the build system enables automatically building different APKs that each contain only the code and resources needed for a specific screen density or application binary interface (ABI).


References:
https://developer.android.com/studio/build/

Tuesday, May 15, 2018

What is ISO-2 Country code?

ISO 3166 is the specification that defines the country codes used for various countries; there is an extension that gives the details of sub-regions (subdivisions) as well. The ISO Online Browsing Platform, linked below, can be used to search these codes (snapshot not reproduced here).

references:
https://docs.oracle.com/cd/E13214_01/wli/docs92/xref/xqisocodes.html
https://www.iso.org/iso-3166-country-codes.html
https://www.iso.org/obp/ui/#search/code/

Friday, May 11, 2018

How does an RTCP packet structure look like?

RTCP (RTP Control Protocol) is an application-layer protocol for controlling RTP. Since RTP typically runs over UDP, it makes use of RTCP to help ensure packets can be reconstructed in order and to convey information about the RTP packets.
A block diagram of the various fields in an RTCP packet is not reproduced here; the fields are described below.

Version: 2 bits:
The version of RTP. This value is the same in RTP and RTCP packets.

P, padding bit:
If set, the packet contains some additional padding bytes at the end which are not part of the control info. The last byte of the padding is a count of how many padding bytes should be ignored. Padding is needed by encryption algorithms with fixed block sizes. In a compound RTCP packet, padding should only be required on the last individual packet, because the compound packet is encrypted as a whole.

Count:
This field is 5 bits (0-31) and contains the number of reception report blocks contained in the RTCP packet.

Type, 8 bits: this indicates the RTCP packet type. The various RTCP packet types include FIR (full intra-frame request), NACK (negative acknowledgement), SMPTETC (SMPTE time-code mapping), IJ (extended inter-arrival jitter report), SR (sender report), RR (receiver report), SDES (source description), BYE (goodbye), APP (application-defined), RTPFB (generic RTP feedback), PSFB (payload-specific feedback), XR (RTCP extension/extended report), AVB (AVB RTCP packet), RSI (receiver summary information), etc.

Length (16 bits): the length of the RTCP packet in 32-bit words minus one, including the header and any padding.
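
As a small illustrative sketch (mine, not from the referenced page), pulling these fields out of the first four header bytes in Java:

int b0 = packet[0] & 0xFF;
int version = (b0 >> 6) & 0x03; // always 2
int padding = (b0 >> 5) & 0x01; // padding bit
int count = b0 & 0x1F;          // reception report count
int type = packet[1] & 0xFF;    // e.g. 200 = SR, 201 = RR, 202 = SDES, 203 = BYE
int length = ((packet[2] & 0xFF) << 8) | (packet[3] & 0xFF); // in 32-bit words minus one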

references:
http://www.networksorcery.com/enp/protocol/rtcp.htm

Wednesday, May 9, 2018

Some very basics of VoIP - Part I

Happened to read the Cisco article and below some notes and some additions from other readings.

Below are the sections to highlight

1. Converting analog to digital form

Human speech frequency is anywhere between roughly 200/300 Hz and 2700/2800 Hz. Telephony equipment supports a maximum of 4 kHz.

Below is a description of the Nyquist theorem. According to it, the sampling frequency should be at least twice the maximum frequency of the analog signal.

Suppose the highest frequency component, in hertz, for a given analog signal is fmax. According to the Nyquist Theorem, the sampling rate must be at least 2fmax, or twice the highest analog frequency component. The sampling in an analog-to-digital converter is actuated by a pulse generator (clock). If the sampling rate is less than 2fmax, some of the highest frequency components in the analog input signal will not be correctly represented in the digitized output. When such a digital signal is converted back to analog form by a digital-to-analog converter, false frequency components appear that were not in the original analog signal. This undesirable condition is a form of distortion called aliasing.

Analysing a single byte, which represents one sample:

e.g. 0011 1001

1. The first bit from the left (0) represents whether the signal is above or below the X axis; in other words, negative or positive.
2. The 2nd-4th bits from the left, i.e. 011, represent the segment (window) on the Y axis, e.g. between 1-2 or 2-3, etc.
3. The 5th-8th bits represent the actual value within the window selected by the 2nd-4th bits.

An 8K sampling rate requires 8K * 8 (bits per sample) = 64 kbit/sec. 64 kbit/sec is a lot of bandwidth, so most WAN applications use the G.729 codec, which requires only 8 kbps. This is basically codec compression.

Some of the terms are:
MOS (Mean Opinion Score) : This is the score used to map the quality of voice produced at destination end as compared to source and its value ranges from 0 to 5.
For e.g. G.711 MOS is 4.1, G.729 MOS is 3.92. ILBC is 4.1

Some calculation is like this:
Codec Payload size per packet = {(codec bit rate/sec * sample size)/1000} bits
Sample size here means the packetization interval, i.e. the length of audio clipped from the 1-second analog wave into one packet, e.g. 20 ms or 30 ms.
This makes the total packet size as below:

Total packet size = Codec Payload + 12 Byte (RTP) + 8 Byte (UDP) + 20 Byte (IP) + 4 Byte (FR)
12 Byte (RTP) + 8 Byte (UDP) => Layer 4 header size
20 Byte (IP) => Layer 3 header size
4 Byte (FR) => Layer 2 Header size

Now, how many packets are required to transmit 1 second of data?

- in 1 second there are 1000 ms.
- We take samples every 20 ms and packetise it to trasmit over the network
- This means that to send 1000 ms data, we need 1000/20 = 50 packets

Now, what is the size of 1 packet?

We have one packet containing 20 ms of data by default.

We take 20 ms worth of samples and put them in one packet. How many samples will be present in 20 ms? Each sample requires 8 bits in binary, i.e. one byte.
8000 samples/second at 50 packets/second means each 20 ms packet carries 8000/50 = 160 samples. Each sample requires 8 bits, so each 20 ms packet carries 160 * 8 = 1280 bits, i.e. 160 bytes of payload.
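
Putting the payload formula and the header sizes together, here is a worked example (standard figures, recomputed by hand):

G.729 at 20 ms: payload = (8000 * 20)/1000 = 160 bits = 20 bytes
Total packet  = 20 + 12 (RTP) + 8 (UDP) + 20 (IP) + 4 (FR) = 64 bytes = 512 bits
Bandwidth     = 512 bits * 50 packets/sec = 25.6 kbps

G.711 at 20 ms: payload = (64000 * 20)/1000 = 1280 bits = 160 bytes
Total packet  = 160 + 44 = 204 bytes = 1632 bits
Bandwidth     = 1632 bits * 50 packets/sec = 81.6 kbps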

RTP & RTCP: a good comparison is that of a bodyguard. RTCP is the bodyguard that accompanies RTP and helps RTP packets be re-arranged into the right order.

Delay: Voice traffic is very sensitive to delay. Cisco recommends a maximum of 200 ms of delay between source and destination, while ITU-T recommends a maximum of 150 ms.

Compression: The cRTP protocol compresses the headers. It is designed to reduce the IP/UDP/RTP headers to two bytes for most packets where no UDP checksums are being sent, or four bytes with checksums. It follows RFC 2508, which mainly depends on RFC 1144. cRTP specifies two formats:

- Compressed RTP (CR) => Used when IP, UDP, RTP headers remain consistent.
- Compressed UDP (CU) => used when there are large changes in the RTP timestamp or when the RTP payload type changes. The IP and UDP headers are compressed; the RTP header is not.

PRI / BRI: PRI is typically used by medium-to-large enterprises with digital PBX telephone systems to provide digital access to the PSTN. The B channels may be used flexibly and re-assigned when necessary to meet special needs such as video conferences.


references:
https://learningnetwork.cisco.com/blogs/vip-perspectives/2014/11/20/first-date-with-voip
https://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-link-efficiency-mechanisms/22308-compression-qos.html
http://what-when-how.com/cisco-voice-over-ip-cvoice/voip-fundamentals-introducing-voice-over-ip-networks-part-2/

Monday, May 7, 2018

How Does the LAME Android SDK work

How do we encode and store a file in MP3 format? There are two ways to do it:

1. use a wav recorder and convert the byte buffer from the wav recorder into mp3 buffer on the fly
2. Read a wav file and convert it into mp3

#2 has the huge disadvantage that the original WAV file takes on the order of 50-100 MB for a 1-hour recording even at just 8 kHz sampling.
#1 is efficient, but the on-the-fly encoder needs to be efficient and fast.

The LAME MP3 encoder helps one achieve both. Below is how we do on-the-fly encoding:

int inSamplerate = 8000;
//get the minimum buffer value. Here we are considering only 8K samples
minBuffer = AudioRecord.getMinBufferSize(inSamplerate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);

//now create the AudioRecorder object.
  audioRecord = new AudioRecord(
                MediaRecorder.AudioSource.MIC, inSamplerate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuffer * 2);


//now create a buffer to read in the bytes from recorder
short[] buffer = new short[inSamplerate * 2 * 5];

//now create the mp3 buffer.  'mp3buf' should be at least 7200 bytes long to hold all possible emitted data.
byte[] mp3buffer = new byte[(int) (7200 + buffer.length * 2 * 1.25)];

//create a file output stream for the path at which  the mp3 file should be written. 
 try {
            outputStream = new FileOutputStream(new File(filePath));
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }

//now create the AndroidLame object
androidLame = new LameBuilder()
                .setInSampleRate(inSamplerate)
                .setOutChannels(1)
                .setOutBitrate(32)
                .setOutSampleRate(inSamplerate)
                .build();

// now start the recording
 audioRecord.startRecording();

//now loop in and read the byte buffer.
while (isRecording) {
            bytesRead = audioRecord.read(buffer, 0, minBuffer);
            if (bytesRead > 0) {
                int bytesEncoded = androidLame.encode(buffer, buffer, bytesRead, mp3buffer);
                if (bytesEncoded > 0) {
                    try {
                        outputStream.write(mp3buffer, 0, bytesEncoded);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }


//now flush
int outputMp3buf = androidLame.flush(mp3buffer);

if (outputMp3buf > 0) {
         try {
                outputStream.write(mp3buffer, 0, outputMp3buf);
                outputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
}

//now stop and release the recorder, and close the AndroidLame as well. 
 audioRecord.stop();
 audioRecord.release();
 androidLame.close();


references:
https://github.com/naman14/TAndroidLame 

How does the WAV format look like?

To write PCM samples captured from the audio recorder to a WAV file, the below can be done:

1. Create a Random access file.
2. write the wav header, which is as below

rFile.writeBytes("RIFF"); => This is chunk descriptor . 4 bytes
rFile.writeInt(0); => Chunk Size , 4 bytes => total size followed after this field including this => CHUNKSZ1
rFile.writeBytes("WAVE"); => 4 bytes
rFile.writeBytes("fmt "); => 4 bytes (note the space in here)
rFile.writeInt(Integer.reverseBytes(16)); => sub chunk size, 4 bytes
rFile.writeShort(Short.reverseBytes((short) 1)); => Audio format (PCM) => 2 bytes 
rFile.writeShort(Short.reverseBytes(channels)); => number of channels => 2 bytes
rFile.writeInt(Integer.reverseBytes(rate)); => Sampling rate => 4 bytes
rFile.writeInt(Integer.reverseBytes(rate * (resolution / 8) * mChannels)); =>  total bit rate (4 bytes)
rFile.writeShort(Short.reverseBytes((short) (channels * resolution / 8))); => block align (4 bytes)
rFile.writeShort(Short.reverseBytes(resolution)); => bits per sample (16 bit or 8 bit) , 2 byte
rFile.writeBytes("data"); => data chunk CHUNKSZ2
rFile.writeInt(0); => this is sub chunk size (the size of the remaining payload, which is also same as the total size followed by this)

Now, if we have the PCM data, in order to put it in WAV format, append the PCM data to the random access file and then edit the CHUNKSZ1 and CHUNKSZ2 values to reflect the total size in bytes, e.g. like this:

// Write size to CHUNKSZ1
rFile.seek(4); // Write size to RIFF header
rFile.writeInt(Integer.reverseBytes(36 + payloadSz)); => 36 being the size of the remaining header bytes after this field

rFile.seek(40); // Write size to CHUNKSZ2
rFile.writeInt(Integer.reverseBytes(payloadSz));

3. Close the Random access file.

Now we can do funny things like repeating whatever was recorded multiple times. In that case, the below can be done: copy the contents from byte 44 onwards, append them to the random access file, then come back and edit the payload size values again.

 RandomAccessFile wavFile = new RandomAccessFile(wavFilePath,"rw");
 long totalLength = wavFile.length();
 FileLogger.logInfo("Size of the wav file is "+totalLength);
 wavFile.seek(40);//get past the WAV headers
 //read the current size
 int subChunkSize = Integer.reverseBytes(wavFile.readInt());
 FileLogger.logInfo("subChunkSize is :"+subChunkSize);
 //now read the sub chunk size
 //buffer size
 int BUFF_SIZE = 1024;
 byte[] buffer = new byte[BUFF_SIZE];
 int readOffset = 44; // wav header end
 int countOfBytes  = -1;
 int debugCountOfByes = 0; // to analyze how many bytes were copied
 ByteArrayOutputStream bout = new ByteArrayOutputStream();
 wavFile.seek(readOffset);
 while ((countOfBytes = wavFile.read(buffer,0,BUFF_SIZE)) != -1 ){
  bout.write(buffer, 0, countOfBytes); // only write the bytes actually read
  readOffset += countOfBytes;
  debugCountOfByes += countOfBytes;
}
FileLogger.logInfo("Copied "+debugCountOfByes+" from file");
byte[] bytes = bout.toByteArray();
//now write this multiple times
for (int i = 0; i < count; i++){
     wavFile.write(bytes,44,bytes.length - 44);
}
int totalNewSize = subChunkSize + (count * bytes.length);
//now alter the wav header for the size values
wavFile.seek(4); // Write size to RIFF header
wavFile.writeInt(Integer.reverseBytes(36 + totalNewSize));

wavFile.seek(40); // Write size to Subchunk2Size field
wavFile.writeInt(Integer.reverseBytes(totalNewSize));
wavFile.close();
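To sanity-check the patched header, the size fields can be read back and compared against the file length: in a well-formed file, the RIFF chunk size equals the file length minus 8 (the "RIFF" id and the size field itself are not counted), and also equals 36 plus the data size. A minimal sketch:

RandomAccessFile check = new RandomAccessFile(wavFilePath, "r");
check.seek(4);
int riffSize = Integer.reverseBytes(check.readInt()); // CHUNKSZ1
check.seek(40);
int dataSize = Integer.reverseBytes(check.readInt()); // CHUNKSZ2
long fileLen = check.length();
check.close();
// expect: fileLen == riffSize + 8 and riffSize == 36 + dataSize
FileLogger.logInfo("fileLen=" + fileLen + " riffSize=" + riffSize + " dataSize=" + dataSize);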

references:
http://soundfile.sapp.org/doc/WaveFormat/

Wednesday, May 2, 2018

What is LAME?

LAME is an MPEG Audio Layer III (MP3) encoder licensed under the LGPL.
Kudos to Mike Cheng, who started the development back in 1998; in its current form, the project is led by Mark Taylor.
Today LAME is considered the best MP3 encoder at mid-to-high bit rates and at VBR. Many projects use LAME, including the famous WinAMP and many native operating systems.
Source is available for download at http://lame.sourceforge.net/download.php
LAME is distributed only in source code form. It compiles on many operating systems, including Windows, DOS, GNU/Linux, Mac OS X, *BSD, Solaris, HP-UX, Tru64 Unix, AIX, Irix, NeXTStep, and so on.

To build the source code, one can follow the steps below:

- Download the source, extract it, and navigate to the folder.
- Execute the following:

$ ./configure
$ make
$ sudo make install


As part of the ./configure script execution, Makefiles are created for all the sub-projects; the ones to notice are libmp3lame (including its i386 and vector variants), the Dll, ACM, Mac OS X and Visual Studio targets:

config.status: creating Makefile
config.status: creating libmp3lame/Makefile
config.status: creating libmp3lame/i386/Makefile
config.status: creating libmp3lame/vector/Makefile
config.status: creating frontend/Makefile
config.status: creating mpglib/Makefile
config.status: creating doc/Makefile
config.status: creating doc/html/Makefile
config.status: creating doc/man/Makefile
config.status: creating include/Makefile
config.status: creating Dll/Makefile
config.status: creating misc/Makefile
config.status: creating dshow/Makefile
config.status: creating ACM/Makefile
config.status: creating ACM/ADbg/Makefile
config.status: creating ACM/ddk/Makefile
config.status: creating ACM/tinyxml/Makefile
config.status: creating lame.spec
config.status: creating mac/Makefile
config.status: creating macosx/Makefile
config.status: creating macosx/English.lproj/Makefile
config.status: creating macosx/LAME.xcodeproj/Makefile
config.status: creating vc_solution/Makefile
config.status: creating config.h
config.status: executing depfiles commands
config.status: executing libtool commands
The source in zip form is only 1.3 MB.

But the make command resulted in the error below:

single_module -Wl,-exported_symbols_list,.libs/libmp3lame-symbols.expsym
Undefined symbols for architecture x86_64:
  "_lame_init_old", referenced from:
     -exported_symbol[s_list] command line option
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [libmp3lame.la] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

Tried to compile using Xcode, but that failed saying config.h was not found.
After giving the correct path reference to config.h, the error became similar to the one from the terminal compilation, below:

Undefined symbols for architecture x86_64:
  "_init_xrpow_core_sse", referenced from:
      _init_xrpow_core_init in quantize.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

So the issue is in the linking phase; the compilation itself is fine.

Looked at the ld version output, shown below; all of the relevant architectures are supported.

 ld -v
@(#)PROGRAM:ld  PROJECT:ld64-305
configured to support archs: armv6 armv7 armv7s arm64 i386 x86_64 x86_64h armv6m armv7k armv7m armv7em (tvOS)
LTO support using: LLVM version 9.0.0, (clang-900.0.39.2) (static support for 21, runtime is 21)
TAPI support using: Apple TAPI version 900.0.15 (tapi-900.0.1

Now, commenting out the lines below apparently produces a successful build:

//#if defined(HAVE_XMMINTRIN_H)
//    if (gfc->CPU_features.SSE)
//        gfc->init_xrpow_core = init_xrpow_core_sse;
//#endif
//#ifndef HAVE_NASM
//#ifdef MIN_ARCH_SSE
//    gfc->init_xrpow_core = init_xrpow_core_sse;
//#endif
//#endif

So, the problem is really init_xrpow_core_sse. Then I stumbled upon the last link in the references below; that author hit the same issue and attributes it to the --host parameter. On this machine, uname -a reports:

Darwin myssytemcreds 17.2.0 Darwin Kernel Version 17.2.0: Fri Sep 29 18:27:05 PDT 2017; root:xnu-4570.20.62~3/RELEASE_X86_64 x86_64

From the generated Makefile, the host settings looked like below:

host = x86_64-apple-darwin17.2.0
host_alias =
host_cpu = x86_64
host_os = darwin17.2.0
host_vendor = apple



references:
http://lame.sourceforge.net/index.php
https://gist.github.com/trevorsheridan/1948448
https://stackoverflow.com/questions/21255976/how-to-solve-undefined-symbol-init-xrpow-core-sse-when-linking-lame-mp3-encode

Tuesday, May 1, 2018

What is PM2 ? - Some quick learning notes

The definition on the official site reads as below.

PM2 empowers your process management workflow. It allows you to fine-tune the behavior, options, environment variables, logs files of each application via a process file. It’s particularly useful for micro-service based applications.

Supported configuration formats are JavaScript, JSON and YAML.

What mattered to me most as part of this learning is:

- The ability to keep an app running even if it crashes for some reason
- The ability to get at the log files

To install it globally:
npm install pm2@latest -g

Basic usage is:
pm2 start app.js

It seems we can create a configuration file to manage multiple applications, like below.

process.yml

apps:
  - script   : app.js
    instances: 4
    exec_mode: cluster
  - script : worker.js
    watch  : true
    env    :
      NODE_ENV: development
    env_production:
      NODE_ENV: production


pm2 start process.yml
The next useful thing about PM2 is its role as a process management tool.

PM2 manages application states, so it can start, stop, restart and delete processes.
Some useful commands are:

pm2 start app.js --name "my-api"
pm2 start web.js --name "web-interface"
pm2 restart web-interface
pm2 stop web-interface

To list all running processes: pm2 list
To show details of a single process: pm2 show 0
The listing can also be sorted:
pm2 list --sort name:desc
pm2 list --sort [name|id|pid|memory|cpu|status|uptime][:asc|desc]

PM2 also allows restarting an application once it exceeds a memory limit:
pm2 start big-array.js --max-memory-restart 20M

Another interesting thing about PM2 is its folder structure, especially where the application logs are redirected to files. Below is the directory structure:

$HOME/.pm2 will contain all PM2 related files
$HOME/.pm2/logs will contain all applications logs
$HOME/.pm2/pids will contain all applications pids
$HOME/.pm2/pm2.log PM2 logs
$HOME/.pm2/pm2.pid PM2 pid
$HOME/.pm2/rpc.sock Socket file for remote commands
$HOME/.pm2/pub.sock Socket file for publishable events
$HOME/.pm2/conf.js PM2 Configuration


references:
http://pm2.keymetrics.io/docs/usage/quick-start/
http://pm2.keymetrics.io/docs/usage/process-management/#max-memory-restart