Saturday, April 30, 2016

What is TensorFlow?


TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. 

The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

references:
https://www.tensorflow.org/

Google Machine Learning Products


Google Cloud Machine Learning provides modern machine learning services, with pre-trained models and a platform to generate one's own tailored models. Google's neural net-based ML platform has better training performance and increased accuracy compared to other large scale deep learning systems.  Major Google applications use Cloud Machine Learning, including Photos (image search), the Google app (voice search), Translate, and Inbox (Smart Reply). 

Google Cloud Machine Learning Platform makes it easy for developers to build accurate, large-scale machine learning models in a short amount of time. It is portable, fully managed, and integrated with other Google Cloud data platform products such as Google Cloud Storage and Google BigQuery, so you can easily train your models.

Google Cloud Vision API enables apps to understand the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API. It quickly classifies images into thousands of categories (e.g., "sailboat", "Eiffel Tower"), detects individual objects and faces within images, and finds and reads printed words contained within images.
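As a rough illustration, a label-detection request can be posted directly to the REST endpoint. The sketch below assumes the v1 images:annotate endpoint and request shape from the Vision API documentation; the API key, image file name and class name are placeholders.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class VisionLabelSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = "API_KEY"; // placeholder
        // Vision expects the image bytes base64-encoded inside the JSON request
        String image = Base64.getEncoder()
                .encodeToString(Files.readAllBytes(Paths.get("photo.jpg")));
        String body = "{\"requests\":[{\"image\":{\"content\":\"" + image + "\"},"
                + "\"features\":[{\"type\":\"LABEL_DETECTION\",\"maxResults\":5}]}]}";

        URL url = new URL("https://vision.googleapis.com/v1/images:annotate?key=" + apiKey);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // The JSON response lists label descriptions with confidence scores
        System.out.println("HTTP " + conn.getResponseCode());
    }
}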

Google Cloud Speech API enables apps to convert audio to text by applying neural network models in an easy-to-use API. The API recognizes over 80 languages and variants, to support your global user base. You can transcribe the text of users dictating to an application’s microphone or enable command-and-control through voice, among many other use cases.

references:
https://cloud.google.com/products/machine-learning/

Connect Android to TV: A Few Wireless Options


There’s little to beat the wow factor associated with beaming video straight from a tablet to a TV. The good thing about Android is that there’s more than one way to do it. Miracast is a wireless standard that creates an ad-hoc network between two devices, typically your tablet and a set-top box which supports Miracast.

An increasing number of TVs support Miracast without the need for extra hardware. Miracast uses H.264 for video transmission, which means efficient compression and decent, full HD picture quality. Better yet, Miracast supports Digital Rights Management (DRM), which means services such as iPlayer and YouTube can be streamed to a TV. Not all services work, though – see Playing Back Video below. Android devices running Android 4.2 and later support Miracast.

An alternative is Google’s Chromecast. This inexpensive £30 ‘dongle’ plugs into a spare HDMI port on your TV and connects to your wireless network. Chromecast support is burgeoning, which means content from services such as iPlayer, Netflix, BT Sport and others can be played with the Chromecast dongle doing all the legwork instead of your tablet, and that’s good news for battery life.

As of July 2014, it’s possible to use Chromecast to mirror the display on your Android device, allowing you to hit play on a tablet and have (non DRM-protected) video start playing on your TV. The same goes for anything the screen can display, including apps, games and photos.

references:
http://www.pcadvisor.co.uk/how-to/google-android/how-connect-android-tv-summary-3533870/

What is MHL?

MHL is an innovative technology that fundamentally changes the way we work and play. Transform your smartphone into a home theater system and stream your favorite TV channels, movies, and home videos in high-definition. Experience the music you love with immersive surround sound. Play mobile games on the big screen, while charging your phone or even using it as a controller.

Some of the features are:

PLUG & PLAY
Power up while you level up with MHL! Play your mobile games on the big screen with no lag, while charging your phone at the same time. MHL makes gaming experiences more interactive and fun by transforming your mobile device into a game console or controller. The next stage of mobile gaming has arrived. MHL — wired for power and performance.

FAST CHARGING
MHL is a wired solution where your TV can charge your mobile device at up to 40 W. So what’s with the wire? Current wireless approaches consume a lot of power and can cause noticeable lag. MHL is the missing link. It’s time to worry less about your battery draining and more about the game at hand.

HIGH RESOLUTION
When it comes to visual entertainment, details make all the difference in the world. High resolution video turns dreams into reality. MHL currently supports up to 8K video resolution, allowing you to see your content the way it was meant to be seen.

IMMERSIVE AUDIO
Do you have music on your smartphone that you want to share with friends? Sometimes hearing is believing. MHL delivers enhanced audio through its support of Dolby® Atmos and DTS:X™. Get lost in the sounds you love with MHL.


REMOTE CONTROL
Mobile entertainment shouldn’t be a chore. It’s time to kick back and relax! Once your smartphone or tablet is connected with MHL, use your TV remote to navigate your favorite apps, games, music, videos, and photos on the big screen.

NO LAG
Don’t let a bad connection slow you down. Lag can ruin even the best game. MHL offers a zero-lag, seamless connection from mobile devices to TVs. At MHL, connectivity is our universal language.


references:

iOS: how to pass context data between Objective-C and C code in networking

In the example below, NetworkRequester is an Objective-C class that needs to pass context (CTX) information into the C networking code. In the initiateRequest method, the call to makeRequest passes self as the context data.

In makeRequest, the line CFStreamClientContext CTX = { 0, ctxData, NULL, NULL, NULL }; attaches that context data to the stream, so that when readCallBack is invoked the data is available again.

@implementation NetworkRequester

@synthesize requestListener;

-(void) initiateRequest
{
    testFinished = 0;
    NSLog(@"Makign network request call");
    makeRequest("http://s3.amazonaws.com/test/samplefiles/est_4mb.txt",(__bridge void *)(self));
}

int makeRequest(const char *requestURL, void* ctxData)
{
    NSLog(@"totalRead starting make request %ld",(long)totalRead);
    CFReadStreamRef readStream;
    CFHTTPMessageRef request;
    CFStreamClientContext CTX = { 0, ctxData, NULL, NULL, NULL }; /* version 0, info = our context data */
    
    NSString* requestURLString = [ [ NSString alloc ] initWithCString:requestURL ];
    NSURL *url = [ NSURL URLWithString: requestURLString ];
    
    CFStringRef requestMessage = CFSTR("");
    
    request = CFHTTPMessageCreateRequest(kCFAllocatorDefault, CFSTR("GET"),
                                         (__bridge CFURLRef) url, kCFHTTPVersion1_1);
    if (!request) {
        return -1;
    }
    CFHTTPMessageSetBody(request, (CFDataRef) requestMessage);
    readStream = CFReadStreamCreateForHTTPRequest(kCFAllocatorDefault, request);
    
    CFRelease(request);
    
    if (!readStream) {
        return -1;
    }
    
    if (!CFReadStreamSetClient(readStream, kCFStreamEventOpenCompleted |
                               kCFStreamEventHasBytesAvailable |
                               kCFStreamEventEndEncountered |
                               kCFStreamEventErrorOccurred,
                               readCallBack, &CTX))
    {
        CFRelease(readStream);
        return -1;
    }
    
    /* Schedule the stream on the run loop and open it, as in the run-loop example further below */
    CFReadStreamScheduleWithRunLoop(readStream, CFRunLoopGetCurrent(),
                                    kCFRunLoopCommonModes);
    
    if (!CFReadStreamOpen(readStream)) {
        CFReadStreamSetClient(readStream, 0, NULL, NULL);
        CFReadStreamUnscheduleFromRunLoop(readStream,
                                          CFRunLoopGetCurrent(),
                                          kCFRunLoopCommonModes);
        CFRelease(readStream);
        return -1;
    }
    
    return 0;
}

void readCallBack(
                  CFReadStreamRef stream,
                  CFStreamEventType eventType,
                  void *clientCallBackInfo)
{
    UInt8 buffer[204800];
    CFIndex bytes_recvd = 0;
    NetworkRequester* requester = (__bridge NetworkRequester*)clientCallBackInfo;
    
    if (eventType == kCFStreamEventHasBytesAvailable) {
        bytes_recvd = CFReadStreamRead(stream, buffer, sizeof(buffer));
        /* requester is the NetworkRequester instance that was passed as context data,
           so the bytes read here can be handed back to it (e.g. to its requestListener) */
        NSLog(@"read %ld bytes for %@", (long)bytes_recvd, requester);
    }
}

Now the Objective-C object is available inside the plain C callback, because the context data set in the CFStreamClientContext is handed back as clientCallBackInfo.



references:

What is CDN?

The Wikipedia entry for CDN states: “A content delivery network or content distribution network (CDN) is a large distributed system of servers deployed in multiple data centers across the Internet. The goal of a CDN is to serve content to end-users with high availability and high performance. CDNs serve a large fraction of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social networks.”

Additional hops mean more time to render data from a request on the user’s browser. The speed of delivery is also constrained by the slowest network in the chain. The solution is a CDN that places servers around the world and, depending on where the end user is located, serves the user with data from the closest or most appropriate server. CDNs reduce the number of hops needed to handle a request.

CDNs focus on improving performance of web page delivery. CDNs like the Akamai CDN support progressive downloads, which optimizes delivery of digital assets such as web page images. CDN nodes and servers are deployed in multiple locations around the globe over multiple Internet backbones. These nodes cooperate with each other to satisfy data requests by end users, transparently moving content to optimize the delivery process. The larger the size and scale of a CDN’s Edge Network deployments, the better the CDN.



references:

Android - animation in Image View

The goal was to create an animation effect of fading one image and then having the other appear. The code and settings below achieve this.

ImageView view = (ImageView) findViewById(R.id.slide_show_image_view);
    if (view != null) {
        Animation animation = AnimationUtils.loadAnimation(getApplicationContext(), R.anim.fade);
        view.startAnimation(animation);
        view.setImageBitmap(bitmapCache);   // the bitmap that the animation reveals
    }

This requires an anim resource file (res/anim/fade.xml, matching R.anim.fade above) like the one below:
  
    "1.0" encoding="utf-8"?
    "http://schemas.android.com/apk/res/android"
android:interpolator="@android:anim/accelerate_interpolator"
    
   
android:fromAlpha="0"
android:toAlpha="1"
android:duration="2000"
   
    
   
android:startOffset="8000"
android:fromAlpha="1"
android:toAlpha="0"
android:duration="2000"
   
    
    

It is important to keep the animation durations right so that it gives a good effect.
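One way to time the image swap precisely is an AnimationListener, so the new bitmap is only set once the fade-out has finished. This is a rough sketch under my own naming assumptions (the fade_out/fade_in resources and nextBitmap are illustrative, and view is assumed to be a field or effectively final), not the exact code used above.

final Animation fadeOut = AnimationUtils.loadAnimation(this, R.anim.fade_out); // assumed resource
final Animation fadeIn  = AnimationUtils.loadAnimation(this, R.anim.fade_in);  // assumed resource

fadeOut.setAnimationListener(new Animation.AnimationListener() {
    @Override public void onAnimationStart(Animation animation) { }
    @Override public void onAnimationRepeat(Animation animation) { }
    @Override public void onAnimationEnd(Animation animation) {
        // swap the image only after the old one has faded out, then fade the new one in
        view.setImageBitmap(nextBitmap);
        view.startAnimation(fadeIn);
    }
});
view.startAnimation(fadeOut);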

references:

Amazon AWS - Creating Application Server

Below are the main three tasks involved 

1. Create Security Group for Amazon EC2 Instance 
2. Create an IAM role 
3. Launch EC2 Instance 

When launching an instance, we associate one or more security groups with it; the rules in a security group control what traffic is allowed to reach the instance. Once created, these rules can be modified at any time and the changes take effect immediately.

This tutorial does the following when creating the security group:

1. Allow inbound HTTP access from anywhere
2. Allow inbound SSH traffic from your computer’s public IP so that you can connect to the instance.

Below are the tasks needed to configure the security group:

1. Decide who needs to have access. For example, access can be restricted to a single IP address, or to a range of IP addresses if you are behind a firewall. If you don’t know the range, 0.0.0.0/0 can be used (which allows access from anywhere).
2. Login to EC2 console https://console.aws.amazon.com/ec2/
3. In the navigation bar, verify that US West (Oregon) is the selected region.
4. In the navigation pane, click Security Groups, and then click Create Security Group.
5. Enter WebServerSG as the name of the security group, and provide a description.
6. Select your VPC from the list.
7. On the Inbound tab, add the rules as follows:
8. Click Add Rule, and then select SSH from the Type list. Under Source, select Custom IP and enter the public IP address range that you decided on in step 1 in the text box.
9. Click Add Rule, and then select HTTP from the Type list.
10. Click Create. 

After clicking Create, the new WebServerSG security group appears in the console.
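The same rules can also be created programmatically. Below is a rough sketch using the AWS SDK for Java (v1 class names); the VPC id and the SSH CIDR range are placeholders that would come from steps 1 and 6 above.

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupIngressRequest;
import com.amazonaws.services.ec2.model.CreateSecurityGroupRequest;
import com.amazonaws.services.ec2.model.IpPermission;

public class CreateWebServerSG {
    public static void main(String[] args) {
        AmazonEC2 ec2 = new AmazonEC2Client(); // uses the default credential chain

        // Create the WebServerSG group in the chosen VPC (vpc id is a placeholder)
        String groupId = ec2.createSecurityGroup(new CreateSecurityGroupRequest()
                .withGroupName("WebServerSG")
                .withDescription("Web server security group")
                .withVpcId("vpc-12345678")).getGroupId();

        // Rule 1: SSH only from your own public IP range (placeholder CIDR)
        IpPermission ssh = new IpPermission()
                .withIpProtocol("tcp").withFromPort(22).withToPort(22)
                .withIpRanges("203.0.113.0/24");
        // Rule 2: HTTP from anywhere
        IpPermission http = new IpPermission()
                .withIpProtocol("tcp").withFromPort(80).withToPort(80)
                .withIpRanges("0.0.0.0/0");

        ec2.authorizeSecurityGroupIngress(new AuthorizeSecurityGroupIngressRequest()
                .withGroupId(groupId)
                .withIpPermissions(ssh, http));
    }
}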

The next task is to create an IAM role. The purpose of an IAM role is to let you manage AWS credentials for software running on the instances: we create a role and configure it with the permissions that the software requires.

Below are the steps to create the IAM role with Full access 
To create an IAM role with full access to AWS
Open the Identity and Access Management (IAM) console at https://console.aws.amazon.com/iam/.
In the navigation pane, click Roles, and then click Create New Role.
On the Set Role Name page, enter a name for the role, and then click Next Step. Remember this name, as you'll need it when you launch your instance.
On the Select Role page, under AWS Service Roles, select Amazon EC2.
On the Attach Policy page, select the PowerUserAccess policy, and then click Next Step.
Review the role information and then click Create Role.

references:

Setting up with Amazon EC2

Below are the main tasks in setting up Amazon EC2 

1. Sign up with AWS 
2. Create an IAM user 
3. Create a Key pair
4. Create Virtual Private cloud 
5. Create security Group 

1) Sign up with AWS: When a developer signs up for AWS, the account is automatically signed up for all AWS services. One can get started with it for free.
2) Create an IAM user: Services in AWS such as EC2 require the developer to provide credentials when accessing them, so that the service can determine whether the developer has permission to access the resources. The console requires a password. The developer can create access keys for the AWS account to access the command line interface or the API. However, it is recommended to use AWS Identity and Access Management (IAM) instead: first create an IAM user, then add the user to an IAM group with administrative permissions or grant the user administrative permissions directly. The developer can then access AWS using a special URL and the credentials of the IAM user.
3) Create a key pair: AWS uses public-key cryptography to secure the login information for the instance. A Linux instance has no password; you use a key pair to launch the instance, then provide the private key when logging in using SSH. The key pair can be created using the EC2 console. Key pairs are per region, so if you plan to launch instances in multiple regions you need a key pair in each of those regions.
4) Create a Virtual Private Cloud: Amazon VPC lets a developer launch AWS resources into a virtual network that the developer has defined.
5) The last step is to create the security group: A security group acts as a firewall for the associated instances, controlling both inbound and outbound traffic at the instance level. Add rules to the security group that allow connecting to the instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere.


references:

httpbin.org A website for testing HTTP services.

This is a pretty interesting website.

Testing an HTTP Library can become difficult sometimes. RequestBin is fantastic for testing POST requests, but doesn't let you control the response. This exists to cover all kinds of HTTP scenarios. Additional endpoints are being considered.
        

All endpoint responses are JSON-encoded.
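For example, a plain HTTP client can be pointed at one of the endpoints (here /get, which echoes the request back as JSON). A minimal Java sketch:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpBinDemo {
    public static void main(String[] args) throws Exception {
        // /get echoes the query args, request headers and origin IP back as JSON
        URL url = new URL("http://httpbin.org/get?foo=bar");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}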

Some other similar websites include:

https://www.hurl.it/
http://requestb.in/
http://docs.python-requests.org/en/master/

references:


Testing Bandwidth Using Wireshark

Below are the full steps to do this.

1. Open a web browser, navigate to speedtest.net and start the test.

2. Start a packet capture (preferably without capture filters, just in case we miss some traffic) and start the download (or the service you are testing).

In my case I started downloading the Ubuntu image from their website, leaving Wireshark running in the background. Once the download completes, get back to Wireshark.

3. Apply display filters in Wireshark to display only the traffic you are interested in. It's usually quite simple: once you identify a packet belonging to the network flow you are interested in, right click on it > Conversation Filter > IP / TCP. This will isolate the IP / TCP traffic of interest.

The first method of seeing the bandwidth used is by selecting the menu items: Statistics > Protocol Hierarchy.

This screen gives you a breakdown of bandwidth by protocol. Since in this test we are observing HTTP, we drill down to TCP and observe the Mbits/sec.

Note that Wireshark calculates the speed as an average over the whole capture, which makes it appear lower than the speed test website's measurement.

references:

Tesseract API - Providing training data

Tesseract is fully trainable. Tesseract needs to know about different shapes of the same character by having different fonts separated explicitly. The number of fonts is limited to 64 fonts. Note that runtime is heavily dependent on the number of fonts provided, and training more than 32 will result in a significant slow-down.

Some of the procedure is inevitably manual. As much automated help as possible is provided. The tools referenced below are all built in the training subdirectory.

The first step is to determine the full character set to be used, and prepare a text or word processor file containing a set of examples. Below are a few points to remember while generating the sample files:

1. Make sure there is a minimum number of samples for each character. 10 is good, 5 is OK.
2. There should be more samples of the more frequent characters - at least 20.
3. Don't make the mistake of grouping all the non-letters together. Make the text more realistic. For example, The quick brown fox jumps over the lazy dog. 0123456789 !@#$%^&(),.{}<>/? is terrible. Much better is The (quick) brown {fox} jumps! over the $3,456.78 #90 dog & duck/goose, as 12.5% of E-mail from aspammer@website.com is spam? This gives the textline finding code a much better chance of getting sensible baseline metrics for the special characters.

In 3.0.3 there is an automated method. We need to prepare a UTF-8 text file (training_text.txt) containing training text according to the above specification, and obtain TrueType/OpenType font files for the fonts we wish to recognize. Then run the following command for each font in turn to create a matching tif/box pair.

training/text2image --text=training_text.txt --outputbase=[lang].[fontname].exp0 --font='Font name' --fonts_dir=/path/to/fonts
e.g 
training/text2image --text=training_text.txt --outputbase=eng.TimesNewRomanBold.exp0 --font='Times New Roman Bold' --fonts_dir=/usr/share/fonts

references:

Wednesday, April 27, 2016

iOS preventing blocking when working with streams

The problem is that when working with CFStreams, reading from the socket can take a long time. If the stream is handled synchronously, the whole thread has to wait on it.
There are two ways to avoid this 

1. Using run loop - Register to receive stream-related events and schedule stream on a run loop. When stream related event occurs, the callback function (specified by the registration call) is called. 

2. Polling - For read streams, find out if there are bytes to read before reading from the stream. For write streams, find out whether the stream can be written without blocking before writing to the stream. 

Using a run loop to prevent blocking: this is done by calling CFReadStreamScheduleWithRunLoop.

int makeRequest(const char *requestURL)
{
    CFReadStreamRef readStream;
    CFHTTPMessageRef request;
    CFStreamClientContext CTX = { 0, NULL, NULL, NULL, NULL };
    
    NSString* requestURLString = [ [ NSString alloc ] initWithCString:requestURL ];
    NSURL *url = [ NSURL URLWithString: requestURLString ];
    
    CFStringRef requestMessage = CFSTR("");
    
    request = CFHTTPMessageCreateRequest(kCFAllocatorDefault, CFSTR("GET"),
                                         (__bridge CFURLRef) url, kCFHTTPVersion1_1);
    if (!request) {
        return -1;
    }
    CFHTTPMessageSetBody(request, (CFDataRef) requestMessage);
    readStream = CFReadStreamCreateForHTTPRequest(kCFAllocatorDefault, request);
    CFRelease(request);
    
    if (!readStream) {
        return -1;
    }
    
    if (!CFReadStreamSetClient(readStream, kCFStreamEventOpenCompleted |
                               kCFStreamEventHasBytesAvailable |
                               kCFStreamEventEndEncountered |
                               kCFStreamEventErrorOccurred,
                               readCallBack, &CTX))
    {
        CFRelease(readStream);
        return -1;
    }
    
    /* Add to the run loop */
    CFReadStreamScheduleWithRunLoop(readStream, CFRunLoopGetCurrent(),
                                    kCFRunLoopCommonModes);
    
    if (!CFReadStreamOpen(readStream)) {
        CFReadStreamSetClient(readStream, 0, NULL, NULL);
        CFReadStreamUnscheduleFromRunLoop(readStream,
                                          CFRunLoopGetCurrent(),
                                          kCFRunLoopCommonModes);
        CFRelease(readStream);
        return -1;
    }
    
    return 0;
}


void readCallBack(
                  CFReadStreamRef stream,
                  CFStreamEventType eventType,
                  void *clientCallBackInfo)
{
    UInt8 buffer[2048];
    CFIndex bytes_recvd = 0;
    
    NSLog(@"Stream event type :%lu",eventType);
    if(eventType == kCFStreamEventOpenCompleted ||
       eventType == kCFStreamEventHasBytesAvailable)
    {
        bytes_recvd = CFReadStreamRead(stream, buffer, sizeof(buffer));
        totalRead += bytes_recvd;
        if (bytes_recvd > 0)
        {
            NSLog(@"read bytes :%ld, total %d",bytes_recvd,totalRead);
        }
    }
    else if(eventType == kCFStreamEventErrorOccurred)
    {
        NSLog(@"Error Encountered :%d",totalRead);
    }
    else if(eventType == kCFStreamEventEndEncountered)
    {
        NSLog(@"End Encountered :%d",totalRead);
    }
}


references:

Internals of Speed test app

Below were the steps decoded from running the speed test app

1. The application sends ICMP ping messages.

The application seems to ping IP1 with a TTL value of 1 initially. This comes back with a TTL-expired response. It then pings again with the TTL value increased.
This is repeated up to 8 times, each attempt resulting in the Time-to-live exceeded error shown in the screenshot below.

Then it makes TCP connections to port 8080. There seem to be 3 such connections, from 3 different local ports on the device; from the traces, the ports are 51085, 51086 and 51087.

Each data packet carries 1440 bytes of data back to the device.

Trying it on another network, the ping is sent to multiple IPs and, based on the TTL, the best one is picked. For example, 103.18.67.61 and 72.21.92.82 are the two destinations to which the ping is sent. The packet size of the transfer then varies; it sends only 1352-byte packets in the high-speed network case.

As can be seen from the comparison, the average packet size differs on the high-speed network; both of the tests run for around 25 seconds.




references:



Tuesday, April 26, 2016

Ping and TTL some details

The ICMP ping command is associated with a TTL value. At each hop, the TTL value is reduced by 1. For example:

Suppose there are 6 hops to 8.8.8.8.
Then a minimum TTL value of 6 is required for the ICMP packets to reach 8.8.8.8 and get a ping reply; you cannot ping 8.8.8.8 with a TTL value of 5 or less.


Ping Results with different TTL values
[root@server ~]# ping 8.8.8.8 -t 5                 (-t 5  is for custom TTL value of 5)

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 192.168.1.1  icmp_seq=1 Time to live exceeded
From 192.168.1.1  icmp_seq=2 Time to live exceeded
From 192.168.1.1  icmp_seq=3 Time to live exceeded
From 192.168.1.1  icmp_seq=4 Time to live exceeded


[root@server ~]# ping 8.8.8.8 -t 6             (-t 6  is for custom TTL value of 6)
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=55 time=48.9 ms
64 bytes from 8.8.8.8: icmp_req=2 ttl=55 time=49.5 ms
64 bytes from 8.8.8.8: icmp_req=3 ttl=55 time=50.4 ms
64 bytes from 8.8.8.8: icmp_req=4 ttl=55 time=49.4 ms


references:

Monday, April 25, 2016

Playing around with card.io SDK




This is a simple SDK that can read card information. Below is how we can instantiate and bring up this controller. 

We need to call [CardIOUtilities preload]. From the documentation, the best time to call preload is when displaying a view from which card.io might be launched.
The preload method prepares card.io to launch faster. Calling preload is optional but suggested; on an iPhone 5S, for example, preloading makes card.io launch ~400ms faster.
preload works in the background; the call to preload returns immediately.

CardIOPaymentViewController *scanViewController = [[CardIOPaymentViewController alloc] initWithPaymentDelegate:self];
scanViewController.modalPresentationStyle = UIModalPresentationFormSheet;
[self presentViewController:scanViewController animated:YES completion:nil];

By default, this brings up the camera screen with the card.io / PayPal logo shown. The logo can be hidden with scanViewController.hideCardIOLogo = YES;

references:

Android How to check if Activity is finishing due to orientation change?

In Android, whenever there is a change in orientation, the previous activity instance gets destroyed and a new one gets created. The sequence of lifecycle callbacks is:

onPause
onDestroy  
onCreate 

onResume

Now suppose we start a service, such as a window overlay service, in onPause when the activity goes to the background. An orientation change also triggers onPause, so the service would get started even though the activity is only being recreated, and a check in onResume for whether something was already started will not give the expected result. One way to handle this is to check whether the activity pause/destroy is happening due to a configuration change, like below:

if(!isChangingConfigurations()) {
    startService(new Intent(getApplicationContext(), OverlayService2.class));
}
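Put together, a minimal sketch of the onPause override might look like this (OverlayService2 is the service from the snippet above; isChangingConfigurations() requires API 11+):

@Override
protected void onPause() {
    super.onPause();
    // Skip starting the overlay service when onPause is only due to the
    // activity being recreated for an orientation change
    if (!isChangingConfigurations()) {
        startService(new Intent(getApplicationContext(), OverlayService2.class));
    }
}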

references: 
http://stackoverflow.com/questions/9620841/how-to-distinguish-between-orientation-change-and-leaving-application-android



Amazon Hosting A Web app Architecture Overview

The idea is to learn how to deploy a scalable, robust web app on AWS infrastructure. The aims are listed below:

- Create a virtual server, called an EC2 instance, and use it as an application server in the cloud.
- Create a database server, called a DB instance.
- Deploy a sample web app to the application server.
- Set up auto scaling and load balancing to distribute traffic across a minimum number of application servers.
- Associate a domain with the web app.

Below is the web app hosting architecture 

For low-cost, reliable application and database servers, Amazon EC2 provides virtual servers in the cloud. The developer can control the protocols, ports, and source IP address ranges that can access the virtual servers. Amazon EBS provides a persistent file system for Amazon EC2 virtual servers. Amazon RDS provides a cost-efficient and resizable database server that is easy to administer.

The architecture described below is an example for a web app.
The web application tier runs on EC2 instances in public subnets. Access to the EC2 instances over SSH is controlled by a security group, which acts as a firewall.
The auto scaling group maintains a fleet of EC2 instances that can scale to handle the current load. The auto scaling group spans multiple availability zones, and the load balancer is adjusted accordingly whenever the auto scaling group launches or terminates an EC2 instance.

Access to the DB instances from the EC2 instances is controlled by a security group.



references:

Android Dealing with Out of memory errors on Image load


1. When an image is too large (in width and height) to be uploaded as a GPU texture, it gives the error below:
ingmobile.com.eyeome W/OpenGLRenderer Bitmap too large to be uploaded into a texture
04-21 07:08:21.080  17783-17783/livingmobile.com.eyeome W/OpenGLRenderer Bitmap too large to be uploaded into a texture
04-21 07:08:21.080  17783-17783/livingmobile.com.eyeome W/OpenGLRenderer Bitmap too large to be uploaded into a texture
04-21 07:08:21.090  17783-17783/livingmobile.com.eyeome W/OpenGLRenderer Bitmap too large to be uploaded into a texture
04-21 07:08:21.110  17783-17783/livingmobile.com.eyeome W/OpenGLRenderer Bitmap too large to be uploaded into a texture
04-21 07:08:21.130  17783-17783/livingmobile.com.eyeome W/OpenGLRenderer Bitmap too large to be uploaded into a texture

The solution for this is to resize (downsample) the image.

2. Out of memory error while decoding the stream

java.lang.OutOfMemoryError
at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:493)
at android.graphics.BitmapFactory.decodeFile(BitmapFactory.java:299)
at ln.SlideShowActivity.setThumbnailImageAndSave(SlideShowActivity.java:381)
at ln.SlideShowActivity$3.run(SlideShowActivity.java:255)
at android.os.Handler.handleCallback(Handler.java:605)
at android.os.Handler.dispatchMessage(Handler.java:92)
at android.os.Looper.loop(Looper.java:137)
at android.app.ActivityThread.main(ActivityThread.java:4424)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:511)
at com.android.internal.os.ZygoteInit$MethodAn


public void setThumbnailImageAndSave(final ImageView imgView, File imgFile) {
    
    if(bitmapCache != null)
    {
        bitmapCache.recycle();
        bitmapCache = null;
    }
    /* There isn't enough memory to open up more than a couple camera photos */
    /* So pre-scale the target bitmap into which the file is decoded */
    
    /* Get the size of the ImageView */
    int targetW = imgView.getWidth();
    int targetH = imgView.getHeight();
    
    /* Get the size of the image */
    BitmapFactory.Options bmOptions = new BitmapFactory.Options();
    bmOptions.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(imgFile.getAbsolutePath(), bmOptions);
    int photoW = bmOptions.outWidth;
    int photoH = bmOptions.outHeight;
    
    /* Figure out which way needs to be reduced less */
    int scaleFactor = 1;
    if ((targetW > 0) && (targetH > 0)) {   /* avoid divide-by-zero when the view is not laid out yet */
        scaleFactor = Math.min(photoW/targetW, photoH/targetH);
    }
    
    /* Set bitmap options to scale the image decode target */
    bmOptions.inJustDecodeBounds = false;
    bmOptions.inSampleSize = scaleFactor;
    bmOptions.inPurgeable = true;
    
    /* Decode the JPEG file into a Bitmap */
    bitmapCache = BitmapFactory.decodeFile(imgFile.getAbsolutePath(), bmOptions);
    
    if(bitmapCache != null) {
        /* Associate the Bitmap to the ImageView */
        imgView.setImageBitmap(bitmapCache);
        imgView.setVisibility(View.VISIBLE);
    }
}

references:

Saturday, April 23, 2016

Experimenting Tesseract API demo

Checked out the source code available at the location in the references section.
Trying to run it on the simulator gave an error.

It appeared that the Other Linker Flags setting mentioned stdc++, and that seemed to be the reason.
When archiving, the error did not appear, so I decided to move forward without digging into it.

Ran the app and it presented two options, “Start Camera” and “Show Image Picker”.

Using an image that was a screenshot of the app, it gave almost proper results. Taking a picture of the numbers
on a VIC card was also recognized correctly. The main failure was when trying to read the credit/debit card numbers
embossed on the card; these were not recognized correctly.

This has been the case with almost all the Tesseract sample apps I came across: the font of the bank card numbers was somehow not getting recognized correctly.
Maybe that was also because of the colors on the card.

Looking closer into the app, as usual, the library libtesseract_full.a was included in the frameworks, and along with it the OpenGLES, QuartzCore and CoreGraphics frameworks were also included.

One thing to note was that the Resources folder contains a “tessdata” folder with a number of data files for the English language.
It seems that the trick is in the training of this SDK.

references:

Wednesday, April 20, 2016

Android using external Storage directory

The getExternalFilesDir() API returns a directory for files that are app-specific. When the user uninstalls the application, this directory and its contents are deleted. The media scanner also does not read from these directories. This directory is meant for content that only belongs to the user of your app, such as edited photos. Beginning with Android 4.4, the getExternalFilesDirs() method returns both the internal-storage and SD card directories that the device declares as external storage.

Now, to save files that can also be accessed by other apps, we should use the getExternalStoragePublicDirectory method, passing a type such as DIRECTORY_MUSIC, DIRECTORY_PICTURES or DIRECTORY_RINGTONES. The code looks like this:

public File getAlbumStorageDir(String albumName) {
    
    File file = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES), albumName);
    if (!file.mkdirs()) {
        Log.e(LOG_TAG, "Directory not created");
    }
    return file;
}
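A quick usage sketch (the album and file names are illustrative; the WRITE_EXTERNAL_STORAGE permission is required, and note that mkdirs() also returns false when the directory already exists, so that log line is not necessarily an error):

File album = getAlbumStorageDir("MyAlbum");
File photo = new File(album, "photo_001.jpg");   // e.g. copy or save image data into this file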

references

Monday, April 11, 2016

What is ARP

Address Resolution Protocol (ARP) is a protocol for mapping an Internet Protocol address (IP address) to a physical machine address that is recognized in the local network. For example, in IP Version 4, the most common level of IP in use today, an address is 32 bits long. In an Ethernet local area network, however, addresses for attached devices are 48 bits long. (The physical machine address is also known as a Media Access Control or MAC address.) A table, usually called the ARP cache, is used to maintain a correlation between each MAC address and its corresponding IP address. ARP provides the protocol rules for making this correlation and providing address conversion in both directions.


When an incoming packet destined for a host machine on a particular local area network arrives at a gateway, the gateway asks the ARP program to find a physical host or MAC address that matches the IP address. The ARP program looks in the ARP cache and, if it finds the address, provides it so that the packet can be converted to the right packet length and format and sent to the machine. If no entry is found for the IP address, ARP broadcasts a request packet in a special format to all the machines on the LAN to see if one machine knows that it has that IP address associated with it. A machine that recognizes the IP address as its own returns a reply so indicating. ARP updates the ARP cache for future reference and then sends the packet to the MAC address that replied.

references:

MapReduce Basics

MapReduce is a processing technique and a programming model for distributed computing, most commonly implemented in Java (as in Hadoop). MapReduce involves two important tasks: Map and Reduce. Map takes a set of data and converts it into another set of data, where individual elements are broken down into tuples (key/value pairs). The Reduce task takes the output from a map as its input and combines those tuples into a smaller set of tuples. As the name MapReduce implies, the reduce task is always performed after the map job.

The major advantage of MapReduce is that it is easy to scale data processing over multiple nodes. 

Below is how the algorithm works.

A MapReduce program executes mainly in three stages: the map stage, the shuffle stage, and the reduce stage.

Map stage: The map or mapper’s job is to process the input data. Generally the input is in the form of a text file or directory and is stored in HDFS. The input file is passed to the mapper function line by line. The mapper processes the data and creates several chunks of data.

Reduce stage: This stage is the combination of the shuffle stage and the reduce stage. The reducer’s job is to process the data that comes from the mapper. After processing, it produces a new set of output which is stored in HDFS.

During a MapReduce job, Hadoop sends the Map and Reduce tasks to the appropriate servers in the cluster. The framework manages all the details of data processing, such as issuing tasks, verifying task completion, and copying data around the cluster between nodes. Most of the computing takes place on nodes with the data on local disks, which reduces network traffic.

After completion of the tasks, the cluster collects and reduces the data to form the final result and sends it back to the Hadoop server.


From a Java perspective, MapReduce sees both the input and the output of a job as sets of key/value pairs.
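For example, the classic word-count job in Hadoop's Java MapReduce API looks roughly like the sketch below: the mapper emits (word, 1) pairs and the reducer sums the counts for each word (the driver/job setup is omitted for brevity).

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);              // emit (word, 1)
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();                      // sum all counts for this word
            }
            context.write(key, new IntWritable(sum));  // emit (word, total)
        }
    }
}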

Below are a few terminologies:

Payload: Applications implement the Map and Reduce functions, and form the core of the job.
Mapper: Maps the input key/value pairs to a set of intermediate key/value pairs.
NameNode: Node that manages the Hadoop Distributed File System (HDFS).
DataNode: Node where data is present in advance, before any processing takes place.
MasterNode: Node where the JobTracker runs and which accepts job requests from clients.
SlaveNode: Node where the Map and Reduce programs run.
JobTracker: Schedules jobs and tracks the jobs assigned to the Task Tracker.
Task Tracker: Tracks the tasks and reports status to the JobTracker.
Job: An execution of a mapper and reducer across a dataset.
Task: An execution of a mapper or reducer on a slice of data.
Task Attempt: A particular instance of an attempt to execute a task on a SlaveNode.



references:


Sunday, April 10, 2016

Java Split() method


split() takes a regular expression, and in regex | is a metacharacter representing the OR operator. You need to escape that character using \ (written in a String literal as "\\", since \ is also a metacharacter in String literals and requires another \ to escape it).

With this said, to split "hello|world" into "hello" and "world", we need to call split("\\|").

To split on a double pipe, as in "hello||world", we have to call split("\\|\\|").
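A small self-contained example (the values are illustrative):

public class SplitDemo {
    public static void main(String[] args) {
        String[] parts = "hello|world".split("\\|");       // ["hello", "world"]
        System.out.println(parts[0] + ", " + parts[1]);

        String[] parts2 = "hello||world".split("\\|\\|");  // ["hello", "world"]
        System.out.println(parts2[0] + ", " + parts2[1]);
    }
}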

references
http://stackoverflow.com/questions/10796160/splitting-a-java-string-by-the-pipe-symbol-using-split

Tuesday, April 5, 2016

Converting Time to Multiple Timezones

Date startTime = new Date(); // current date time
TimeZone pstTimeZone = TimeZone.getTimeZone("America/Los_Angeles");
DateFormat formatter = DateFormat.getDateInstance(); // just date, you might want something else
formatter.setTimeZone(pstTimeZone);
String formattedDate = formatter.format(startTime);

OR 

TimeZone pacificTimeZone = TimeZone.getTimeZone("America/Los_Angeles");
long currentTime = new Date().getTime();
long convertedTime = currentTime + pacificTimeZone.getOffset(currentTime);
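A small self-contained sketch that formats the same instant for two zones (the zone IDs and date pattern are illustrative):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class MultiZoneDemo {
    public static void main(String[] args) {
        Date now = new Date();   // a single instant in time
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z");

        fmt.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));
        System.out.println("Pacific: " + fmt.format(now));

        fmt.setTimeZone(TimeZone.getTimeZone("Asia/Kolkata"));
        System.out.println("India:   " + fmt.format(now));
    }
}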

references:

Angular - Templating Images and Links

The whole exercise here is to display a list of phone thumbnails, each linking to a detail page. This is done by the following snippet.

<ul class="phones">
  <li ng-repeat="phone in phones | filter:query | orderBy:orderProp" class="thumbnail">
    <a href="#phones/{{phone.id}}" class="thumb"><img ng-src="{{phone.imageUrl}}" alt="{{phone.name}}"></a>
    <a href="#phones/{{phone.id}}">{{phone.name}}</a>
    <p>{{phone.snippet}}</p>
  </li>
</ul>

To dynamically generate links that will in the future lead to phone detail pages, we used the now-familiar double-curly brace binding in the href attribute values. In step 2, we added the {{phone.name}} binding as the element content. In this step the {{phone.id}} binding is used in the element attribute.

We also added phone images next to each record using an image tag with the ngSrc directive. That directive prevents the browser from treating the Angular {{ expression }} markup literally and initiating a request to an invalid URL such as http://localhost:8000/app/{{phone.imageUrl}}, which it would have done had we specified the binding in a regular src attribute. Using the ngSrc directive prevents the browser from making an HTTP request to an invalid location.

references:

Monday, April 4, 2016

Angular JS XHR and Dependency Injection

This section explains how to load the data over HTTP. This is done by utilizing Angular's $http service.

Below is the code that replaces the hardcoded data with data fetched via the $http service.

var phonecatApp = angular.module('phonecatApp', []);

phonecatApp.controller('PhoneListCtrl', function($scope, $http) {
  $http.get('phones/phones.json').success(function(data) {
    $scope.phones = data;
  });
  $scope.orderProp = 'age';
});

Below is how the Angular injector inspects the dependencies and resolves them:

1. The injector identifies the $http service as a dependency of PhoneListCtrl.
2. The injector checks whether $http has already been instantiated.
3. If not instantiated yet, it uses the factory function for $http to construct it.
4. The injector provides the singleton instance of the $http service to the PhoneListCtrl controller.


references: