Thursday, July 31, 2014

RTCP and Type Of RTCP packets

RTCP 

The RTP Control Protocol (RTCP) is based on periodic transmission of control packets to all participants in the session, using the same distribution mechanism as the data packets. 
RTCP was first specified in RFC 1889, which has been obsoleted by RFC 3550. 
Typically, RTCP runs over UDP. RTCP does not have a well-known UDP port; instead, a port is allocated dynamically and then signalled using protocols such as SDP and H.245. 


Below are the major functions of RTCP:

- It provides feedback on the quality of data transmission. Different types of packets are used for this. 
- It carries a persistent transport-level identifier for an RTP source called the Canonical Name, or CNAME. The SSRC may change from time to time, but the CNAME remains the same, so it is used to identify a participant across a session. RTCP may also carry extra information about a participant, such as an email address. 
- By having each participant send its control packets to all the others, each can independently observe the number of participants. This number is used to calculate the rate at which control packets are sent: the more users in the session, the less frequently an individual source sends them. 
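The scaling rule in the last point can be sketched roughly in code. This is a simplified illustration of the RFC 3550 idea only; the real algorithm also randomizes the interval, splits bandwidth between senders and receivers, and applies "reconsideration" rules, all of which are omitted here:

```python
def rtcp_interval(members: int, avg_rtcp_size: int, session_bw_bps: float) -> float:
    """Simplified RTCP report interval in seconds (spirit of RFC 3550).

    RTCP traffic is limited to 5% of the session bandwidth, shared by
    all members, with a 5-second floor on the interval.
    """
    rtcp_bw = 0.05 * session_bw_bps / 8           # bytes/sec budget for all RTCP
    interval = members * avg_rtcp_size / rtcp_bw  # seconds between my reports
    return max(5.0, interval)

# More participants -> each source reports less often.
small = rtcp_interval(members=4, avg_rtcp_size=100, session_bw_bps=64_000)
large = rtcp_interval(members=400, avg_rtcp_size=100, session_bw_bps=64_000)
```

With a 64 kbps session, 4 members hit the 5-second floor, while 400 members stretch the interval to 100 seconds.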

Below are the types of RTCP packets:

1. SR: Sender Report, carrying transmission and reception statistics from participants that are active senders. 
2. RR: Receiver Report, carrying reception statistics from participants that are NOT active senders. 
3. SDES: Source Description items, including the CNAME. 
4. BYE: Indicates the end of participation. 
5. APP: Application-specific functions. 
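These five types are carried as codes 200-204 in the packet type (PT) field of the RTCP common header. A small sketch of reading that field (a minimal illustration, not a full RTCP parser):

```python
import struct

# Packet type codes from RFC 3550 section 12.1
RTCP_TYPES = {200: "SR", 201: "RR", 202: "SDES", 203: "BYE", 204: "APP"}

def rtcp_packet_type(packet: bytes) -> str:
    """Read the packet type from an RTCP common header (V/P/count, PT, length)."""
    version_flags, pt, length = struct.unpack("!BBH", packet[:4])
    if version_flags >> 6 != 2:
        raise ValueError("not an RTP version 2 control packet")
    return RTCP_TYPES.get(pt, f"unknown({pt})")

# A minimal Sender Report header: V=2 (0x80), PT=200, length=6
print(rtcp_packet_type(bytes([0x80, 200, 0x00, 0x06])))  # -> SR
```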

References:
http://www.siptutorial.net/RTP/rtcp.html

Application Level Framing in RTP

RTP is a protocol framework that is deliberately not complete. It does not mandate certain structures and can be modified to suit an application when required. This characteristic is known as application-level framing. 

In order to extend or modify RTP, a profile that defines the changes is needed. An RTP profile defines extensions or modifications to RTP for a class of applications. Participants in an RTP session should agree on a common format. Several header fields can be adapted for a specific application. 

The extension bit may be set to indicate that the fixed header is followed by exactly one header extension. The extra fields may carry information useful to the application. 

In short, the information required by a specific application is not included in the generic RTP header; instead, it is defined through RTP profiles and payload formats. For each class of application, RTP defines a profile and one or more associated payload formats. 

A profile defines the codecs used to encode the payload data and their mapping to payload type codes in the payload type (PT) field of the RTP header. Each profile is accompanied by several payload format specifications, each of which describes the transport of a particular encoding. The audio payload formats include G.711, G.723, G.726, GSM, QCELP, MP3 and DTMF, and the video payload formats include H.261, H.263, H.264 and MPEG-4. 

Some of the examples for RTP profiles are below

- The RTP profile for audio and video conferences with minimal control (RFC 3551) defines a set of static payload type assignments and a mechanism for mapping between a payload format and a payload type identifier (in the header) using SDP. 

- The Secure Real-time Transport Protocol (SRTP) defines a profile of RTP that provides cryptographic services for the transfer of payload data. 

- The experimental Control Data Profile for machine to machine communications. 
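For illustration, a hypothetical SDP fragment binding payload types to codecs might look like the below. Payload type 0 is statically assigned to PCMU by RFC 3551, while 96 is a dynamic assignment declared in the offer itself:

```
m=audio 49170 RTP/AVP 0 96
a=rtpmap:0 PCMU/8000
a=rtpmap:96 opus/48000/2
```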


Below is a block diagram representation of RTP header with a profile 



References:
http://www.siptutorial.net/RTP/applvlframing.html

Synchronization in RTP

References:
http://www.siptutorial.net/RTP/synchro.html

Mixers and Translators in RTP

Mixer In RTP 

It may happen that not all participants in a conference have connections of the same bandwidth. The question, then, is how they all participate in the conference. 

One solution could be for all of them to use the same lower bandwidth. But this forces a reduced-quality audio encoding on everyone. 

A smarter solution is an RTP-level relay called a mixer. A mixer can be placed near a lower-bandwidth area. It resynchronises incoming audio packets to reconstruct the constant 20 ms spacing generated by the sender, mixes the reconstructed audio streams into a single stream, translates the audio encoding to a lower-bandwidth one, and forwards the lower-bandwidth packet stream across the low-speed link. The following gives a graphical representation. 



The mixer puts its own identification as the source of the packet (SSRC) and puts contributing sources in the CSRC fields.
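A rough sketch of that header bookkeeping follows. The function and field names here are illustrative only, not from any RTP library:

```python
def mixed_header_fields(mixer_ssrc: int, contributing_ssrcs: list[int]) -> dict:
    """Build the source-identification fields of a mixed RTP packet.

    The mixer becomes the SSRC; the original senders move to the CSRC
    list, which RFC 3550 caps at 15 entries (the CC field is 4 bits).
    """
    csrc = contributing_ssrcs[:15]  # only 15 contributors can be identified
    return {"ssrc": mixer_ssrc, "cc": len(csrc), "csrc": csrc}

fields = mixed_header_fields(0xDEADBEEF, [0x1111, 0x2222, 0x3333])
```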


Mixers have other applications too, e.g. a video mixer that scales the images of individual people in separate video streams and composites them into one video stream to simulate a group scene. 

Translators in RTP 

A problem occurs if one or more participants of a conference are behind a firewall that won't allow an IP packet containing an RTP message to pass. For this situation, translators are used. 

Two translators are installed, one on either side of the firewall, with the outside one funnelling all multicast packets received through a secure connection to the translator inside the firewall. The translator inside the firewall sends them again as multicast packets to a multicast group restricted to the site's internal network. The illustration is given below. 




Unlike mixers, translators do not change the SSRC or CSRC. 

References:
http://www.siptutorial.net/RTP/mixer.html

RTP packetization - smaller or bigger packets?

References:
http://200ok.info/2008/02/15/voip-rtp-packetization-interval-maybe-its-time-for-a-smaller-ptime/

Wireshark and SSL

References:
http://wiki.wireshark.org/SSL

Creating a Google Pie Chart

As part of an exercise, I had to quickly build a chart UI indicating the number of JIRA issues assigned at each priority level. This view is most naturally shown via a pie chart. Core Plot's APIs were an option, but something much simpler was needed. The search led to the Google pie chart. 

Using the playground, the chart could be visualised before integrating it into the application. 

The base URL for creating the chart is http://chart.googleapis.com/chart? 

The type of the chart is determined by the "cht" parameter.  

As part of the experiment, below are the URLs tried. 

http://chart.googleapis.com/chart?cht=p3&chts=Mychart&chs=600x400&chl=January|February|March|April => Did not work 
http://chart.googleapis.com/chart?cht=p3&chs=600x400&chd=t:January,February,March,April => Did not work 
http://chart.googleapis.com/chart?cht=p3&chs=250x150&chd=t:10,20,30,40,50 => Worked, 
http://chart.googleapis.com/chart?cht=p3&chs=270x150&chd=t:10,20,30,40,50&chl=P1|P2|P3|P4 => Worked

Basically, "chd" specifies the chart data, which has to be numeric so that the chart engine can slice the chart accordingly. 
"chs" specifies the chart size in pixels as widthxheight. (The "chts" parameter tried above styles the chart title, which itself is set with "chtt"; that is why nothing extra appeared in the chart.) 
"chl" specifies the labels of the segments. 

The chart URL returns a PNG image, which can be displayed in a native application such as Android or iOS, or within an HTML page. 
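The parameters above can be combined programmatically. Below is a small sketch; the helper name is made up for illustration:

```python
BASE = "http://chart.googleapis.com/chart"

def pie_chart_url(values, labels, size="270x150", chart_type="p3"):
    """Build a Google Image Charts pie-chart URL.

    chd must be numeric ("t:" means the simple text data format);
    chl carries one label per slice, separated by '|'.
    """
    chd = "t:" + ",".join(str(v) for v in values)
    chl = "|".join(labels)
    return f"{BASE}?cht={chart_type}&chs={size}&chd={chd}&chl={chl}"

url = pie_chart_url([10, 20, 30, 40], ["P1", "P2", "P3", "P4"])
```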

Below is a sample generated using the URL http://chart.googleapis.com/chart?cht=p3&chs=270x150&chd=t:10,20&chl=P1|P2&chco=FF0010,56FF00



References: 

RTP Basics - Part II - RTP Packet structure


The real-time media data being transmitted forms the RTP payload. The RTP header contains information related to the payload, e.g. the source, the encoding type, etc. The header structure is detailed below. 



Since the RTP packet cannot be transmitted as is, it needs to be placed in a UDP packet, and the UDP packet in turn needs to be sent inside an IP packet. In addition, to transfer the IP packet over the wire, it is placed inside lower-level frames. 

Below is the detailed view of the RTP header. 




The fields are mainly 

version (V): 2 bits 
This field identifies the version of RTP. The value defined by RFC 1889 (and RFC 3550) is 2.

padding (P): 1 bit 
If the padding bit is set, the packet contains one or more padding octets at the end which are not part of the payload. The last octet of the padding contains a count of how many padding octets should be ignored. Padding may be needed by some encryption algorithms with fixed block sizes, or for carrying several RTP packets in a lower-layer protocol data unit. 

extension (X): 1 bit 
If the extension bit is set, the fixed header is followed by exactly one header extension. 

CSRC count (CC) : 4 bits 
The CSRC count contains the number of CSRC identifiers that follow the fixed header. 

marker (M): 1 bit 
The interpretation of the marker bit is defined by the profile; it lets an application mark significant events, such as frame boundaries, in the packet stream. This ties in with application-level framing. 

payload type (PT): 7 bits 
This field identifies the format (e.g. encoding) of the RTP payload and determines its interpretation by the application. This field is not intended for multiplexing separate media. 

sequence number: 16 bits 
The sequence number increases by 1 for each RTP data packet sent and may be used by the receiver to detect packet loss and to restore the packet sequence. The initial value of the sequence is random (unpredictable). 

timestamp: 32 bits
The timestamp reflects the sampling instant of the first octet in the RTP data packet. The sampling instant must be derived from a clock that increments monotonically and linearly in time to allow synchronisation and jitter calculations. 

SSRC: 32 bits 
The SSRC field identifies the synchronisation source. This identifier is chosen randomly, with the intent that no two synchronisation sources within the same RTP session have the same SSRC identifier. 

CSRC list: 0 to 15 items, 32 bits each
The CSRC list identifies the contributing sources for the payload contained in this packet. The number of identifiers is given in the CC field. If there are more than 15 contributors, only 15 may be identified. CSRC identifiers are inserted by mixers, using the SSRC identifiers of contributing sources. 
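The layout above can be unpacked with a few lines of code. Below is a minimal sketch of parsing the fixed header plus the CSRC list (illustrative only, with no production-grade validation):

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed RTP header (RFC 3550 section 5.1) and its CSRC list."""
    first, second, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    cc = first & 0x0F
    csrc = list(struct.unpack(f"!{cc}I", packet[12:12 + 4 * cc])) if cc else []
    return {
        "version": first >> 6,            # 2 bits
        "padding": bool(first & 0x20),    # 1 bit
        "extension": bool(first & 0x10),  # 1 bit
        "cc": cc,                         # 4 bits
        "marker": bool(second & 0x80),    # 1 bit
        "payload_type": second & 0x7F,    # 7 bits
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
        "csrc": csrc,
    }

# V=2, no padding/extension, CC=0, M=0, PT=0, seq=1, ts=160, ssrc=0x12345678
pkt = bytes([0x80, 0x00]) + struct.pack("!HII", 1, 160, 0x12345678)
hdr = parse_rtp_header(pkt)
```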


References: 
http://www.siptutorial.net/RTP/header.html

RTP basics - Part 1


RTP provides end-to-end delivery services for data with real-time characteristics, such as interactive audio and video. It is primarily designed to support multi-party multimedia conferences. RTP was first standardised in RFC 1889; the more recent versions are RFC 3550 and RFC 3551. 

Now, what is meant by real time? Can the transmission be real time considering the latencies introduced by network components such as routers and switches? "Real time" here just means that delivery is fast, not that it is guaranteed. In short, RTP provides a suitable mechanism for transferring real-time content, such as the voice in a conference. 

Another important fact about RTP is that it doesn't ensure timely delivery; i.e., RTP doesn't provide any mechanism to ensure timely delivery or other quality-of-service guarantees. It relies on lower-layer services (it typically runs over UDP) to do so. 

RTP is in fact a combination of two parts: 

RTP  - the Real-time Transport Protocol, which carries the data.
RTCP - the RTP Control Protocol, which monitors the quality of service and conveys information about the participants. 

There are multiple use cases for RTP.

Simple Multicast Audio conference
Initially, the owner of the conference obtains, through some allocation mechanism, a multicast group address and a pair of ports. One port is used for RTP data and the other for RTCP control packets. The address and port information is distributed to the intended participants. If privacy is desired, the data and control packets may be encrypted, in which case an encryption key is generated and distributed. 

Each participant sends its audio data in small chunks, say 20 ms. Each instance of the conference also periodically multicasts a reception report plus the name of its user on the RTCP port. This helps monitor the quality of transmission and also determines who the present participants are. 

Audio and Video Conference
If both audio and video are present in the conference, they are transmitted as separate RTP sessions; RTP and RTCP packets for each medium use two different UDP port pairs. The canonical name, or CNAME, of each participant is used to match the audio and video sessions of an individual participant. 

The sessions are separated so that a client may choose one of them without having to use both; e.g., we can choose only audio if we don't want to see the video. 

Mixers in RTP 

So far, we have assumed that all sites want to receive the media data in the same format. However, this is not always appropriate. Users having connections of different bandwidths, or those working behind a firewall that doesn't allow IP packets to pass, will need some extra processing. This is done in the form of mixers and translators. 

Reference: 
http://www.siptutorial.net/RTP/application.html

Wednesday, July 30, 2014

WebRTC an overview

WebRTC is a free, open project that enables web browsers with real-time communication capabilities via simple JavaScript APIs. The WebRTC components have been optimised to best serve this purpose.

The WebRTC initiative is a project supported by Google, Mozilla and Opera. The webpage given in the references was initiated by the Google Chrome team.

Below are a few terms we should be familiar with on the WebRTC front.



Web App
This is the component which is a web application enabled with audio and video capabilities powered by the web APIs for real time communication.

Web API
This is the set of APIs used by web developers to make real-time chat applications.

WebRTC native C++ API
This is the API layer which enables browser makers to easily implement the Web API proposal.

Transport / Session
The session components are built by re-using components from libjingle, without using or requiring the XMPP/Jingle protocol.

RTP Stack
A network stack for RTP, the Real-time Transport Protocol.

STUN/ICE
A component which allows calls to make use of STUN or ICE mechanisms to establish connection across various types of networks.

Session Management
An abstract session layer allowing call setup and management. This leaves the protocol implementation decision to the application developer.

Voice Engine
VoiceEngine is the framework for the audio media chain, from the sound card to the networks.

iSAC, iLBC, Opus
iSAC: A wideband and super-wideband audio codec for VoIP and streaming audio. iSAC uses a 16 kHz or 32 kHz sampling frequency with an adaptive and variable bit rate of 12 to 52 kbps.

iLBC: A narrowband speech codec for VoIP and streaming audio. It uses an 8 kHz sampling frequency with a bit rate of 15.2 kbps for 20 ms frames and 13.33 kbps for 30 ms frames.

Opus: Supports constant and variable bitrate encoding from 6 kbit/s to 510 kbit/s, frame sizes from 2.5 ms to 60 ms, and various sampling rates from 8 kHz (with 4 kHz bandwidth) to 48 kHz (with 20 kHz bandwidth, where the entire hearing range of the human auditory system can be reproduced).

NetEQ for voice
A dynamic jitter buffer and error concealment algorithm used for concealing the negative effects of network jitter and packet loss. This keeps latency as low as possible while ensuring high audio quality.

Acoustic Echo Canceller (AEC)
AEC is a software-based signal-processing component that removes, in real time, the echo that results from the played-out audio coming back into the active microphone.

Noise Reduction (NR)
The NR component is a software-based signal-processing component that removes certain types of background noise usually associated with VoIP (hiss, fan noise, etc.).

Video Engine
VideoEngine is a framework for the video media chain, from camera to network and from network to screen.

VP8
Video codec from the WebM project. Well suited for RTC as it is designed for low latency.

Video Jitter buffer
Dynamic jitter buffer for video. Helps conceal the effects of jitter and packet loss on overall video quality.

Image enhancements
This component removes the video noise from the image captured by the webcam.

References:

Thursday, July 24, 2014

The Hopper Dis-assembler


The Hopper disassembler can be found at http://hopperapp.com/download.html 

Hopper is a tool that assists developers in static analysis of binary files.
The demo version is quite good for some initial investigation of a binary. 

The idea of Hopper is that it accepts a set of bytes and converts it into something readable by humans.

There are various types that can be used in Hopper. They are below: 

data: an area is set to the data type when Hopper thinks it represents a constant, like an array of ints for instance. 
ASCII: a NULL-terminated C string. 
code: an instruction. 
procedure: a byte receives this type once it has been determined that it is part of a method that has been successfully reconstructed by Hopper. 
undefined: an area that has not been explored by Hopper. 

As soon as an executable is loaded, one can manually change the type by using either the keyboard or the toolbar at the top of the window. 

D | A | C | P | U

Navigating through the file
An executable is split up into smaller pieces of data called segments and sections. 

When the OS loads an executable, some parts of it get loaded into system memory. Each contiguous piece of the file mapped into memory is called a segment. These segments are split into smaller parts called sections, which receive various access properties. 

Hopper allows the user to name an address so that a piece of code can be identified using a label within the binary file. 

The tool provides a navigation bar which uses a colour scheme: blue for code, yellow for procedures, green for ASCII strings, purple for data, grey for undefined. 

There is an inspector which shows the below main components: 

1. Instruction Encoding: Displays the bytes of the current instruction. If the current processor has multiple CPU modes, the user will see a popup menu which lets them change the CPU mode at the current address; e.g. ARM and Thumb. 

2. Format: Used to change the display format of the operand of an instruction. 

3. Comment: Allows the user to add a comment at a given address. 

4. Colors and Tags: Allows the user to associate tags with addresses, blocks of a procedure, or a procedure itself. 

5. References: This is a very important component. It shows all the references one instruction can have to other instructions or to a piece of data. The user can even add their own reference if Hopper's analysis did not add one. 

6. Procedure: Contains information on the current procedure. 

References:

Tuesday, July 22, 2014

iOS Core Plot Library

The application needs to create an instance of CPTGraphHostingView, which hosts the graph view. This class has a hostedGraph property, an instance of CPTGraph, which is a generic interface. In this case we can use a CPTXYGraph instance, which suits a bar-chart style of graph. 

    // barGraph is the CPTGraphHostingView instance created beforehand
    CPTXYGraph *barChart = [[CPTXYGraph alloc] initWithFrame:CGRectZero];
    barGraph.hostedGraph       = barChart;
    barGraph.allowPinchScaling = NO;

    barChart.paddingLeft  = 35.0;
    barChart.paddingTop   = 20.0;
    barChart.paddingRight = 20.0;

We can also apply a theme for the graph using the below statements 

 CPTTheme *theme = [CPTTheme themeNamed:kCPTPlainBlackTheme];
    [barChart applyTheme:theme];
    
    barChart.plotAreaFrame.masksToBorder = NO;
    barChart.plotAreaFrame.borderLineStyle = nil;

The Axis style can be set like below     

CPTMutableLineStyle *majorGridLineStyle = [CPTMutableLineStyle lineStyle];
    majorGridLineStyle.lineWidth = 0.1;
    majorGridLineStyle.lineColor = [[CPTColor whiteColor] colorWithAlphaComponent:0.75];


We can set the axis labels like below: 

CPTAxisLabel *newLabel = [[CPTAxisLabel alloc] initWithText:[months objectAtIndex:labelLocation++] textStyle:x.labelTextStyle];
        newLabel.tickLocation = [tickLocation decimalValue];
        newLabel.offset       = x.labelOffset + x.majorTickLength;
        [customLabels addObject:newLabel];

 x.axisLabels = [NSSet setWithArray:customLabels];

Now we can draw each of the bar like in the below code

CPTBarPlot *barPlot = [[CPTBarPlot alloc] init];
    barPlot.fill = [CPTFill fillWithColor:[CPTColor colorWithComponentRed:87/255.0 green:142/255.0 blue:242/255.0 alpha:1.0]];
    barPlot.dataSource      = self;
    barPlot.barCornerRadius = 2.0f;
    barPlot.delegate        = self;
    barPlot.lineStyle = barLineStyle;
    barPlot.baseValue = CPTDecimalFromFloat(0.0f);

The application needs to implement the methods of CPTPlotDataSource so that the bar plot is supplied with data. 
The only mandatory method is 
-(NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot;

Other methods such as below are optional in the API
-(NSArray *)numbersForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndexRange:(NSRange)indexRange;
-(NSNumber *)numberForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)idx;
-(double *)doublesForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndexRange:(NSRange)indexRange;
-(double)doubleForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)idx;
-(CPTNumericData *)dataForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndexRange:(NSRange)indexRange;
-(CPTNumericData *)dataForPlot:(CPTPlot *)plot recordIndexRange:(NSRange)indexRange;
-(NSArray *)dataLabelsForPlot:(CPTPlot *)plot recordIndexRange:(NSRange)indexRange;

-(CPTLayer *)dataLabelForPlot:(CPTPlot *)plot recordIndex:(NSUInteger)idx;

references:
http://www.raywenderlich.com/13269/how-to-draw-graphs-with-core-plot-part-1

Sunday, July 20, 2014

Android : List view


Android has the ListView and ExpandableListView classes, capable of displaying a scrollable list of items. The ExpandableListView supports a two-level list whose groups can be expanded to show their children. 
Every line in the list view consists of a layout, and the application can choose the complexity of it. 

Adapters 

An Adapter manages the data model and adapts it to the individual rows in the list view. An adapter extends the BaseAdapter class, inflates the layout for each row in its getView method, and assigns data to the individual views in the row. 
The adapter is assigned to the ListView via the setAdapter method on the ListView object. 

The default adapters provided by the system are ArrayAdapter and CursorAdapter. 

A sample of ListView with ArrayAdapter 

- First of all, make the layout in the XML like below: 

<ListView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/listview"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content" />

By default, if the application provides a simple array adapter, each element in the adapter's array is taken as a row item and displayed in the list view. 

Applications can create a list-view row with their own layout. A sample is given below which shows an ImageView and a TextView in a row: 

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content">
    <ImageView
        android:id="@+id/imageview"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
    <TextView
        android:id="@+id/label"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
</LinearLayout>

After this, in the code, the below can be done 

String[] values = new String[]{"Android", "iPhone", "WindowsMobile", "Blackberry", "WebOS"};
ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, R.layout.rowlayout, R.id.label, values);
setListAdapter(adapter);

The application can create its own adapter as well. The sample code below extends a simple adapter such as ArrayAdapter and overrides the getView method: 

public class MySimpleCustomAdapter extends ArrayAdapter<String>
{
    private final Context context;
    private final String[] values;

    public MySimpleCustomAdapter(Context context, String[] values)
    {
        super(context, R.layout.rowlayout, values);
        this.context = context;
        this.values = values;
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent)
    {
        LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        View rowView = inflater.inflate(R.layout.rowlayout, parent, false);
        TextView tview = (TextView) rowView.findViewById(R.id.label);
        ImageView view = (ImageView) rowView.findViewById(R.id.imageview);
        tview.setText(values[position]);
        view.setImageResource(R.drawable.mycustomimage);
        return rowView;
    }
}

references: 

Saturday, July 19, 2014

JIRA Integration via REST APIs

The main step in JIRA integration is authentication. JIRA provides mainly three methods for authentication: 

1. Simple mechanism 

In this method, the application passes the username and password as plain text to the network layer. Depending on whether HTTP or HTTPS is used, the data is sent to the server unencrypted or encrypted. 

Below is a sample curl command that demonstrates this (the URL is quoted so the shell doesn't interpret the & characters): 

curl -v -u myusername:mypassword "https://examplejira.atlassian.net/rest/api/latest/search?jql=project=TWCIOS&startAt=0&maxResults=200"

2. Supplying Basic auth headers 

In this mechanism, the application passes the Authorization header to the network layer. The Authorization header is constructed by Base64-encoding the username:password combination. 

For example: 

curl -D- -X GET -H "Authorization: Basic VHlwZSAob3IgcGFzdGUpIGhlcmUuLi4=" -H "Content-Type: application/json" "http://kelpie9:8081/rest/api/2/issue/QA-31"

where VHlwZSAob3IgcGFzdGUpIGhlcmUuLi4= is the Base64-encoded value of myusername:mypassword 
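The header construction can be reproduced in a few lines. A small sketch, using the placeholder credentials from the example above:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value by hand."""
    token = base64.b64encode(f"{username}:{password}".encode("ascii")).decode("ascii")
    return f"Basic {token}"

header = basic_auth_header("myusername", "mypassword")
```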

3. OAuth-based authentication 

For OAuth-based authentication, the basic OAuth terminology needs to be kept in mind: consumer, service provider, request token, and access token. 

Step 1: 

The first step is to register a new consumer in JIRA. This is done through the application links administration screens in JIRA. When creating the application link, we can specify a URL, which can be a placeholder URL or the correct URL of the client. If the client can be reached via an HTTP URL, select the Generic Application type. After the application link has been created, edit the configuration, go to the incoming authentication configuration screen, and select OAuth. Enter the public key and the consumer key which the client will use when making requests to JIRA. 

After these configurations are done, press OK to ensure the authentication is enabled. 

Step 2: 

This step is about configuring the client. 
Client will require the following information to make authentication request in JIRA. 

request token URL: JIRA_BASE_URL + "/plugins/servlet/oauth/request-token"
authorization URL: JIRA_BASE_URL + "/plugins/servlet/oauth/authorize"
access token URL: JIRA_BASE_URL + "/plugins/servlet/oauth/access-token"
OAuth signature method: RSA-SHA1
consumer key: the key that was configured in step 1 

In short, the above information is used for the below: 

1. Obtain a request token 
2. Authorize the request token 
3. Swap the request token with access token 

Step 3: 
Now, having the access token, the application can make requests to the specific JIRA REST APIs. 


References: 

https://developer.atlassian.com/display/JIRADEV/JIRA+REST+API+Example+-+OAuth+authentication
https://developer.atlassian.com/display/JIRADEV/JIRA+REST+API+Example+-+Basic+Authentication#JIRARESTAPIExample-BasicAuthentication-Authenticationchallenge

Sunday, July 13, 2014

Android : Compound Controls

If the application's requirement is just to group certain already-existing components together, this is also possible in Android and is categorised as compound controls. 
There are already some components in the system framework which do this, e.g. Spinner and AutoCompleteTextView. 

Below are the steps to create a CompoundControl 

1. The usual starting point is a layout of some kind, so create a class that extends a Layout, e.g. LinearLayout. Layouts can be nested to make complex compound components. Note that, just like with an Activity, you can use either the declarative (XML-based) approach to creating the contained components, or the application can nest them programmatically in code. 

2. In the constructor of the new class, take whatever parameters the superclass expects and pass them through to the super constructor first. After this, the compound component can set up its other components, whether readily available or custom. Note that the application might also introduce its own attributes and parameters into the XML that can be pulled out and used by the new compound control's constructor. 

3. Compound controls can also have their own listeners. 

4. Compound controls can expose any new properties and methods deemed necessary for the functionality and usefulness of the component. 

5. In case the application is extending a layout, it doesn't need to override the onDraw() or onMeasure() methods, since the layout will have default behavior that will likely work just fine. However, the application can still override them if it would like to. 

6. The application can override other on… methods, such as onKeyDown(), if required. 

There is a Compound component example given in the 

With this in mind, let's create a CardView similar to a single card in the Pinterest app. The card can have an ImageView that contains the main image, a label below it, a separator image, then a round-shaped image for the profile picture and a label for the description. The samples are List4.java and List6.java. 

The List6.java class creates a SpeachView class as a compound component. This compound component is created programmatically and holds two TextView components. 

The class is declared as extending LinearLayout. Other methods such as onMeasure and onLayout are not overridden in this class, which means that it lets the system lay out the components within it. 

References: 


Android Custom View - A Look at LabelView class


The LabelView class demonstrates a custom view which draws the given text. This example doesn't load anything from a layout XML file; instead it paints the text itself. This is done by overriding the onDraw method like below: 

@Override
protected void onDraw(Canvas canvas)
{
    super.onDraw(canvas);
    canvas.drawText(mText, getPaddingLeft(), getPaddingTop() - mAscent, mTextPaint);
}

The other important function is onMeasure. This function is important to let the parent know how much space this component requires. 

The implementation is like below 

@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec)
{
    setMeasuredDimension(measureWidth(widthMeasureSpec), measureHeight(heightMeasureSpec));
}

The widthMeasureSpec and heightMeasureSpec parameters are int values, each packing multiple measure-spec attributes (specMode and specSize), which can be extracted using the MeasureSpec class. 

A Typical implementation is like below 

private int measureWidth (int measureSpec)
{
int result = 0; 
int specMode = MeasureSpec.getMode(measureSpec);
int specSize = MeasureSpec.getSize(measureSpec);
if(specMode == MeasureSpec.EXACTLY)
{
//We were told how big it would be 
result  = specSize;
}
else 
{
result = (int) mTextPaint.measureText(mText) + getPaddingLeft() + getPaddingRight(); 
if(specMode == MeasureSpec.AT_MOST)
{
result  = Math.min(result, specSize);
}
}
return result; 
}

I decided to take this and lay it out in the custom layout engine I created. Below are a few observations from this. 
As usual, I came across the issue where the styleable is not present in my workspace; a workaround did solve the problem. 

In onMeasure, the individual views were passed a MeasureSpec value with specMode UNSPECIFIED. Based on the passed-in value, the LabelView class computed the measurable width and height. But since layout was called with absolute values, the view was still laid out the way the CardLayout wanted. 

However, I tried to use the measured values returned by the LabelView, which were around 57 and 17, and these were not sufficient to display the view. It did work with 20px added to the measured values. 

References: 

Saturday, July 12, 2014

Android Custom Components


Creating your own View subclasses gives an application precise control over the appearance and function of a screen element. Below are examples of the usefulness of creating custom views 

- An application can create a completely custom-rendered view type, for e.g. a volume control knob rendered using 2D graphics, which resembles an analog electronic control. 
- An application could combine a group of view components into a single new component, perhaps to make something like a combo box (a combination of popup list and free-entry text field), a dual-pane selector control (a left and right pane with a list in each, where the application can re-assign which item is in which list), and so on
- An application could override the way that an EditText component is rendered on the screen 
- An application could capture other events like key presses and handle them in some custom way (such as for a game)

The basic approach to do this is like given below

1. Extend an existing View class or subclass with your own class
2. Override some of the methods from the superclass. The superclass methods to override start with "on", for e.g. onDraw, onMeasure, onKeyDown
3. Once the above is completed, the new custom view class can be used in place of a regular view

Fully customised components can be created like below. 
A good example could be a sing-along text view where a bouncing ball moves along the words so the user can sing along with a karaoke machine. 

1. Extend from View. This is the most generic component from which a view can be derived. 
2. Supply a constructor which can take attributes and parameters from the XML
3. Probably create your own event listeners, property accessors and modifiers, and possibly more sophisticated behavior
4. Most certainly override onMeasure() and likely override onDraw(). The default onDraw() does nothing, and the default onMeasure() always sets a size of 100x100 - the application may require more than this. 
5. Other on... methods are overridden as required. 

The onDraw method delivers the application a Canvas upon which it can draw anything wanted: 2D graphics, standard or custom components, styled text, etc. But it can't render 3D graphics; if 3D graphics are wanted, the class needs to extend SurfaceView instead of View and draw from a separate thread. 

onMeasure is a little more involved. It is a critical contract between the application component and its container. 

Below is the logic that goes into the onMeasure method 

- The overridden onMeasure method is called with width and height measure specifications (widthMeasureSpec and heightMeasureSpec), both representing the dimensions. These should be treated as requirements for the restrictions on the width and height measurements that the component should produce. 
- The new custom component's onMeasure method should calculate the measured width and height required to render the component. It should try to stay within the specifications passed in, although it can choose to exceed them (in this case, the parent can choose what to do, such as clipping, scrolling, throwing an exception, or asking onMeasure to try again, perhaps with different measurement specifications). 
- Once the width and height are calculated, the setMeasuredDimension(int width, int height) method must be called with the calculated measurements. Failure to do this will result in an exception being thrown. 
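The clamping logic in the contract above can be sketched in plain Java. The mode constants below are simplified stand-ins for the MeasureSpec values, not the Android API itself:

```java
// Sketch of the measure contract: clamp a desired size against the parent's
// spec. Mode constants here are simplified stand-ins for MeasureSpec values.
class MeasureContract {
    static final int UNSPECIFIED = 0, EXACTLY = 1, AT_MOST = 2;

    static int resolveSize(int desired, int specMode, int specSize) {
        switch (specMode) {
            case EXACTLY:
                return specSize;                    // parent dictates the size
            case AT_MOST:
                return Math.min(desired, specSize); // cannot exceed parent's limit
            default:
                return desired;                     // UNSPECIFIED: take what we want
        }
    }

    public static void main(String[] args) {
        System.out.println(resolveSize(150, AT_MOST, 100));     // 100
        System.out.println(resolveSize(150, EXACTLY, 100));     // 100
        System.out.println(resolveSize(150, UNSPECIFIED, 100)); // 150
    }
}
```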

References: 

Android - Saving Data

The principal file storage options in Android are the ones below 

- Saving key-value pairs of simple data types in a preference file.
- Saving arbitrary files in the Android file system. 
- Using a database managed by SQLite 

Key-value pair saving can be done via the SharedPreferences API. Internally, the system keeps a file to store the key-value pairs. This preference file can be made private or shared. 

Below is a code snippet to invoke the SharedPreferences call from a Fragment. The context of a Fragment is its Activity 

Context context = getActivity();
SharedPreferences preferences = context.getSharedPreferences(getString(R.string.preference_file_key), Context.MODE_PRIVATE);

While naming the preference file, it is better to use a reverse DNS name, for e.g. com.example.PREFERENCE_FILE_KEY 

Alternatively, if the application needs only one preference file, then getPreferences can be called like below 

SharedPreferences sharedPref = getActivity().getPreferences(Context.MODE_PRIVATE);

If the application specifies the preference file access mode as MODE_WORLD_READABLE or MODE_WORLD_WRITEABLE, it will be accessible by other applications if the preference file name is known to those apps. 

Below is the code to write to a SharedPreferences obtained via getActivity().getPreferences()

SharedPreferences preferences = getActivity().getPreferences(Context.MODE_PRIVATE);
SharedPreferences.Editor editor = preferences.edit();
editor.putInt("high_score", newHighScore);
editor.commit();

While reading a preference, we can give a default value that will be returned if the value for the key we are reading is not present. 
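As an illustration, here is a plain-Java stand-in for that lookup-with-default behaviour. The real Android call would be something like preferences.getInt("high_score", 0); the class below is only a sketch, not the SharedPreferences API:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java stand-in for SharedPreferences.getInt(key, defaultValue):
// return the stored value if present, otherwise the supplied default.
class PrefsDemo {
    private final Map<String, Integer> store = new HashMap<>();

    void putInt(String key, int value) {
        store.put(key, value);
    }

    int getInt(String key, int defaultValue) {
        Integer v = store.get(key);
        return v != null ? v : defaultValue;
    }

    public static void main(String[] args) {
        PrefsDemo prefs = new PrefsDemo();
        prefs.putInt("high_score", 120);
        System.out.println(prefs.getInt("high_score", 0));  // 120
        System.out.println(prefs.getInt("missing_key", -1)); // -1
    }
}
```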

Saving Files
There are two storage areas, "internal" and "external" storage, representing built-in memory and the SD card respectively. 

Internal storage is always accessible. Files stored here are accessible only by the application by default. When the user uninstalls the app, the stored files also get deleted. 
External storage is not always accessible because the user can unmount the card. By default the data stored here is world readable. When the app is uninstalled, the system will remove the app's files only if they were saved in the directory returned by getExternalFilesDir() 

If the application should be installable on external storage, that can be specified in the Android manifest file. 

If the application would like to write to external storage, a permission needs to be requested in the manifest file via android.permission.READ_EXTERNAL_STORAGE / WRITE_EXTERNAL_STORAGE. 

To get the application's root files directory, the getFilesDir API can be called. To get the cache directory, the getCacheDir API can be called. The cache directory is a temporary directory; when a memory constraint situation arises, the system will delete files from this directory without any warning. 

When trying to access external storage, the application should always check whether it is mounted. The getExternalStorageState API returns Environment.MEDIA_MOUNTED when the media is available; the application can check whether the media is read-only by comparing the value against MEDIA_MOUNTED_READ_ONLY 

The files can be stored in the external directory in two forms 

Public : Files are freely available to other applications; the public directory can be obtained by using the API Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES)
Private : Even though the files are accessible to the user, they don't make much sense to the user; the getExternalFilesDir API can be used for this. Both these APIs accept a parameter such as Environment.DIRECTORY_PICTURES so that the system categorizes the files properly. 

The getFreeSpace() or getTotalSpace() APIs can be called to know the free space and the total space. The recommendation is not to write anything if the storage is more than 90% full. Alternatively, the application can write without checking the size and, if it gets an IOException, handle it accordingly. 
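The 90%-full guideline can be expressed as a small helper. The method name and threshold handling below are illustrative, not an Android API:

```java
// Illustrative helper for the "don't write when storage is ~90% full"
// guideline. Name and threshold are assumptions, not an Android API.
class StorageCheck {
    static boolean nearlyFull(long freeBytes, long totalBytes) {
        // more than 90% used is the same as less than 10% free
        return freeBytes * 10 < totalBytes;
    }

    public static void main(String[] args) {
        System.out.println(nearlyFull(5, 100));  // true: only 5% free
        System.out.println(nearlyFull(20, 100)); // false: 20% free
    }
}
```

In real code the two arguments would come from getFreeSpace() and getTotalSpace() on the target directory's File object.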


When user uninstalls the App, Android system does the following 

- Delete all files in internal storage
- Delete all files on external storage saved via getExternalFilesDir() 

The application should delete files created with getCacheDir on a regular basis when they are no longer needed. 

Wednesday, July 9, 2014

Managing Activity Life cycle

Below diagram shows the Activity life cycle 



The main life cycle states are 

Resumed : In this state, the activity is in the foreground and the user can interact with it. (Also referred to as the running state.)
Paused : In this state, the activity is partially obscured by another activity. The other activity that is in the foreground is semi-transparent or does not cover the entire screen. The paused activity does not receive any user input and cannot execute any code. 
Stopped : In this state, the activity is completely hidden and not visible to the user; it is considered to be in the background. While stopped, the activity instance and all its state information such as member variables are retained, but it cannot execute any code. 

There are two other states, Created and Started, but these are transient states and the system moves quickly through them: once the system calls onCreate, it quickly calls onStart, followed immediately by onResume. 
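The quick Created -> Started -> Resumed transition can be illustrated with a plain-Java sketch that records callback order (the class below is a stand-in, not a real Activity):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for an Activity that records the order of lifecycle callbacks
// the system would invoke on a fresh launch.
class LifecycleDemo {
    final List<String> calls = new ArrayList<>();

    void onCreate() { calls.add("onCreate"); }
    void onStart()  { calls.add("onStart"); }
    void onResume() { calls.add("onResume"); }

    // The system runs these in quick succession when the activity launches.
    void launch() {
        onCreate();
        onStart();
        onResume();
    }

    public static void main(String[] args) {
        LifecycleDemo activity = new LifecycleDemo();
        activity.launch();
        System.out.println(activity.calls); // [onCreate, onStart, onResume]
    }
}
```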

An activity should be specified as the launcher activity. The system will use this activity as the starting point of the application. 


If the MAIN action or the LAUNCHER category is missing from the intent filter in the manifest XML, the application will not be listed in the home screen's list of apps 

When onCreate finishes execution, the system calls onStart and onResume in quick succession. The application never stays in the Created or Started states. Technically the activity becomes visible when the onStart call happens, but the onResume call comes immediately and it remains in the Resumed state. 

The very last callback is onDestroy. The system calls this method as a final signal that the activity instance is being completely removed from system memory. Most apps should do their cleanup operations in the onPause or onStop method of the activity. 

In normal cases, onDestroy comes after the onPause and onStop methods. However, if the activity calls finish() from the onCreate method with the intention of launching another activity, onDestroy will be called without these two life cycle methods being called. 

Pausing and Resuming Activity
During the life cycle of an activity, if a semi-transparent style of UI obstructs the activity, the system calls the onPause method. However, if the activity becomes completely invisible, it calls the onStop method. 

If the activity is resuming from the paused state, it will call onResume method. 

It is recommended that the activity do the below items in the onPause method 

- Stop animations or other ongoing actions that may consume CPU 
- Commit unsaved changes if required. 
- Release system resources such as broadcast receivers, handles to sensors (like GPS), or any resource that may affect battery life while the activity is paused and the user doesn't need them. 

For e.g. if the application uses the camera, this is the right method to release it 

Generally, onPause should not be used for any CPU-intensive operations; instead, they should be done in the onStop method. 

When the activity is resumed from the paused state, it calls the onResume method. 

Stopping and Restarting the Activity
Below are a few scenarios where the activity is stopped and restarted. 

- The user opens the recent apps list and switches to another application
- The user performs an action in your app that starts a new activity. The current activity will be stopped, and when the user presses back, it will be restarted. 
- The user receives a device interruption such as a phone call. 

When the Activity is getting restarted, onRestart method will be called and immediately the onStart method will be called. 

Similar to the paused state, the framework keeps the data in memory; the application is not required to save it. 

Even if the System destroys the activity while it is stopped, it still retains the state of the View objects in a Bundle (a blob of key value pairs)

When the app comes back from background to foreground, the activity's onRestart gets called, followed by onStart. 

Recreating An Activity 
When the activity is destroyed because the user pressed back or the activity finished itself, the system's concept of that activity is gone. However, if the system destroys an activity due to system constraints, then although the actual activity instance is gone, the system remembers that it existed, and if the user navigates back to it, the system creates a new instance of the activity using a set of saved data that describes the state of the activity when it was destroyed. The saved data that the system uses to restore the previous state is called the "instance state" and is a collection of key-value pairs stored in a Bundle object. 

It is very important to note that the activity gets killed every time the screen is rotated! This is because the screen configuration has changed and the system may need to load different layout files etc. 
In this case too, by default the system will keep the state in a Bundle object. But if the app has to keep track of other state, it has to have its own mechanism. 

Android uses the android:id attribute in the XML to keep track of each view for restoration purposes. 

The system will call the onSaveInstanceState method, where the application can add key-value pairs to the Bundle object. In the onCreate method, the application should check whether the savedInstanceState Bundle is null. If not null, the application can read from it and assign values to the instance variables. 

Alternatively, the application may choose the onRestoreInstanceState method to restore the state of the instance; onRestoreInstanceState is called back only if a saved Bundle exists. Also note that the application should always call super.onRestoreInstanceState and super.onSaveInstanceState so that the system can do its own store/restore operation on the instance state. 
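The save/restore round trip can be sketched with a plain map standing in for the Bundle (class and key names below are illustrative, not Android APIs):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the instance-state round trip: save values into a key-value
// store on destroy, read them back (with a null check) on re-create.
class InstanceStateDemo {
    int score;

    Map<String, Integer> saveInstanceState() {
        Map<String, Integer> state = new HashMap<>();
        state.put("score", score);   // like outState.putInt("score", score)
        return state;
    }

    void create(Map<String, Integer> savedInstanceState) {
        if (savedInstanceState != null) {
            score = savedInstanceState.get("score"); // restore previous state
        } else {
            score = 0;                               // fresh start
        }
    }

    public static void main(String[] args) {
        InstanceStateDemo first = new InstanceStateDemo();
        first.create(null);
        first.score = 42;
        Map<String, Integer> saved = first.saveInstanceState();

        InstanceStateDemo recreated = new InstanceStateDemo();
        recreated.create(saved);
        System.out.println(recreated.score); // 42
    }
}
```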

References: 

Monday, July 7, 2014

Android Training - Getting Started - Day 1

Few important items to note in the first application is 

AndroidManifest.xml contains the uses-sdk tag, which has the android:minSdkVersion and android:targetSdkVersion values. The former should be as low as possible to support a wide set of devices, and the latter should be as high as possible to target the latest set of devices. 

src/ contains the source, which includes the main Activity. The res/ directory contains the folders below 

/drawable => drawable elements such as bitmaps for the designated screen density 
/layout => contains the layout XML files that define the app user interface 
/values => contains collections of resources such as strings and colour values 

Running on real devices

Installing the device drivers: good documentation is available at http://developer.android.com/tools/extras/oem-usb.html
To enable development mode, the instructions are given at: http://developer.android.com/training/basics/firstapp/running-app.html
To give a short note on this: on devices running 3.2 or older, the debug option can be found under Settings -> Applications -> Development 
On 4.0 and newer, it is under Settings -> Developer options 
On 4.2 and newer, Developer options is hidden by default. To enable it, go to Settings -> About phone and tap Build number seven times; now return to the previous screen to find Developer options

To install on to the device, 
1. Change directory to the root of the Android project and execute 

ant debug 

adb install bin/MyFirstApp-debug.apk 

User Interface

User interface is a collection of Views and ViewGroups. Views are UI widgets such as buttons, text fields etc. ViewGroups are invisible elements by default which contain these views and define the layout for them. A sample layout is below. 

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">
</LinearLayout>

LinearLayout is a subclass of ViewGroup which lays out its child subviews in either vertical or horizontal orientation. 

To add a text field to this, the below needs to be added in the XML file 

<EditText
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:hint="@string/edit_message" />

The above references the edit_message property, and it comes from the strings resource file. The strings resource file can be modified like below so that the value is picked from this place. 

<string name="app_name">My First App</string>
<string name="edit_message">Enter a message</string>

We can define how much space a view occupies using the layout_weight property. The weight value is the amount of remaining space each view should consume, relative to the amount of space consumed by its sibling views. For e.g. if the application gives one view a weight of 2 and another a weight of 1, the sum is 3, so the first view will fill 2/3 of the space and the second will fill the rest. Now if we add a third view with a weight of 1, the first view will get 2/4, i.e. half of the remaining space, while the other two will get 1/4 of the space each. 
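The weight arithmetic above can be checked with a small helper (illustrative plain Java, not part of the Android API):

```java
// Computes the fraction of remaining space a view receives from its
// layout_weight, relative to the sum of all sibling weights.
class WeightDemo {
    static double fractionFor(int weight, int[] allWeights) {
        int sum = 0;
        for (int w : allWeights) sum += w;
        return (double) weight / sum;
    }

    public static void main(String[] args) {
        // Weights {2, 1}: the first view gets 2/3 of the remaining space.
        System.out.println(fractionFor(2, new int[]{2, 1}));
        // Weights {2, 1, 1}: the first view now gets half.
        System.out.println(fractionFor(2, new int[]{2, 1, 1})); // 0.5
    }
}
```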

The default weight of each view is 0, so if we don't specify any weight for a view, that widget occupies only its requested space, and the remaining space goes to the views that declare weights. 

With the above in hand, in order to improve layout efficiency, the application can set the width of the view to 0dp so that the layout engine doesn't calculate a width that is eventually discarded because it is unused and overridden by the weight property. 

In order to invoke another Activity, the application needs to build an Intent; the Intent class is in the package android.content. 
The code to pass the display message to the new activity can be something like below 

Intent intent = new Intent(this, DisplayMessageActivity.class);
intent.putExtra(EXTRA_MESSAGE, message);
startActivity(intent);

References: 
http://developer.android.com/training/basics/firstapp/starting-activity.html

Saturday, July 5, 2014

Android Handling Input Events - Learning Day 3

If an application inherits a View class and overrides methods like onTouchEvent(), it can intercept the events generated by the Android framework. But since this is tedious, another approach is that all of these views contain nested interfaces with callbacks. These interfaces are event listeners.

The common listener methods are: 

onClick() using View.OnClickListener
Called back when the item is clicked via touch or a corresponding action using a jog wheel, trackball, etc. 

onLongClick() using View.OnLongClickListener 
Called by the framework on long touch. 

onFocusChange() using View.OnFocusChangeListener 
Called back when a view gains or loses focus as the user navigates to or away from it. 

onKey() using View.OnKeyListener 
Called back when the user, with the item focused, presses or releases a hardware key. 

onTouch() using View.OnTouchListener 
Called when a touch event happens, such as a press, a release, or any movement gesture within the screen 

onCreateContextMenu() using View.OnCreateContextMenuListener 
Called when a context menu is being built. 

It is to be noted that some of the callbacks have a boolean return value, while others don't need a return type. For the methods with a boolean return type, returning true tells the framework that the listener has handled the event. 

When creating custom views, some common callback methods used for event handling are the below 

- onKeyDown(int , KeyEvent)
- onKeyUp
- onTrackballEvent 
- onTouchEvent
- onFocusChanged 

There are a few other methods that are also important, listed below. 

Activity.dispatchTouchEvent(MotionEvent) - This allows the Activity to intercept all touch events before they are dispatched to the window
ViewGroup.onInterceptTouchEvent(MotionEvent) - This allows a ViewGroup to watch events as they are dispatched to child views 
ViewParent.requestDisallowInterceptTouchEvent(boolean) - This allows a child to request that the parent not intercept events as above. 

There is a touch mode for views. For e.g. buttons are focusable in non-touch mode, i.e. when the user is interacting with a trackball or keys; in this mode, a button first needs to be focused before its action fires. But in touch mode, the widget action is fired directly instead of first reporting any focus events. 

Focusable components can be specified in the layout XML file, for example via the android:focusable attribute. 


 

references:
http://developer.android.com/guide/topics/ui/ui-events.html

Friday, July 4, 2014

Android Day 2 - Input Controls part 2

CheckBoxes

An application can create a checkbox using the CheckBox class. Like Button, the onClick attribute can specify the handler method. The code is something like below

public void onCheckboxClicked(View view)
{
    boolean checked = ((CheckBox) view).isChecked();
    switch (view.getId())
    {
        case R.id.checkbox_meat:
            if (checked)
            {
                // do something
            }
            break;
    }
}

RadioButton and RadioGroup can be used for creating a radio button implementation. RadioButtons need to be placed inside a RadioGroup. A sample is like below

<RadioGroup xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <RadioButton
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/pirates"
        android:onClick="onRadioButtonClicked"/>

    <RadioButton
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/ninjas"
        android:onClick="onRadioButtonClicked"/>

</RadioGroup>

The ToggleButton class can be used to create a toggle button, which can be used to switch between two states. Android 4.0 (API level 14) introduced another kind of toggle button called Switch. These two classes are subclasses of CompoundButton and function in the same manner. 

Programmatically, an application can add the listener as in the code below. 

ToggleButton toggle = (ToggleButton) findViewById(R.id.togglebutton);
toggle.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener()
{
    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked)
    {
    }
});

Spinners allow the user to select a value from a list of values. Populating values in a Spinner is similar to how it is done for a list, i.e. using an ArrayAdapter or a CursorAdapter.
A sample is like below.

Spinner spinner = (Spinner) findViewById(R.id.spinner);
ArrayAdapter<CharSequence> adapter = ArrayAdapter.createFromResource(this, R.array.planets_array, android.R.layout.simple_spinner_item);
adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
spinner.setAdapter(adapter);

Picker 
Android provides pickers as controls for picking a time or date as ready-to-use dialogs. Picking a date using these pickers ensures that the user selects the date/time info correctly, adjusted to the user's locale and formatted properly. Key classes are DatePickerDialog, TimePickerDialog and DialogFragment

Android recommends using the DialogFragment class to host the pickers. DialogFragment provides a way to display the picker in different layouts and configurations, for e.g. as a basic dialog on handsets or as an embedded part of the layout on large screens. 

DialogFragment was first added in Android 3.0 (API level 11); even older versions can have DialogFragment using the support library. 


Wednesday, July 2, 2014

Android - Day 1 - Android Input Controls Part 1

These are the interactive components. Below are some common controls 

Button :- A push-button that can be pressed or clicked by the user to perform an action. 
Text Field :- An editable TextView. An application can use the AutoCompleteTextView widget to create a text entry widget that provides auto-complete suggestions. 
Checkbox :- On/Off switch as usual.
RadioButton, ToggleButton, Spinner 
Pickers :- These display a dialog for users to select a single value from a set using up/down buttons or a swipe gesture. The DatePicker or TimePicker widget can be used for achieving this. 

Buttons can have text, an icon, or both. The application needs to use the Button or ImageButton class for this. With a Button, an icon image can be placed on the button using the android:drawableLeft property. 

There are two ways an application can attach a listener to a button. One is defining it in the layout XML file via the Button's android:onClick attribute; the Activity that hosts this Button view then needs a public void method accepting a View argument. 

Alternatively, the action can be defined programmatically using a View.OnClickListener object, for e.g. below 

Button button = (Button) findViewById(R.id.button_id);
button.setOnClickListener(new View.OnClickListener()
{
    public void onClick(View v)
    {
    }
});

The appearance of the button may vary across different devices since they come from different manufacturers. However, an application can set a theme for the entire application; for e.g. the Holo theme can be set using android:theme="@android:style/Theme.Holo" in the manifest's application element. The theme is not supported on older devices, so http://android-developers.blogspot.com/2012/01/holo-everywhere.html can help to do something similar for earlier versions of the OS. 

An application can make a button borderless by setting style="?android:attr/borderlessButtonStyle"

Application can also set a custom background. Below is what application needs to do for this 

1. Create three bitmaps for the button background that represent the default, pressed and focused button states. The application needs to create these as nine-patch images
2. Place the bitmaps in the /res/drawable directory. The convention is something like button_default.9.png, button_pressed.9.png, button_focused.9.png 
3. Create a new XML file in the /res/drawable directory (the name can be like button_custom.xml)

The below could be the content of this XML file (a standard state-list drawable matching the file names above) 

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@drawable/button_pressed" android:state_pressed="true"/>
    <item android:drawable="@drawable/button_focused" android:state_focused="true"/>
    <item android:drawable="@drawable/button_default"/>
</selector>
Text Fields
Applications can use the EditText class to display editable text. android:inputType specifies whether the input should be treated as an email id, text, textUri, number, or phone. The inputType field is a bit-ORed value, and other values include textCapSentences, textCapWords, textAutoCorrect, textPassword and textMultiLine. Applications can specify the subsequent action using the imeOptions value; the possible values are actionDone, actionSend, actionSearch, or suppress everything by using actionNone. In order to listen to the IME action events, the below code can be used 

EditText et = (EditText) findViewById(R.id.edit_text);
et.setOnEditorActionListener(new TextView.OnEditorActionListener()
{
    @Override
    public boolean onEditorAction(TextView v, int actionId, KeyEvent evt)
    {
        boolean handled = false;
        if (actionId == EditorInfo.IME_ACTION_SEND)
        {
            sendMessage();
            handled = true;
        }
        return handled;
    }
});

An application can also set a custom IME action label using the android:imeActionLabel property. In addition, many flags can be set using the android:imeOptions attribute. For e.g. in landscape mode, the text field may turn into a full-screen one, and this can be disabled using the flag "flagNoExtractUi"

If the application needs to provide an AutoCompleteTextView, the code can be something like below 

AutoCompleteTextView tv = (AutoCompleteTextView) findViewById(R.id.autocomplete_text);
String[] countries = getResources().getStringArray(R.array.countries_array);
ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, countries);

tv.setAdapter(adapter);

references: 
http://developer.android.com/guide/topics/ui/controls.html