Saturday, May 31, 2014

SIP Timers

http://www.cisco.com/c/en/us/td/docs/ios_xr_sw/iosxr_r3-4/sbc/configuration/guide/sbc_c34/sbc34str.pdf

https://community.acmepacket.com/t5/Interop-and-Standards/What-are-SIP-timers-and-how-are-they-used/td-p/135


iOS Networking Best Practices




http://docs.huihoo.com/apple/wwdc/2012/session_706__networking_best_practices.pdf

Hudson For Continuous Integration

Hudson is an open source project licensed under the MIT license. Broadly speaking, it is an application that monitors the execution of a recurring task, such as a script run under the operating system's task scheduler. Beyond that, it provides a web-based interface for configuring, executing and viewing software builds.

Hudson can be configured to start the build in many ways:

On demand: press the Build Now option in the Hudson console.
Fixed interval: Hudson can be configured to start the build at a set time, say 8:00 pm every day.
Source code management polling: Hudson can poll the SCM and, if any change is detected, trigger a build.

Below is the overall system diagram of Hudson.



References:
http://www.eclipse.org/hudson/documents/hudson_javaone_2013.pdf

Android Supporting various screen resolutions

Android categorises devices using two properties: screen size and screen density.

There are four generalized sizes: small, normal, large and extra large (xlarge).
There are four generalized screen densities: low (ldpi), medium (mdpi), high (hdpi) and extra high (xhdpi).

An application can provide different layout files so that the appropriate resources are picked up for each configuration. The naming convention is layout-<qualifier>;
for example, an extra large screen would use the layout directory res/layout-xlarge.

Android automatically scales the layout to fit the screen. Because of this, layouts for different screen sizes don't need to worry about the
absolute size of the UI elements and can instead focus on the layout structure that affects the user experience.

A project that includes a default layout and an alternate layout for large screens will have a resource folder structure like the one below:

res/layout/main.xml 
res/layout-large/main.xml

To provide a different layout XML for landscape orientation as well, the structure is:

res/layout/main.xml
res/layout-land/main.xml
res/layout-large/main.xml
res/layout-large-land/main.xml 

When creating the bitmap resources, we need to keep in mind the density of the pixels on screen.
Relative to the mdpi baseline, xhdpi is 2.0x, hdpi is 1.5x, mdpi is 1.0x and ldpi is 0.75x.
This means that an image that is 100x100 at mdpi needs to be 200x200 for xhdpi, 150x150 for hdpi and 75x75 for ldpi.

Once the images are prepared like this, they need to be placed under the resource folders in the following structure:

res/drawable-xhdpi/myimage.png
res/drawable-hdpi/myimage.png
res/drawable-mdpi/myimage.png
res/drawable-ldpi/myimage.png

The system selects the bitmap according to the screen density of the device it is running on.


As a note, ldpi image resources are not strictly necessary. If hdpi images are provided, the system scales them down by one half to fit ldpi screens.

Tuesday, May 27, 2014

iOS AudioStreamer implementation

There is an open source, freely licensed GitHub project that allows streaming files to be played.

It takes a streaming URL as input and plays the media from the given location. The main intention of this attempt is to control the player via the lock screen media controls, which are called the iPod controls.

Initially I thought of using AVPlayer, but that by itself does not solve the problem of getting events delivered to the application when the lock screen media controls are invoked.

https://github.com/DigitalDJ/AudioStreamer
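
The piece that makes the lock screen controls work is UIKit's remote control events API. Below is a minimal sketch, assuming a view controller that owns the streamer; the streamer property and its start/pause methods are placeholders for illustration, not necessarily the AudioStreamer API.

#import <UIKit/UIKit.h>

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    // Ask iOS to deliver lock screen / remote control events to this app.
    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
    [self becomeFirstResponder];
}

- (BOOL)canBecomeFirstResponder
{
    // Required so that UIKit routes remote control events to this controller.
    return YES;
}

- (void)remoteControlReceivedWithEvent:(UIEvent *)event
{
    // Called when the user taps the lock screen media controls.
    if (event.type == UIEventTypeRemoteControl) {
        switch (event.subtype) {
            case UIEventSubtypeRemoteControlPlay:
            case UIEventSubtypeRemoteControlTogglePlayPause:
                [self.streamer start];   // 'streamer' is a placeholder for the audio streamer object
                break;
            case UIEventSubtypeRemoteControlPause:
                [self.streamer pause];
                break;
            default:
                break;
        }
    }
}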

iOS Device Details - A useful website

I was trying to get the details of all iOS devices on the market. There are some wiki pages, but the site below looked amazing and gives almost all the useful information.




Since this search was part of finding Hotspot 2.0 compatible devices, http://hotspot2experience.com/ was also useful: it lists devices that are compatible with the HS 2.0 feature.

Android - Refreshing a bit of history

Android was unveiled on 5 November 2007.

Android 1.0 (API level 1)
This was the first commercial release, on 23 September 2008. The first commercial device was the HTC Dream.

Android 1.1 (API level 2)
This release came out in February 2009. The internal code name was Petit Four, but the name was never used publicly. This version contained bug fixes over 1.0 and a few minor feature additions.

Android 1.5 (Cupcake) (API Level 3)
This release came out in April 2009 and was based on an updated Linux kernel. It was the first release to carry a dessert-themed code name, a convention followed for subsequent releases.

This version included support for widgets and the Bluetooth A2DP profile as its major enhancements.

Android 1.6 (Donut) (API Level 4)
Released in September 2009.

This release contained features such as updated technology support for CDMA/EVDO and 802.1x.

Android 2.0 (Eclair) (API Level 5)
Released in October 2009.

This version had Bluetooth 2.1 support, an improved UI, live wallpapers and more.

Android 2.0.1 (Eclair) API level 6
This release came out in December 2009, with minor API changes, bug fixes and framework behavioural changes.

Android 2.1 (Eclair) API Level 7
Released in January 2010. This version had minor API amendments and bug fixes.

Android 2.2 (Froyo) API level 8
Froyo is short for Frozen Yogurt. This version added application speed enhancements through a JIT compiler, introduced the C2DM (Cloud to Device Messaging) feature, and added Wi-Fi hotspot functionality.
It also supports Bluetooth-enabled car and desk docks, and added Adobe Flash support.

Android 2.3 - 2.3.2 Gingerbread (API level 9)
This was the version on my first Android device. Released in December 2010, it included an updated user interface for simplicity and speed, native support for SIP and VoIP telephony, support for NFC, support for WebM/VP8 and AAC playback, and concurrent garbage collection for increased performance.

Android 2.3.3 to 2.3.7 (API Level 10)
This range has several improvements and bug fixes, including improved network performance for the Nexus S 4G and support for Google Wallet on the Nexus S 4G.

Android 3.0 HoneyComb (API Level 11)
This was released in February 2011 and was the first tablet-only update. The first device featuring it was the Motorola Xoom tablet. This version has simplified multitasking, support for multi-core processors, and an HTTPS stack improved with Server Name Indication (SNI).

Android 3.1 Honey Comb (API Level 12)
Released in May 2011. It contains UI enhancements, resizable home screen widgets, support for joysticks and gamepads, support for external keyboards and pointing devices, support for FLAC audio playback, and support for an HTTP proxy per Wi-Fi access point.

Android 3.2 Honey comb (API Level 13)
This covers versions 3.2 to 3.2.6. Released in July 2011. Most first and second generation Google TV devices use Honeycomb 3.2.

Android 4.0 - Android 4.0.2 Ice Cream Sandwich (API Level 14)
Released in October 2011. This is the last version that included Adobe's Flash player. This version has Android Beam, an NFC feature that allows rapid short-range exchange of bookmarks, contact info, directions, YouTube videos and other data.

It also includes the WebP image format, Wi-Fi Direct and the Android VPN Framework (AVF).

Android 4.0.3 - 4.0.4 Ice Cream Sandwich (API Level 15) 
Released in December 2011, this version includes new APIs for developers, including social stream APIs in the Contacts provider, along with smoother screen rotations and stability improvements.

Android 4.1 Jelly Bean (API Level 16)  
Released in July 2012; the Nexus 7 was the first device to run the Jelly Bean OS. This release focused on UI performance and smoothness (including triple buffering) under an effort called "Project Butter".

Android 4.2 Jelly Bean (API Level 17)
Released in October 2012, this version debuted on Google's Nexus 4 and Nexus 10, which were released in November 2012.
It supports wireless display via Miracast.


Android 4.3 Jelly Bean (API Level 18)
This was released in July 2013 with the slogan "an even sweeter Jelly Bean", during a "Breakfast with Sundar Pichai" event. The 2013 Nexus 7 tablet was the first device to officially ship with this OS. This version has OpenGL ES 3.0.


This version has support for system-level geofencing and Wi-Fi scanning APIs. It also included DRM APIs and bug fixes for the Nexus 7 running on LTE.

Android 4.4 Kitkat (API Level 19)
This version was released in October 2013.
Major features include NFC host card emulation, enabling the device to emulate smart cards. WebViews are based on the Chromium engine. This version has a built-in screen recording feature and built-in IR blaster support.

References:
http://en.wikipedia.org/wiki/Android_version_history

Friday, May 23, 2014

A short list of WLAN Jargons

Promiscuous Mode: In a network, promiscuous mode allows a network device to intercept and read each arriving network packet in its entirety. This mode of operation is sometimes used by a network snoop server that captures and saves all packets for analysis (e.g. for monitoring network usage).

When a data packet is transmitted in non-promiscuous mode, all the LAN devices "listen to" the data to determine whether the network address included in that packet is theirs. If it is not, the packet is passed on to the next LAN device until the device with the correct network address is reached. That device then receives and reads the data.

Promiscuous mode is something that needs to be supported by the network card. When the network card is in promiscuous mode, it accepts all packets, even if the destination MAC of the frame doesn't match its own MAC (broadcasts are accepted in any case). Without promiscuous mode, frames whose destination MAC is not one of the interface's own are ignored (apart from broadcasts, again).

An 802.11 LAN is based on a cellular architecture where the system is subdivided into cells. Each cell (called a Basic Service Set, or BSS, in 802.11 nomenclature) is controlled by a base station (called an Access Point, or AP).



Even though a wireless LAN can be formed by a single cell with a single Access Point, the normal structure is multiple cells whose Access Points are connected through another medium, a backbone called the Distribution System (DS). The backbone is typically a wired LAN, but could be a wireless LAN itself.

The whole interconnected network, including the multiple cells, their Access Points and the DS, is seen by the upper layers of the OSI model as a single network, and is called an Extended Service Set (ESS) in the standard.

The 802.11 standard also defines a concept called a "portal", which is a medium for interconnecting an 802.11 LAN with another 802 LAN. This is also sometimes called a translation bridge.


A very good link for knowing the inner details of IEEE 802.11:

Thursday, May 22, 2014

Analyzing IEEE 802.11 frames

IEEE 802.11 frames are only accessible at the very lowest layer of a packet capture. The capture needs to be done in monitor mode, and only certain Macs can do it; on Windows it is not really easy either.

The command below puts the network adapter into monitor mode. The tcpdump man page excerpt below lists the other options available for this command.

The important parameter for putting the adapter into monitor mode is -I.

I could not find a good way to exit monitor mode, but restarting the device did exit it. Putting the adapter into monitor mode causes normal network connectivity to be lost.

sudo tcpdump -i  en1 -I  -s 0 -B 524288 -w ~/Desktop/DumpFile01.pcap

TCPDUMP(1)                                                          TCPDUMP(1)

NAME
       tcpdump - dump traffic on a network

SYNOPSIS
       tcpdump [ -AbdDefgIkKlLnNOPpqRStuUvxX ] [ -B buffer_size ] [ -c count ]
               [ -C file_size ] [ -G rotate_seconds ] [ -F file ]
               [ -i interface ] [ -m module ] [ -M secret ]
               [ -r file ] [ -s snaplen ] [ -T type ] [ -w file ]
               [ -W filecount ]
               [ -E spi@ipaddr algo:secret,...  ]
               [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ]
               [ expression ]

DESCRIPTION
       Tcpdump prints out a description of the contents of packets on  a  net-
       work  interface  that match the boolean expression.  It can also be run
       with the -w flag, which causes it to save the packet data to a file for
       later analysis, and/or with the -r flag, which causes it to read from a
       saved packet file rather than to read packets from a network interface.
       In  all  cases, only packets that match expression will be processed by
       tcpdump.

Tuesday, May 20, 2014

Xcode Scheme, Target and Workspace

Xcode scheme 

Xcode Scheme defines a collection of targets to build, a configuration to use when building, and a collection of tests to execute. 

A user can have as many schemes as they want, but only one can be active at a time. One can specify whether a scheme is available only inside a project or in every workspace that includes that project.


Xcode Target

A target specifies a product to build and contains the instructions for building it from a set of files in a project or workspace. A target defines a single product, and it organises the inputs to the build system. The inputs to build the product take the form of build settings and build phases. A target inherits the project's build settings, but any of them can be overridden by specifying different settings at the target level. There can be only one active target at a time; the Xcode scheme specifies the active target.

A target and the product it creates can be related to another target. If a target requires the output of another target, it can declare a dependency on it. If the targets are in the same workspace, Xcode can discover the relationship by itself; this kind of dependency is called an implicit dependency.

Xcode workspace 

An Xcode workspace is a document that groups together different projects and other documents so that one can work on them together. A workspace can contain any number of projects and files, and it can also provide implicit and explicit relationships between projects and their targets.

The concept of a workspace was introduced only in Xcode 4.0 and later; before that, it was just a project file holding other projects and project-related files.

There are a few advantages to having a workspace.

- Since the workspace shares scope across all of the projects and files inside it, refactoring and code completion operate globally across all of the files.




Monday, May 19, 2014

Creating an App in iTunes Connect

To submit an app to the App Store, there are a few steps we need to complete via the iTunes Connect application.

The iTunes connect application can be accessed via the URL https://itunesconnect.apple.com/

First step:
Enter the basic information about the app, as below.


The SKU number is a unique ID for the app. It can be a number, a literal, or a combination of the two.

Second Step:
In this step, one needs to enter the availability date and related details, as in the image below.

If no territory filter is selected, the app will be available in all territories.


Next, one needs to enter the metadata for the app. The fields look like the below.

After this, one needs to enter the app reviewer contact information.

After this, enter the EULA text, if anything is to be shown to the user.
The next section is a bit more important. Below is the screenshot of the image requirements.


The next screen asks about Export Compliance, Content Rights and Advertising Identifier usage.


You are now ready to upload your binary using Application Loader. Application Loader can only be used when your app status is Waiting for Upload. Once the binary is uploaded, your app status will change first to Upload Received and then to Waiting for Review. If we encounter any issues with the binary itself, your app status will change to Invalid Binary and you will receive an email explaining the issues and the steps you can take to correct them.
If you have downloaded Xcode 5 (5A1413) or later, in Xcode, choose Xcode->Open Developer Tool->Application Loader. If you do not find it, download and install the latest version of Application Loader.


Sunday, May 18, 2014

WiFi Certified Passpoint Architecture

What problem does a Passpoint architecture solve?
The one-word answer is Wi-Fi roaming.

Cellular devices, when they can't find their home network, automatically identify and register with national and international roaming partners without the need for user intervention. Before Passpoint, Wi-Fi networks lacked this capability because there was no widely adopted protocol for it. Below are the cumbersome aspects of the pre-Passpoint era of Wi-Fi architecture.

Today's Wi-Fi access points have only one publicly accessible label, the SSID, so the SSID is used to indicate different network types. Most SSIDs reflect the organization operating the access point (e.g. "myhomewifi"), while others indicate the service provider (e.g. "Docomowifi"). If an organization needs to indicate that its Wi-Fi comes from a particular service provider, it would need to advertise two SSIDs. While it is possible to advertise multiple SSIDs, it is quite inefficient in airtime and cannot be extended very far.

With Passpoint, the information about the services and service providers that are reachable via a hotspot is separated from the SSID. A new protocol allows a mobile device to discover a comprehensive profile of the hotspot before it associates, so that it can quickly identify and prioritize hotspots suitable for its needs. With Passpoint, a mobile device can silently find the appropriate network, associate with it and get authenticated while the device is still in the user's pocket.

Passpoint does the following to enhance the hotspot experience:

- New info in the beacon and probe responses
- A new GAS/ANQP protocol to allow pre-association queries of a hotspot's capabilities
- New information fields that allow a mobile device to know which service providers are accessible via the hotspot
- New info fields that allow a mobile device to know the operator, the venue and the configuration of the hotspot
- Security features to further enhance protection against attacks

GAS & ANQP
ANQP (Access Network Query Protocol) is a protocol delivered inside the framework of GAS (Generic Advertisement Service). It is used for querying the capabilities of a hotspot. The existing mechanisms, beacons and probes, are quite inefficient for automatic association and capability discovery, and hence ANQP was introduced.

A mobile device can detect whether a hotspot supports GAS/ANQP from the newly added info field in the beacon/probe response.

Below are the major new fields added to the Beacon/Probe response:

- Access network type, identifying whether the network is public, private, guest access, etc.
- Internet bit, indicating whether the hotspot provides internet access
- Advertisement bit, indicating whether the hotspot supports GAS/ANQP
- Roaming Consortium element, listing up to three names of reachable service providers
- Venue information for the hotspot
- Homogeneous ESSID, a value identifying hotspots in a contiguous zone
- P2P cross-connectivity capability
- BSS load value, a value indicating the current load of the hotspot

In most cases, a device will identify the hotspots in the area using probe requests, and a more complete picture of a hotspot is then queried using GAS/ANQP.

In the initial release of Passpoint, the following are the information elements available in an ANQP query:

- venue name information
- Network authentication type info
- Roaming Consortium list
- IP address type availability info
- NAI realm list
- 3GPP cellular network info
- Domain name list
- Hotspot operator friendly name
- Operating Class
- Hotspot WAN metrics
- Hotspot connection capability
- NAI home realm

References:

http://www.arubanetworks.com/pdf/technology/whitepapers/WP_Passpoint_Wi-Fi.pdf

Saturday, May 17, 2014

Samsung Gear A High Level Overview

Samsung Gear comes in three flavours: Samsung Gear 2, Samsung Gear 2 Neo and Samsung Gear Fit. The Neo omits the 2-megapixel camera and is therefore cheaper. These devices are no longer called Galaxy Gear because they are not powered by the Android OS but by Tizen. The Gear Fit, on the other hand, runs an RTOS, not Tizen.

The Gear 2 has a heart rate monitor, a gyroscope, an accelerometer and an IR blaster for controlling AV devices. It has BLE 4.0, is IP67 certified for water and dust resistance, and also has a microphone and speaker so that one can answer the phone using it.

The Gear devices are compatible with about 20 Samsung Galaxy devices. Just as the Sony SmartWatch requires the LiveWare Manager, the Gear requires the Gear Manager, which again is available via the app store. Once the app is installed, the phone needs to be paired with the Gear.

The Gear 2 and Gear 2 Neo have a 1 GHz processor, 500 MB of RAM and 4 GB of storage.

The main health monitoring apps are Pedometer, Heart Rate Monitor, Sleep and Exercise.

The Pedometer app lets the user set a goal for how many steps to walk, and it tracks the steps taken.

The Exercise app has many modes, one being walking. When starting a workout, the user can press Start and it will report the distance walked, the heart rate, the calories burnt, etc.

Cycling mode: this mode lets the user start a cycling activity; once started, the app uses the user's GPS location to estimate the distance travelled.

The Sleep app lets the user monitor sleep activity. Interestingly, it tells whether the user was motionless during sleep and how long they slept.

There is speech-based search functionality, and it also has a voice recorder with a transcription feature. The recorded voice clips then get synced to the phone.

When emails arrive, the user can open them, and there is a feature called Show on Device. This feature will unlock the phone and take the user directly to the message.

It also has a find-device app which makes the other device ring; this can be used in both directions.

It also has S Voice input, which works with the email and messaging apps.

Sony SmartWatch - a high level Overview

Sony SmartWatch is a wearable device that works in conjunction with Sony Xperia smart phones.

When a call or message comes in, the watch acts like an Android remote for the phone in the user's pocket and gently vibrates to let the user know about the call or message.

Setting this up takes three simple steps:

- Install the Liveware manager
- Pair device and install device app
- Install apps

For the second step, go to Settings -> Wireless and Networks, turn on the SmartWatch with a long press and wait until the pairing icon appears. Once the pairing icon is shown, turn on Bluetooth and scan for devices. When the SmartWatch appears in the phone's list of scanned Bluetooth devices, tap it to start the pairing process. The pairing request needs to be acknowledged on both devices.

Once the pairing is complete, the user gets a prompt to install the SmartWatch application, which actually manages the SmartWatch; it is available in the Play Store. Once the app is installed, the watch starts showing the clock. Swiping away the clock reveals that there are no applications installed yet.

To install applications for the SmartWatch, launch the LiveWare Manager, or go to the system manager and tap on the SmartWatch. Sony has already developed a set of recommended applications for the SmartWatch; users can install them to make the most of the watch.

Wednesday, May 14, 2014

Detailed Look at 802.11u, Hotspot 2.0

A detailed look at the 802.11u and Hotspot 2.0 mechanisms.

Both the 802.11u (IEEE) and Hotspot 2.0 (Wi-Fi Alliance) specs detail the protocols and mechanisms for enhancing Wi-Fi capabilities.

There are three main organisations that have driven initiatives for enhanced Wi-Fi capabilities:

IEEE => brought the 802.11u initiative, an amendment to 802.11 published in 2011.
Wi-Fi Alliance => brought the Hotspot 2.0 initiative, which is basically the technical program and specification that defines the technical requirements for Passpoint interoperability certification.
Wireless Broadband Alliance => brought the Next Generation Hotspot initiative, which is basically trying to establish a common framework for interoperability between networks and devices.

The 802.11u specification was ratified in February 2011. It defines a number of enhancements to the 802.11 (WLAN) protocol to address interworking with external networks. One of the motivations of 802.11u is to let a device learn more about a network before deciding to join it. Some of the relevant use cases for knowing this additional information about a network are:

- network selection 
- automated roaming and offload 
- secure user authentication 
- emergency services and 
- QoS integration with operator networks carrying user traffic.

On the heels of the IEEE, the Wi-Fi Alliance has also been at work creating a certification framework and specification that complements a subset of the 802.11u protocols. The Wi-Fi Alliance program called Hotspot 2.0 is the technical specification that spells out the Passpoint certification requirements.


To note again: Hotspot 2.0 is the technical specification and Passpoint is the certification. The Hotspot 2.0 technical specification spells out the requirements for Passpoint certification.

Normally, hotspot clients use active probes or passive beacon scanning to discover APs, learn about the network, determine which network is best and connect to it. The problem today is that this process depends on the user recognising the network name. 802.11u does not alter this mechanism; instead it makes a bit more information about the network available during the scanning process and allows the client to query the AP for further details.






References:
http://www.arubanetworks.com/pdf/technology/whitepapers/WP_Passpoint_Wi-Fi.pdf
http://hotspot2experience.com/
http://c541678.r78.cf2.rackcdn.com/appnotes/appnote-wispr.pdf
http://www.slideshare.net/zahidtg/hotspot-20-making-wifi-as-easy-to-use-and-secure-as-cellular
http://a030f85c1e25003d7609-b98377aee968aad08453374eb1df3398.r40.cf2.rackcdn.com/wp/wp-how-interworking-works.pdf

Tuesday, May 13, 2014

iOS Audio Session - Hands on

Below is what I implemented in a test app for some hands-on work with the audio framework.
With this approach, resuming the audio did not really work well, even though I had the interruption handling code in place and restarted the player on resuming after an interruption.

Below is what was noticed through this exercise.

With interruption handling code like the lines below:
- An incoming call arrives and the application is pushed to the background; after the call interruption, whether the call is ignored or accepted, the application doesn't resume playing the sound.
- The user pushes the app to the background and initiates a call; the audio gets paused, and after the call is disconnected the audio doesn't get resumed.
- The user is answering a call and the application is brought to the foreground while it is playing; disconnecting the call doesn't seem to resume the playback either.
- The device is locked while playing the sound, an interruption comes in and the other end ignores the call. This also doesn't seem to resume the audio playback!
- The only time it seems able to resume playback is when the app is in the foreground, an interruption happened and the interruption was ended from the caller's end.

-(void) startPlayer
{
    if(self.audioPlayer == nil)
    {
        self.audioPlayer = [[AVPlayer alloc]initWithURL:[[NSURL alloc]initWithString:@"http://streamurl"]];
    }
    currentState = ePlayerStatePlaying;
    [self.audioPlayer play];

}

-(void) stopPlayer
{
    if(self.audioPlayer == nil)
    {
        NSLog(@"-- invalid state. Player is already nil");
        return;
    }
    currentState = ePlayerStateStopped;
    [self.audioPlayer pause];
}

For this, AVPlayer was used.

Also, the AVAudioSession was configured with the playback category when the application loads:

AVAudioSession* audio = [AVAudioSession sharedInstance];
    [audio setCategory: AVAudioSessionCategoryPlayback error: nil];
    [audio setActive: YES error: nil];

To handle the interruption, an NSNotification observer was added:

[[NSNotificationCenter defaultCenter] addObserver: self
                                             selector:@selector(handleInterruption:)
                                                 name:AVAudioSessionInterruptionNotification
                                               object:[AVAudioSession sharedInstance]];

In the interruption handling code, the below was written:

-(void) handleInterruption:(NSNotification*)notification
{
    if (notification.userInfo) {
        int interruptionType = [notification.userInfo[AVAudioSessionInterruptionTypeKey] intValue];
        if (interruptionType == AVAudioSessionInterruptionTypeBegan) {
            [self beginInterruption];
        } else if (interruptionType == AVAudioSessionInterruptionTypeEnded) {
            [self endInterruption];
        }
    }
}

The above code was only resuming playback when the app was in the foreground and the interruption ended.
This was fixed after a few forum searches.

The statement below is added before starting the player:

[[UIApplication sharedApplication] beginReceivingRemoteControlEvents];

The interruption handler code is modified as below:

-(void) handleInterruption:(NSNotification*)notification
{
    if (notification.userInfo) {
        int interruptionType = [notification.userInfo[AVAudioSessionInterruptionTypeKey] intValue];
        if (interruptionType == AVAudioSessionInterruptionTypeBegan) {
            [self beginInterruption];
        } else if (interruptionType == AVAudioSessionInterruptionTypeEnded) {
            int interruptionOption = [notification.userInfo[AVAudioSessionInterruptionOptionKey] intValue];
            if(interruptionOption == AVAudioSessionInterruptionOptionShouldResume)
            {
                NSLog(@"--- AVAudioSessionInterruptionOptionShouldResume ---");
                [self endInterruption];
            }
            else
            {
                 NSLog(@"--- AVAudioSessionInterruptionOptionShouldResume after dealy ---");
                 [self performSelector:@selector(endInterruption) withObject:nil afterDelay:1.0];
            }
        }
    }
}
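
For completeness, the beginInterruption and endInterruption helpers referenced above can be as simple as the sketch below. This is only a sketch: wasPlayingBeforeInterruption is a hypothetical BOOL ivar introduced here, while startPlayer, stopPlayer and the ePlayerState values are the ones defined earlier in this post.

-(void) beginInterruption
{
    // Remember whether we were playing so we know whether to resume later (hypothetical ivar).
    wasPlayingBeforeInterruption = (currentState == ePlayerStatePlaying);
    [self stopPlayer];
}

-(void) endInterruption
{
    // Re-activate our audio session and resume playback if we were playing before the interruption.
    [[AVAudioSession sharedInstance] setActive:YES error:nil];
    if (wasPlayingBeforeInterruption)
    {
        [self startPlayer];
    }
}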

Monday, May 12, 2014

AVAudioSession Interruption handling

An audio interruption is the deactivation of an app's audio session, which immediately pauses or stops the audio. An interruption happens when a competing audio session from another app activates and that session is not categorised by the system to mix with the first app's session.

Audio interruptions are handled by registering for the appropriate NSNotification.

Below is the expected behaviour of an app:

After an interruption starts
  - save state and context
  - update the user interface

After the interruption ends
  - restore state and context
  - reactivate the audio session, if appropriate for the app
  - update the user interface

Most of the behaviour above is handled automatically by the AVAudioPlayer and AVAudioRecorder objects.

Depending on which audio technology is used, interruption handling differs:

AV Foundation framework: The AVAudioPlayer and AVAudioRecorder classes provide delegate methods for interruption start and end. These methods can be implemented to update the UI and, once the interruption ends, to resume paused playback.

Audio Queue Services, I/O audio unit: These technologies put the app in control of handling interruptions. The app is responsible for saving the playback or recording position and for reactivating the audio session after the interruption ends.

OpenAL: When using OpenAL for audio playback, the app needs to register for the appropriate NSNotification notifications, as when using Audio Queue Services. However, the handler must additionally manage the OpenAL context.

System Sound Services: Sounds played using System Sound Services go silent when an interruption begins. They can automatically be used again once the interruption ends. Apps cannot influence the interruption behaviour for sounds that use this playback technology.

Below is a better representation of how the OS handles the interruption (image courtesy: Apple).


If we are using the AVAudioPlayer class, it provides its own delegate methods via the AVAudioPlayerDelegate protocol.

The delegate methods are audioPlayerBeginInterruption: and audioPlayerEndInterruption:withOptions:.
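
As a small sketch of these delegate callbacks (assuming an AVAudioPlayer passed in by the framework and a shouldResume flag of our own; the flag is a hypothetical property added only for this example):

-(void) audioPlayerBeginInterruption:(AVAudioPlayer *)player
{
    // Playback is already paused by the system; just record state and update the UI.
    self.shouldResume = YES;
}

-(void) audioPlayerEndInterruption:(AVAudioPlayer *)player withOptions:(NSUInteger)flags
{
    // Resume only if the system indicates that resuming is appropriate.
    if ((flags & AVAudioSessionInterruptionOptionShouldResume) && self.shouldResume)
    {
        [player play];
        self.shouldResume = NO;
    }
}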

If using OpenAL for audio playback, the application needs to implement an interruption listener callback function, as with Audio Queue Services. However, the interruption handler must additionally manage the OpenAL context: upon receiving the interruption, set the context to NULL, and after the interruption ends, set the context back to the previous one.




Sunday, May 11, 2014

AVAudioSession A high level look

AVAudioSession is an intermediary between an iOS app and iOS for configuring audio behaviour. Upon launch, an app automatically gets a singleton audio session. An app can configure this default behaviour for its own audio needs. Some practical examples of configuring the default audio session are:

- Mixing the app's sound with that of other apps, such as the Music app, or silencing other audio
- Responding to an audio interruption, such as an incoming call or a clock alarm
- Responding when the user plugs in or unplugs a headset

The default behaviour of the audio session is as below:

- Playback is enabled and recording is disabled
- When the user moves the silent switch (the Ring/Silent switch on iPhone) to the silent position, the audio is silenced
- When the user presses the Sleep/Wake button, or when the auto-lock period expires, the app's audio is silenced
- When the app's audio starts, other audio on the device stops

The main audio session categories are:

AVAudioSessionCategoryAmbient: This category is used for "play along" style apps, such as a virtual piano app that plays along with the Music app's audio running in the background. Such an app's audio is muted when the silent switch is set to silent and when the screen locks.

AVAudioSessionCategorySoloAmbient: This is the default category. It doesn't allow mixing with background audio; when the app's session starts playing audio, other apps' audio gets interrupted. Similar to the previous category, the audio stops when the auto-lock timer expires or the Sleep/Wake button is pressed, and it also goes silent when the Ring/Silent switch is set to silent.

AVAudioSessionCategoryPlayback: This category is used for playing recorded sounds or music. Using it lets the app continue playing audio even after the Sleep/Wake button is pressed or the Ring/Silent switch is set to silent. For the app to keep playing audio when it transitions to the background, the audio value must be added to the UIBackgroundModes key in the app's Info.plist.

AVAudioSessionCategoryRecord: This category is used for recording audio; it silences playback audio. To continue recording when the app transitions to the background, we likewise need to add the audio value to the UIBackgroundModes key in the plist file.

AVAudioSessionCategoryPlayAndRecord: This category is most useful for VoIP-type apps. The audio continues even when the Ring/Silent switch is set to silent. As with the other categories, to continue recording and playing in the background, the audio value must be added to UIBackgroundModes. This category is non-mixable by default, but it can be made mixable by setting the AVAudioSessionCategoryOptionMixWithOthers option (see the sketch after this list).

AVAudioSessionCategoryAudioProcessing: This category is used for working with an audio hardware codec or signal processor while not playing or recording audio. It disables audio playback and recording by default. The audio processing cannot normally continue in the background; however, when pushed to the background, an app can request additional processing time.

AVAudioSessionCategoryMultiRoute: This category is used for routing distinct audio streams to different devices at the same time. For example, it can be used to route audio to both a USB device and a headset.
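
As an illustration of the MixWithOthers option mentioned under AVAudioSessionCategoryPlayAndRecord above, here is a minimal sketch (not tied to any particular app) of configuring a playback session that mixes with other audio:

#import <AVFoundation/AVFoundation.h>

- (void)configureAudioSession
{
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];

    // The playback category keeps audio playing with the silent switch on and when the screen locks;
    // the MixWithOthers option lets it mix with other apps' audio instead of interrupting it.
    [session setCategory:AVAudioSessionCategoryPlayback
             withOptions:AVAudioSessionCategoryOptionMixWithOthers
                   error:&error];
    [session setActive:YES error:&error];
}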

Saturday, May 10, 2014

WISPr in practical deployment

WISPr, which stands for Wireless Internet Service Provider roaming, is a draft protocol submitted to the Wi-Fi Alliance that enables hotspot service. Typical WISPr-based implementations deliver several features, such as:

- Universal Access Method (or UAM, browser-based login at a captive portal)
- Walled garden
- Time-based user session control
- Additional RADIUS attributes for some hotspot service settings

Some of the jargon associated with the mechanism:

UAM - Universal Access Method: allows a user to log in to access WISP services with just a Wi-Fi network interface and the device's browser.

UAM login URL - the URL served by the provider for users to log in.

UAM logout URL - the URL served by the provider for users to log out.

Walled garden: the purpose of the walled garden is to let unauthenticated users access certain pages, such as a hotel login page or an online registration form, without needing to log in first.

The typical flow on a WISPr-enabled hotspot is something like the below:

1. The hotspot client associates with a hotspot WLAN, which is typically an open (unencrypted) network.
2. The hotspot user then tries to browse the web on the hotspot client, for example by going to www.google.com.
3. The hotspot user is then redirected to the login portal server by the ZoneDirector implementation.
4. After the user enters their credentials, the information is sent to the UAM server on the ZoneDirector (1); the ZoneDirector then sends an access request to the RADIUS server (2), which responds with an accept or reject message (3).

5. After the user is authenticated, they are redirected to the original web page they requested. Optionally, administrators can redirect them to another appropriate web page (such as an airport welcome page, for example).



References:
http://c541678.r78.cf2.rackcdn.com/appnotes/appnote-wispr.pdf 

Tuesday, May 6, 2014

What is the Estimote SDK?

The Estimote SDK lets an application developer build apps that interact with Estimote beacons. The SDK requirements are Android 4.3 or above and Bluetooth Low Energy (BLE) hardware. The SDK allows applications to do the following:

- Beacon ranging
- Beacon monitoring
- Reading and writing beacon characteristics (proximity UUID, major and minor values, broadcasting power, advertising interval)

Installation looks to be straightforward. We just need to copy the estimote-sdk-preview.jar file into the application's libs directory, and the following entries need to be added to the app's manifest:

<uses-permission android:name="android.permission.BLUETOOTH"/>
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN"/>
<service android:name="com.estimote.sdk.service.BeaconService"
         android:exported="false"/>
The main methods are:

1. Setting the ranging listener. The code is something like this

private BeaconManager beaconManager = new BeaconManager(context);

beaconManager.setRangingListener(new BeaconManager.RangingListener()
{
    @Override
    public void onBeaconsDiscovered(Region region, List<Beacon> beacons)
    {
        // Called with the list of beacons discovered in the given region.
    }
});

Monday, May 5, 2014

iPhone Passbook

Passes are a digital representation of information that would otherwise have been printed on a piece of paper. Passes can contain images or a barcode, and passes can be updated using push notifications. There is a pass library which contains the user's passes; users view and manage their passes using the Passbook application.

A pass is created by providing data, including a JSON file and images, in the package that makes up the pass. The JSON describes the contents of the pass and allows some control over its visual appearance. You sign the pass using the private key for a certificate you obtain from Apple; to keep the private key private, passes are signed on the server, not on the client device.

Passes can be updated after they are delivered. An APNS notification lets the device know that there is an update to a pass, and using a web service API implemented by the server, the pass is updated by downloading its latest version.

An application can interact with passes and the pass library using the PassKit framework.
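
As a small sketch of what that interaction looks like (passData is assumed to be the NSData of a downloaded .pkpass file, and self is assumed to be a view controller; both are placeholders):

#import <PassKit/PassKit.h>

- (void)addPassFromData:(NSData *)passData
{
    NSError *error = nil;
    PKPass *pass = [[PKPass alloc] initWithData:passData error:&error];

    PKPassLibrary *passLibrary = [[PKPassLibrary alloc] init];
    if (pass != nil && ![passLibrary containsPass:pass])
    {
        // Show the standard "Add Pass" UI so the user can add the pass to Passbook.
        PKAddPassesViewController *addController =
            [[PKAddPassesViewController alloc] initWithPass:pass];
        [self presentViewController:addController animated:YES completion:nil];
    }
}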

Sunday, May 4, 2014

What is Hotspot 2.0

Hotspot 2.0 (HS 2.0) is a new standard for public-access Wi-Fi that enables seamless roaming between Wi-Fi networks and between Wi-Fi and cellular networks. HS 2.0 was developed by the Wi-Fi Alliance and the Wireless Broadband Alliance to enable seamless handoff of traffic without requiring additional user sign-on and authentication.

The HS 2.0 specification is based on a set of protocols called 802.11u, which facilitates cellular-like roaming, increased bandwidth and on-demand services for wireless-equipped devices in general. When a subscriber's 802.11u-capable device is in range of at least one such Wi-Fi network, the device automatically selects the network and connects to it. Network discovery, registration, provisioning and access are automated so that the user does not have to go through them manually in order to stay connected.

Samsung was the first to have Hotspot 2.0 in its device range, with the Galaxy S4.

References:
http://www.theruckusroom.net/2013/09/apples-ios-7-gets-hot.html
http://theruckusroom.typepad.com/files/wp-802.11u-0613.pdf

Thursday, May 1, 2014

iBeacon Various test cases and results

As part of testing an application, I did the experiments below to find out the details of iBeacon functionality.

1. First of all, it needs devices with Bluetooth 4.0. This means it can only be tried on an iPhone 4s or later, an iPad 3rd generation or later, an iPad mini 1st generation or later, or an iPod touch 5th generation. Installed on an iPhone 4, the functionality simply doesn't work!

2. The second hitch is that, in these tests, region monitoring didn't work if the app was not running; the region monitoring callbacks were invoked only when the app was running in either the foreground or the background.

3. When the display is turned on, the callback saying the app entered the beacon range arrives very quickly.

4. When the app is in the background and the backlight is turned off (explicitly by the user or by timeout), the application does not get the callback immediately. It looks like there is a minimum interval after the backlight turns off before the beacon scanning begins.

5. There is an option to turn off background refresh under Settings -> General -> Background App Refresh, and the iBeacon apps appear to be listed there.

6. Having an internet or Wi-Fi connection does not appear to affect this functionality.

7. On one occasion, even after 10 minutes in the vicinity of the beacon, the backlight did not turn on, no notifications were posted and the app was not invoked. This makes it feel a bit unreliable!

All of the above was tried on an iPhone 5S, the latest and greatest I could get as of this writing, using Apple's AirLocate sample app to run the tests; a sketch of the basic monitoring setup follows below.
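
For reference, the Core Location setup behind this kind of beacon region monitoring is roughly the sketch below. The UUID string and region identifier are placeholders, and the class is assumed to adopt CLLocationManagerDelegate and declare a locationManager property.

#import <CoreLocation/CoreLocation.h>

- (void)startBeaconMonitoring
{
    // Keep a strong reference to the location manager; this class acts as its delegate.
    self.locationManager = [[CLLocationManager alloc] init];
    self.locationManager.delegate = self;

    NSUUID *uuid = [[NSUUID alloc] initWithUUIDString:@"00000000-0000-0000-0000-000000000000"];
    CLBeaconRegion *region = [[CLBeaconRegion alloc] initWithProximityUUID:uuid
                                                                identifier:@"com.example.beaconRegion"];
    region.notifyEntryStateOnDisplay = YES;   // ask for a state callback when the display is turned on

    [self.locationManager startMonitoringForRegion:region];
}

- (void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region
{
    // Entered the beacon region: post a local notification, start ranging, etc.
}

- (void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region
{
    // Left the beacon region.
}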

Apple's documentation relevant to this was also noticed:

"When testing your region monitoring code in iOS Simulator or on a device, realize that region events may not happen immediately after a region boundary is crossed. To prevent spurious notifications, iOS doesn’t deliver region notifications until certain threshold conditions are met. Specifically, the user’s location must cross the region boundary, move away from the boundary by a minimum distance, and remain at that minimum distance for at least 20 seconds before the notifications are reported."


The specific threshold distances are determined by the hardware and the location technologies that are currently available. For example, if Wi-Fi is disabled, region monitoring is significantly less accurate. However, for testing purposes, you can assume that the minimum distance is approximately 200 meters.

Having read this, I did the following test:

- Moved away from the beacon by more than 200 metres (climbed down the stairs to the first floor of my 5-floor apartment building and climbed back up to the door again; nearing the door, the beacon notification fired!). So in practical use cases the functionality could be used reliably.


Like me, there appear to be many developers out there discussing the same thing, for example:
http://stackoverflow.com/questions/23146629/ibeacon-reliable-unreliable



References:
http://developer.radiusnetworks.com/2013/11/13/ibeacon-monitoring-in-the-background-and-foreground.html

What is Apple CarPlay?


CarPlay can use the car's console to render the iOS apps that users love, such as Phone, Messages, Music, Maps, iTunes Radio, etc. The user can either stick with the map features already available in the console or, once the iPhone is connected, use the apps from the phone; it syncs the recents from the phone and displays them in the list. The car's default interface interaction system can be used to navigate the display and to control and invoke the apps.

CarPlay uses the iOS Siri system to let the user instruct the device, such as calling a number or navigating to a location with the help of Maps.

Regarding APIs, they are not exposed to application developers yet and remain private as of this writing. A few apps have access to these private APIs: Spotify, iHeartRadio, Beats Radio and Stitcher. Notice that these apps share the common behaviour of streaming content to the device; the APIs might get exposed around June, probably when they are announced at WWDC.

The major car manufacturers that have adopted CarPlay so far are Mercedes-Benz, Volvo and Hyundai.