Sunday, November 30, 2014

Pasteboard in iOS Interprocess Communication


On iOS, the class is UIPasteboard, and on Mac it is NSPasteboard. The two are pretty much the same, but on iOS the APIs are more modern and cleaner. 
Programmatically writing to the pasteboard is nearly the same as invoking Edit > Copy in a GUI application. 

NSImage *image = ...; // an image obtained elsewhere
NSPasteBoard *pasteboard = [NSPasteBoard generalPasteboard];
[pasteboard clearContents];
[pasteboard writeObjects:@[image]];

Reading contents from the pasteboard is a bit more involved: 

NSPasteBoard *pasteboard = [NSPasteBoard generalPasteboard];
if([pasteboard canReadObjectForClasses:@[[NSImage class]] options:nil])
{
NSArray *contents = [pasteboard readObjectsForClasses:@[[NSImage class]] options:nil];
NSImage *image = [contents firstObject];
}
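
On iOS, the same round trip with UIPasteboard is simpler still; a minimal sketch for plain text:

UIPasteboard *pasteboard = [UIPasteboard generalPasteboard];
pasteboard.string = @"Copied text";     // write, i.e. Copy
NSString *received = pasteboard.string; // read, i.e. Paste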


What makes the pasteboard compelling as a mechanism for transferring data is the notion of simultaneously providing multiple representations of the copied content. For example, a selection of text may be copied as both rich text and plain text, and the receiver can take whichever representation suits it: a rich text editor will take the formatted text. 
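
As a minimal sketch on the Mac side (the string and font here are just illustrative), the same content can be offered as both RTF and plain text:

NSAttributedString *rich =
    [[NSAttributedString alloc] initWithString:@"Hello"
                                    attributes:@{NSFontAttributeName : [NSFont boldSystemFontOfSize:14]}];
NSPasteboard *pasteboard = [NSPasteboard generalPasteboard];
[pasteboard clearContents];
// Declare both flavors up front, then provide the data for each
[pasteboard declareTypes:@[NSPasteboardTypeRTF, NSPasteboardTypeString] owner:nil];
NSData *rtfData = [rich RTFFromRange:NSMakeRange(0, rich.length)
                  documentAttributes:@{}];
[pasteboard setData:rtfData forType:NSPasteboardTypeRTF];
[pasteboard setString:rich.string forType:NSPasteboardTypeString]; // plain-text fallback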

References:
http://nshipster.com/inter-process-communication/

What is a Chromebook - vs Microsoft Windows, Apple iPad



A Chromebook is a laptop with Chrome OS installed. The iPad has no hardware keyboard, which often makes it inconvenient to use, especially for kids. 
As one article put it, the Chromebook is for creating and the iPad is for consuming. 
The Chromebook is completely web based, meaning that without a network it can't really function very well. 


Chromebooks are available from multiple hardware vendors such as Acer, Dell, and Samsung. 
Many schools in the US are said to be ditching iPads and going for Chromebooks. 

Some of the apps we are used to, such as Microsoft Office, are not supported on Chromebooks. But these days many desktop apps have web versions, so it is not a big deal. 



Apple or Google?

(Picture Courtesy http://www.pocket-lint.com/)

- Based on many articles, it is obvious that any solution must have interoperability between devices. Google achieves this with its cloud model, providing a centralised cloud store for settings, preferences, and data that can be accessed from a number of mediums such as handhelds and desktops. This is a good reminder for us to concentrate on this in our apps as well. We rarely integrate with the cloud services and take advantage of the cloud platforms offered by the giants, such as iCloud, Google Drive, and OneDrive. There are plenty of APIs now available for such integrations. 

- Apple is driving its user base with superior hardware. I think Apple's cloud-based services are not really any weaker than Google's, but they fall short in the number of devices they can reach. A simple example: Google apps are available on iOS, and a user with a Google account can use an Apple device without much trouble, while the reverse is not really true, in the sense that iTunes accounts are restricted to Apple devices. Not sure this can really change; it has been like this for many years, and still Apple is strong! 

- Samsung’s flexible "YOUM" OLED displays could drastically change the user experience and possibly the form factor of devices. With this, Apple's superior-quality devices could get good competition. Of course Apple will definitely have something in its kit to counter this. 

- Also read an article saying Chromebooks are overtaking iPads in education, again perhaps due to price affordability in that area. Two advantages are cited for Chromebooks: 1) they have hardware keyboards, and 2) they are completely web driven rather than app driven. The second point is really about the ability to use the content outside the current device, or in other words across multiple devices. This is possible on Apple devices too, I think.


Even though the article mostly says Google is overtaking in market share, my thinking is to just wait and see; Apple also comes up with counter technologies for everything others introduce, the best examples being WatchKit, CarPlay, HealthKit, and HomeKit, the recent additions to the iOS platform. 

References:
http://www.pocket-lint.com/news/131520-which-cloud-storage-service-is-right-for-you-icloud-vs-google-drive-vs-onedrive-vs-dropbox
https://www.edsurge.com/n/2014-06-30-3-reasons-why-chromebook-beats-ipad-in-1-1-programs
http://www.in.techradar.com//articleshow/42913485.cms

Android Changing Name of application

Yet another simple thing I did not know as a beginner: how to change the name of an application.

The application name is defined within the application element in the AndroidManifest.xml file:

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:largeHeap="true"
        android:theme="@style/AppTheme" >

The label attribute is the name shown on the device. We just need to change it; in this case the name comes from the string/app_name resource, so edit that entry in the string resource file.
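
For example, with the default res/values/strings.xml, renaming the app is just editing the app_name entry (the value here is illustrative):

<resources>
    <string name="app_name">My Renamed App</string>
</resources>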

References:
http://developer.android.com/guide/topics/manifest/application-element.html

Android Adding Application icon

In order to add an application icon that renders well at multiple device densities, the icon asset needs to be provided at various dimensions in the resource folders:

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:largeHeap="true"
        android:theme="@style/AppTheme" >


By default an Android project names the icon ic_launcher; it is kept in the res folder under density-specific folders (drawable-mdpi, drawable-hdpi, and so on).

We just need to replace these files with the ones required.

Also found a very useful website that can create the icon resources at the different densities:

http://android-ui-utils.googlecode.com/hg/asset-studio/dist/icons-launcher.html

Just provide an icon to it and the complete set can be downloaded. So easy!

References:
http://romannurik.github.io/AndroidAssetStudio/
http://android-ui-utils.googlecode.com/hg/asset-studio/dist/icons-launcher.html

Saturday, November 29, 2014

Android Reading raw Resource file

As part of a project, I had to read a resource file from the raw folder. This was a login response JSON file.

First, right-clicked on the resources folder and selected a new resource directory, as in the screenshot below.



From this, selected the resource type as raw.


Doing this, the raw folder appeared under resources. 

Right-clicked on raw again, then selected add file and chose a text file, as the file was a JSON file. 


This created an empty file, and the JSON contents were pasted into it. Did a compile once so that the resource ID gets generated in R.java. 

After this, the following was done from the code: 

Utils.readRawTextFile(contextRef,R.raw.login_response);

public static String readRawTextFile(Context ctx, int resId)
    {
        InputStream inputStream = ctx.getResources().openRawResource(resId);

        BufferedReader buffreader = new BufferedReader(new InputStreamReader(inputStream));
        String line;
        StringBuilder text = new StringBuilder();

        try {
            while (( line = buffreader.readLine()) != null) {
                text.append(line);
                text.append('\n');
            }
        } catch (IOException e) {
            return null;
        } finally {
            // Close the reader (and the underlying stream) in all cases
            try { buffreader.close(); } catch (IOException ignored) { }
        }
        return text.toString();
    }


That's all; the file is read from the resources folder. 





Android Studio - Changing Run Configurations

When I imported an Eclipse project into Android Studio and ran it, by default it launched in the emulator even though I had a device connected.

This configuration can be altered via the Run -> Edit Configurations menu.
Here the Target Device can be set to Emulator, Device, or Show chooser dialog, as in the screenshot below.

 References:
http://stackoverflow.com/questions/16585055/android-studio-doesnt-start-with-connected-device

Sunday, November 16, 2014

Estimote SDK for Android

The Estimote SDK is an abstraction for communicating with Estimote beacons. The Android SDK is very similar in architecture to the iOS SDK.

The SDK allows for: 
- Beacon ranging (scans beacons and optionally filters them by their properties) 
- Beacon monitoring (monitors regions for beacons that have entered / exited a region) 
- Beacon characteristic reading and writing (proximity UUID, major & minor values, broadcasting power, advertising interval) 

What is Ranging? 
Ranging allows apps to know the relative distance between a device and beacons. For example, if a user is in a department store and the device comes into proximity of a beacon, the app can know the distance and determine which department of the store the user is close to. 

The Estimote engineers recommend that the app be in the foreground for better ranging of the distance. Apps can use the startRanging method of BeaconManager to start the ranging operation. 
Ranging code looks something like the below on Android: 

private void connectToService2()
  {
    beaconManager.connect(new BeaconManager.ServiceReadyCallback()
    {
        @Override
        public void onServiceReady()
        {
            try
            {
                beaconManager.startRanging(ALL_ESTIMOTE_BEACONS_REGION);
            }
            catch(RemoteException ex)
            {
                Log.e(TAG, "Cannot start ranging", ex);
            }
        }
        
    });
      
  }

Beacon monitoring 
This term describes detecting, via Bluetooth, when a user is in the vicinity of beacons. It can be used to provide contextually aware information. 
A beacon has the following properties: 

Proximity UUID : 128-bit identifier
major : 16-bit unsigned integer to differentiate between beacons within the same proximity UUID 
minor : 16-bit unsigned integer to differentiate between beacons within the same proximity UUID and major number 

Monitoring is designed to perform periodic scans in the background. By default it scans for 5 seconds and sleeps for 25 seconds, which means it can take up to 30 seconds to detect entering or exiting a region. 
The default behaviour can be changed via BeaconManager#setBackgroundScanPeriod, as in the sketch below. 
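
For example, a sketch against the Estimote BeaconManager API (the exact overload may differ between SDK versions):

// java.util.concurrent.TimeUnit; scan for 5 s, then sleep 25 s between
// background scans -- the defaults described above, set explicitly
beaconManager.setBackgroundScanPeriod(TimeUnit.SECONDS.toMillis(5),
                                      TimeUnit.SECONDS.toMillis(25));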

Running the Demo
The demos in the Estimote directory are said to be buildable with Gradle by typing ./gradlew installDebug.

Tried to run this via Android Studio, as it supports Gradle-based projects. When opening the project, found that the Gradle path was not set and Gradle was not installed. Downloaded Gradle and set the path in Android Studio. It gave an error initially saying the Gradle version used to build the demo was old, but later it got automatically corrected! Still to find out how. Anyway, a detailed look at Gradle is required. 




Saturday, November 15, 2014

Adding a New iOS Extension target to an existing application


It may sound familiar, but this time I was trying to add an extension to a project created in Xcode 4.0 or below. Not sure whether the same tasks exist to do this as for a newer project. Below is the concept I was trying to achieve. 

- Any application that wants to display a few markers on a map can utilize this extension, which accepts lat and long values in string form. 

Added the extension and it gave a popup like the below. 



Encountered the below error 

Searching the forums, links suggested keeping the extension's bundle identifier the same as the app's, with a period and a string appended for the extension, which in my case it wasn't. But even after doing this and resetting the simulator, or running in a different simulator, the issue was not resolved. Looking for more ways to debug this, I thought of looking at the simulator logs, which are available at ~/Library/Logs/CoreSimulator/CoreSimulator.log 

It had the below error log 
Nov 16 08:35:54 -MacBook-Pro.local com.apple.dt.Xcode[59577] : Error Domain=LaunchServicesError Code=0 "The operation couldn’t be completed. (LaunchServicesError error 0.)" UserInfo=0x7fbde4bcf520 {Error=ExecutableTwiddleFailed, ErrorDescription=Failed to chmod file:///Users/Retheesh/Library/Developer/CoreSimulator/Devices/259196B4-E8DC-4B40-BFAA-61EACC39EE8B/data/Library/Caches/com.apple.containermanagerd/Temp/Bundle/Application/6AEDBD0E-8C2D-4DCC-8A7F-3A400EF1FF73/myapp.app/PlugIns/MapViewerExtension.appex/MapViewerExtension : No such file or directory}

The errors were still the same! 

When I opened Xcode and ran an older project with no extension, that also gave the same error. A complete mess. Luckily it got resolved when the project was cleaned.

References:
http://stackoverflow.com/questions/25632886/an-error-was-encountered-while-running-domain-launchserviceserror-code-0
http://stackoverflow.com/questions/25130558/unable-to-run-app-in-simulator-an-error-was-encountered-while-running-domain

Friday, November 14, 2014

Symbolicating crash report using symbolicatecrash Xcode command line utility

Recently I was able to symbolicate all of the App Store crash reports using the symbolicatecrash utility. 
Some tips for this process are below. 

bestomet-MacBook-Pro:Resources bestomet$ /Applications/Xcode.app/Contents/SharedFrameworks/DTDeviceKitBase.framework/Versions/A/Resources/symbolicatecrash -v "/Users/bestomet/Desktop/RR/projects/WFF/debugs/appstore_crash_reports/mytwc_ios_reports/TWC-WiFi-Finder_4-3-20_2014-11-12_iOS-7-1_Top-Crash_1.tqohuwwj.zip_1.crash"
Error: "DEVELOPER_DIR" is not defined at /Applications/Xcode.app/Contents/SharedFrameworks/DTDeviceKitBase.framework/Versions/A/Resources/symbolicatecrash line 60.

The developer directory can be set like below: 
export DEVELOPER_DIR="/Applications/Xcode.app/Contents/Developer"


Once the Path is set, the above can be executed again. 

Just need to make sure the dSYM file and the crash report are in the same folder.


Also, to verify that the build corresponds to the crash file, the steps below can be followed.

https://developer.apple.com/library/ios/qa/qa1765/_index.html
The step below will help to find the UUID from the crash file:
            
            $ grep --after-context=2 "Binary Images:" Example.crash
            Binary Images:
            0xb6000 - 0xb7fff +Example armv7 <270a9b9d7a333a4a9f1aaf8186f81394> /var/mobile/Applications/28D4F177-D312-4D3B-A76C-C2ACB4CB7DAD/Example.app/Example
            0x2feb5000 - 0x2fed6fff dyld armv7 <4a817f3e0def30d5ae2032157d889c1d> /usr/lib/dyld
            
            The step below will help to find the UUID from the binary image (pointing dwarfdump at the app binary, here Example.app/Example from the output above):

            $ xcrun dwarfdump --uuid Example.app/Example


References: 
http://stackoverflow.com/questions/1460892/symbolicating-iphone-app-crash-reports

Wednesday, November 12, 2014

Android Efficient Location Tracking

Battery optimisation is a major problem when doing location monitoring on Android. 

There are three approaches to go with:

1. Request a one-shot location when needed
With a one-shot request, the app won't be able to get the most accurate location. 
2. Timer / service based approach for continuous location tracking 
This is done with a background service that runs a timer and polls continuously at those intervals. Location will be most accurate with this, but it is less battery efficient than the geofence approach. 
3. Geo fence based approach for continuous location tracking 

Google Play Services has made it easy to monitor a geographic area of a certain radius, called a geofence. The app can create a geofence at a point where a location is obtained with greater accuracy. Android will monitor this geofence, and when the user exits or enters the region, the app is notified. The advantage is that applications don't need to write their own service for handling location changes. A sketch of creating a geofence is below. 
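
A hedged sketch of building one geofence with the Play Services API (the request ID, coordinates, and radius are made-up illustrative values):

// com.google.android.gms.location.Geofence
// A 100 m circular fence around a point obtained with good accuracy
Geofence fence = new Geofence.Builder()
        .setRequestId("home")                       // illustrative ID
        .setCircularRegion(37.4220, -122.0841, 100) // lat, lng, radius in meters
        .setExpirationDuration(Geofence.NEVER_EXPIRE)
        .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER
                | Geofence.GEOFENCE_TRANSITION_EXIT)
        .build();
// The fence is then registered with the Play Services location client along
// with a PendingIntent, which Android fires on enter/exit transitions.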

References:
http://developer.android.com/training/location/geofencing.html

Monday, November 10, 2014

Google Markers with Lettered text

The below code can be used to achieve this:

for (var i = 0; i < lats.length; i++) {
    var myLatlng1 = new google.maps.LatLng(lats[i], longs[i]);
    var marker = new google.maps.Marker({
        position: myLatlng1,
        map: map,
        title: 'Hello World!',
        icon: 'http://chart.apis.google.com/chart?chst=d_map_pin_letter&chld=A|009BEE|FFFFFF'
    });
    marker.id = i;
    infoWindowArray[i] = new google.maps.InfoWindow({content: infoTexts[i]});
    google.maps.event.addListener(marker, 'click', function() {
        this.info = infoWindowArray[this.id];
        this.info.open(map, this);
    });
}

Sunday, November 9, 2014

Google Map Javascript - A classic problem of closures

The intention was to create a few markers in a loop and add info windows to them. Each info window should pop up when its marker is clicked, containing information specific to that marker. Below was the initial code, which ended up showing only the info window of the last marker created in the loop, whichever marker was clicked.




But this did not work because of the closure problem: by the time the click event fires, it is the last marker object that is bound in the handler, so every click acts on the last marker. To avoid it, the below had to be done. 
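
A minimal sketch of the fix (reusing the lats/longs/infoTexts names from the lettered-markers post above): give each marker its own scope, for example with an immediately-invoked function:

for (var i = 0; i < lats.length; i++) {
    var marker = new google.maps.Marker({
        position: new google.maps.LatLng(lats[i], longs[i]),
        map: map
    });
    var infoWindow = new google.maps.InfoWindow({content: infoTexts[i]});
    // The IIFE creates a fresh scope per iteration; without it, every click
    // handler would share the loop's single marker/infoWindow binding.
    (function(m, iw) {
        google.maps.event.addListener(m, 'click', function() {
            iw.open(map, m);
        });
    })(marker, infoWindow);
}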


References: 
http://stackoverflow.com/questions/3158598/google-maps-api-v3-adding-an-infowindow-to-each-marker

Saturday, November 8, 2014

Google Map Simple Info windows

The below code can be used to bring up a simple info window:

function initialize() {
    var myLatlng = new google.maps.LatLng(-25.363882, 131.044922);
    var mapOptions = {
        zoom: 4,
        center: myLatlng
    };

    var map = new google.maps.Map(document.getElementById('map-canvas'), mapOptions);

    var contentString = 'sample text';

    var infowindow = new google.maps.InfoWindow({
        content: contentString
    });

    var marker = new google.maps.Marker({
        position: myLatlng,
        map: map,
        title: 'Uluru (Ayers Rock)'
    });

    google.maps.event.addListener(marker, 'click', function() {
        infowindow.open(map, marker);
    });
}

google.maps.event.addDomListener(window, 'load', initialize);

References:
https://developers.google.com/maps/documentation/javascript/examples/infowindow-simple

iOS 8.0 Extensions - Passing data from Host app to Extension

The sample application had a share button. Pressing share brought up a share sheet that included the extension; selecting the extension brought up the extension UI, which could read the data.

- (IBAction)share:(id)sender {
    
    // Create a UIActivityViewController with our image
    NSString *data = @"This is a data being passed from an extension invoker app to the extension";
     UIActivityViewController *activityViewController = [[UIActivityViewController alloc] initWithActivityItems:@[data] applicationActivities:nil];
    // Set a completion handler to handle what the UIActivityViewController returns
    [activityViewController setCompletionWithItemsHandler:^(NSString *activityType, BOOL completed, NSArray *returnedItems, NSError * error){
        
        if([returnedItems count] > 0){
            
            NSExtensionItem* extensionItem = [returnedItems firstObject];
            NSItemProvider* imageItemProvider = [[extensionItem attachments] firstObject];
            
            if([imageItemProvider hasItemConformingToTypeIdentifier:(NSString *)kUTTypeImage]){
                
                [imageItemProvider loadItemForTypeIdentifier:(NSString *)kUTTypeImage options:nil completionHandler:^(UIImage *item, NSError *error) {
                    
                    if(item && !error){
                        
                        dispatch_async(dispatch_get_main_queue(), ^{
//                            [imageView setImage:item];
                        });
                        
                    }
                }];
                
            }
        }
    }];
    [self presentViewController:activityViewController animated:YES completion:nil];

}


With the data passed as above, the trick is to read it in the extension using the right type identifier. In the extension's viewDidLoad, the code is like below. 

- (void)viewDidLoad {
    [super viewDidLoad];
    
    // Get the item[s] we're handling from the extension context.
    
    // For example, look for a text item and handle it.
    // Replace this with something appropriate for the type[s] your extension supports.
    BOOL textFound = NO;
    for (NSExtensionItem *item in self.extensionContext.inputItems) {
        for (NSItemProvider *itemProvider in item.attachments) {
            if ([itemProvider hasItemConformingToTypeIdentifier:(NSString *)kUTTypeText]) {
                [itemProvider loadItemForTypeIdentifier:(NSString *)kUTTypeText options:nil completionHandler:^(NSString *text, NSError *error) {
                    NSLog(@" -- Got input from the extension invoker --");
                    NSLog(@"text is :%@",text);
                    
                    
                }];
                textFound = YES;
                break;
            }
        }
        
        if (textFound) {
            // We only handle one text item, so stop looking for more.
            break;
        }
    }
}

MAC OS app - reading contents from a file

Since file reading operations can take a good amount of time, it is a good idea to run them on a separate thread.


[NSThread detachNewThreadSelector:@selector(scanAndFindLocationChangeReports:) toTarget:self withObject:absString];


-(void) scanAndFindLocationChangeReports:(NSString*) filePath
{
    const char* filePathCStr = [filePath cStringUsingEncoding:NSUTF8StringEncoding];
    FILE *file = fopen(filePathCStr, "r");
    if (file == NULL) return; // bail out if the file could not be opened
    while(!feof(file))
    {
        NSString *line = readLineAsNSString(file);
        // do stuff with line; line is autoreleased, so you should NOT release it (unless you also retain it beforehand)
        NSLog(@"-- File Line is :%@",line);
    }
    fclose(file);
    [self performSelectorOnMainThread:@selector(updateLabels) withObject:nil waitUntilDone:NO];
}


NSString *readLineAsNSString(FILE *file)
{
    char buffer[4096];
    
    // tune this capacity to your liking -- larger buffer sizes will be faster, but
    // use more memory
    NSMutableString *result = [NSMutableString stringWithCapacity:256];
    
    // Read up to 4095 non-newline characters, then read and discard the newline
    int charsRead;
    do
    {
        if(fscanf(file, "%4095[^\n]%n%*c", buffer, &charsRead) == 1)
            [result appendFormat:@"%s", buffer];
        else
            break;
    } while(charsRead == 4095);
    
    return result;
}

MAC OS Application - a few common code snippets

As part of a test project, had to create a sample app that can do the following

- Open a file, and for this show a file explorer window
- Read contents from the file
- Create a webview and load content into the webview

The code is something like below for all these

1. Showing a file opener window
 NSOpenPanel* openDlg = [NSOpenPanel openPanel];
    [openDlg setPrompt:@"Select"];
    [openDlg beginWithCompletionHandler:^(NSInteger result){
        NSArray* files = [openDlg URLs];
        if(files && [files count] > 0)
        {
            NSURL *fileUrl = [files objectAtIndex:0];
            NSString *absString = [fileUrl absoluteString];
            absString = [absString stringByReplacingOccurrencesOfString:@"file://" withString:@""];
            NSLog(@"File URL is :%@ : abs url:%@",fileUrl, absString);
            [self.textLabel setStringValue:@"Computing..."];
            self.locationChangeReports = [[NSMutableArray alloc]init];
            [NSThread detachNewThreadSelector:@selector(scanAndFindLocationChangeReports:) toTarget:self withObject:absString];
        }

    }];


2. Showing a web view

//    NSURL *myURL = [NSURL URLWithString:@"http://www.google.com"];
//    NSURLRequest *myRequest = [NSURLRequest requestWithURL:myURL];
    [self.webView.mainFrame loadHTMLString:[self getMapDataString] baseURL:nil];

A difference from the iOS web view is that there is a mainFrame on which loadHTMLString is invoked. 

MAC OS Packaging Application

All I wanted was to be able to send a Mac OS application as an executable which other users can click and launch. However, it appears that we don't need a dmg or installer in order to do this; the .app bundle can simply be opened. Tried this on other Mac desktops by launching the .app file, and it just works. Since the app is not signed, it asks for permission. 




Sunday, November 2, 2014

iOS Listening to Audio Route Changes


An application can listen for audio route changes by observing AVAudioSessionRouteChangeNotification.
The Apple documentation has the below state diagram depicting the overall scenario. 

The below are the actions to be done by the app in order to respond to audio route changes:

1. Implement the methods to be invoked upon a route change
2. Register for the AVAudioSessionRouteChangeNotification notification to respond to route changes 

As an example, the app receives a notification when a user unplugs the headset during playback. Following the Apple guidelines, the app pauses, and it can then display a prompt asking the user whether to continue playing. 

When the system sends a route change notification, it provides a userInfo dictionary describing why the route changed and what the previous route was. These can be retrieved using the keys AVAudioSessionRouteChangeReasonKey and AVAudioSessionRouteChangePreviousRouteKey.  

Below are the possible reasons for an audio session route change: 
enum {
    AVAudioSessionRouteChangeReasonUnknown  = 0,
    AVAudioSessionRouteChangeReasonNewDeviceAvailable  = 1,
    AVAudioSessionRouteChangeReasonOldDeviceUnavailable  = 2,
    AVAudioSessionRouteChangeReasonCategoryChange  = 3,
    AVAudioSessionRouteChangeReasonOverride  = 4,
    AVAudioSessionRouteChangeReasonWakeFromSleep  = 6,
    AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory  = 7,
    AVAudioSessionRouteChangeReasonRouteConfigurationChange  = 8,
};

The application can also get the previous route by checking the value for the key AVAudioSessionRouteChangePreviousRouteKey.
One reason for a route change is AVAudioSessionRouteChangeReasonCategoryChange, so if the application is interested only in headset plug in and plug out, it should check the reason and process the change accordingly. A minimal sketch of the whole flow is below. 
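
A minimal sketch (assuming an AVAudioPlayer instance named _player; pausing on headset unplug, per the guideline above):

[[NSNotificationCenter defaultCenter]
    addObserverForName:AVAudioSessionRouteChangeNotification
                object:nil
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
        // Keep the returned token if the observer must be removed later
        NSInteger reason =
            [note.userInfo[AVAudioSessionRouteChangeReasonKey] integerValue];
        AVAudioSessionRouteDescription *previousRoute =
            note.userInfo[AVAudioSessionRouteChangePreviousRouteKey];
        NSLog(@"Route change reason %ld, previous route %@",
              (long)reason, previousRoute);
        if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable) {
            [_player pause]; // e.g. headphones were unplugged during playback
        }
    }];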


iOS How to Check if a Headset is Connected

There are a couple of options suggested in blogs to find whether a headset is plugged into an iOS device. 

Below are two ways to check the current audio route:

- (BOOL)isHeadsetPluggedIn2
{
    AVAudioSessionRouteDescription *route = [[AVAudioSession sharedInstance] currentRoute];
    
    BOOL headphonesLocated = NO;
    for( AVAudioSessionPortDescription *portDescription in route.outputs )
    {
        headphonesLocated |= ( [portDescription.portType isEqualToString:AVAudioSessionPortHeadphones] );
    }
    return headphonesLocated;
}

- (BOOL)isHeadsetPluggedIn3 {
    UInt32 routeSize = sizeof (CFStringRef);
    CFStringRef route;
    
    OSStatus error = AudioSessionGetProperty (kAudioSessionProperty_AudioRoute,
                                              &routeSize,
                                              &route);
    
    /* Known values of route:
     * "Headset"
     * "Headphone"
     * "Speaker"
     * "SpeakerAndMicrophone"
     * "HeadphonesAndMicrophone"
     * "HeadsetInOut"
     * "ReceiverAndMicrophone"
     * "Lineout"
     */
    
    if (!error && (route != NULL)) {
        
        NSString* routeStr = (__bridge NSString*)route;
        
        NSRange headphoneRange = [routeStr rangeOfString : @"Head"];
        
        if (headphoneRange.location != NSNotFound) return YES;
        
    }
    
    return NO;
}

Both of the above approaches work, but isHeadsetPluggedIn3 uses AudioSessionGetProperty, which is deprecated from iOS 5.0 onwards and gives a warning. 

The method isHeadsetPluggedIn2 looks to be the better choice. 

References:
http://stackoverflow.com/questions/3728781/detect-if-headphones-not-microphone-are-plugged-in-to-an-ios-device