Sunday, August 31, 2014

Rosy Writer - Dissecting and Execution - Part III


What is Tarball Distribution


Basically, a tarball is a group of files compressed into a single file. The files are either compiled and installed, or simply installed, onto a new system once all the package prerequisites are met. To make installing and removing programs and packages easier, many distributions use an alternate format that automatically installs the prerequisites along with the needed package.

References:
http://pmdocs.sophos.com/Latest/en/pmdocs/concepts/GSGObtainDistributionTar.html

Compiling WebRTC library - Part III


Compiling webRTC library - Part II


Compiling WebRTC library - Part I


SIP SDP - Part VII - Security Considerations


SIP SDP - Part VI - Example SDP transactions


SIP SDP - Part V - Indicating capabilities


SIP SDP - Part IV - Modifying a Session


SIP SDP - Part III - Offerer processing of the Answer


SIP SDP - Part II - Generating Answer


SIP SDP - Part I - Generating Initial Offer

Introduction to SDP 
SDP was originally conceived as a way to describe multicast sessions carried on the Mbone. The Session Announcement Protocol (SAP) was devised as a mechanism to carry the SDP messages. Although the SDP specification allows for unicast operation, it is not complete. Unlike multicast, where there is a global view of the session that is used by all the participants, unicast sessions require information from both participants, and agreement on parameters between them.

As an example, a multicast session requires conveying a single multicast address for a particular media stream. However, for a unicast session, two addresses are needed - one for each participant. As another example, a multicast session requires an indication of which codecs will be used in the session. However, for unicast, the set of codecs needs to be determined by finding an overlap in the sets supported by each participant.

As a result, even though SDP has the expressiveness to describe unicast sessions, it is missing the semantics and operational details of how that is actually done. This is remedied by defining an offer/answer model based on SDP. In this model, one participant in the session generates an SDP message that constitutes the offer - the set of media streams and codecs the offerer wishes to use, along with the IP addresses and ports the offerer would like to use to receive the media. The offer is conveyed to the other participant, called the answerer. The answerer generates the SDP answer.

Using offer/answer, it is also possible to update the session after the initial offer/answer exchange is done.

Protocol Operation
The offer/answer exchange assumes the existence of a higher-level protocol, such as SIP, which is capable of exchanging the SDP for the purpose of session establishment.
The agent receiving the offer may reject it; conveying that rejection also requires the higher-level protocol. Some of the rules that are imposed are:

At any time, either agent MAY generate a new offer that updates the session. However, it MUST NOT generate a new offer if it has received an offer which it has not yet answered or rejected. Furthermore, it MUST NOT generate a new offer if one it has previously generated has not yet been accepted or rejected. If an agent receives an offer after it has sent an offer for which no answer has been received, it should be treated as a glare condition.

The term glare was originally used in circuit-switched telecommunications networks to describe the condition where two switches both attempt to seize the same available circuit on the same trunk at the same time. Here it means both agents have attempted to send an updated offer at the same time. The higher-level protocol must provide a facility to resolve such situations.

Generating Initial Offer

The offer must be a valid SDP message as defined by RFC 2327, with one exception: the "e=" and "p=" lines are not mandatory. The numeric values of the session id and version in the "o=" line MUST be representable with a 64-bit signed integer, and the initial value MUST be less than (2**62) - 1. Even though the SDP specification allows multiple SDP messages to be concatenated into one large message, the offer/answer model restricts this and mandates that there be only ONE.

The SDP "s=" line conveys the subject of the session, which is reasonably well defined for multicast but ill defined for unicast. For unicast sessions, it is recommended that it contain a single space character or a - (dash).

The "t=" line specifies the time of the session. Generally, unicast sessions are created and destroyed through external signalling means such as SIP. In that case, the "t=" line SHOULD have a value of "0 0".
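As an illustration, the session-level part of a unicast offer following these rules might look like the sketch below (the username, session id/version and address are made-up values):

v=0
o=alice 3034423619 3034423619 IN IP4 192.0.2.10
s=-
t=0 0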

The offer will contain zero or more media streams, and each media stream is described by an "m=" line and its associated attributes. Zero media streams indicates that the offerer wishes to communicate, but that the streams for the session will be added at a later point in time through a modified offer.

The streams MAY be a mix of unicast and multicast; the latter obviously implies a multicast address in the relevant "c=" line(s).

Construction of each offered stream depends on whether the stream is unicast or multicast.

Offer Answer Model 

Unicast Streams

If the offerer wishes to only receive media from its peer, it MUST mark the stream as a=recvonly. Similarly, if the offerer wishes to only send media, it marks the stream as send-only using a=sendonly. This is commonly known as the direction attribute. If the offerer wishes to communicate, but wishes to neither send nor receive media at this point, the stream should be marked as a=inactive. Note that if the offer/answer is for RTP, RTCP is still sent and received irrespective of the direction attribute, i.e. whether it is sendonly, recvonly or inactive. In other words, the directionality of the stream has no impact on RTCP usage.

For recvonly and sendrecv streams, the port number and address in the offer indicate where the offerer would like to receive the media stream. For sendonly RTP streams, the address and port number indirectly indicate where the offerer wants to receive the RTCP reports. Unless indicated explicitly, RTCP reports are sent to the port number one higher than the one indicated in the SDP. The IP address and port number present in the offer indicate nothing about the source IP address and port number of the RTP and RTCP packets that will be sent by the offerer. A port number of zero in the offer indicates the stream is offered but must not be used; an answer with a zero port number indicates a rejected stream. Furthermore, existing streams can be terminated by setting the port to zero. In general, a port number of zero indicates that the stream is not wanted.

The list of media formats for each stream conveys two pieces of information, namely the set of formats that the offerer is capable of sending and/or receiving, and the codec plus any parameters for each format - in the case of RTP, RTP payload type numbers identify the payload format. For sendrecv streams, the offer should indicate the codecs that the offerer is willing to both send and receive with. For sendrecv and sendonly streams, the answerer may send back a response with a payload type different from the one in the offer; in that case, the offerer should send RTP with the payload type that the answerer responded with.

As per RFC 2327, fmtp parameters may be present to provide additional parameters of the media format. In the case of RTP streams, all media descriptions SHOULD contain a=rtpmap mappings from RTP payload types to encodings. If there is no a=rtpmap, the default payload type mapping as defined by RFC 1890 should be used; the a=rtpmap mechanism was introduced to move away from static payload type mappings.

In all cases, the formats in the "m=" line must be listed in order of preference, with the first format being the most preferred.

If the ptime attribute is present in the offer for a stream, it indicates the desired packetization interval that the offerer would like to receive. A bandwidth ("b=") value of zero is allowed but discouraged; it indicates that no media should be sent, and in the case of RTP this also disables RTCP.

If multiple media streams of different types are present, it means that the offerer wishes to use those streams at the same time. A typical case is an audio stream and a video stream in a video conference.

If multiple media streams of the same type are present in the offer, it means that the offerer wishes to send and/or receive multiple streams of that type at the same time. When sending multiple streams of the same type, it is a matter of local policy how each media source of that type (for e.g. a video camera and a VCR in the case of video) is mapped to each stream.

A typical usage example of multiple media streams of the same type is a pre-paid calling card application, where the user can press and hold the pound key at any time during the call to hang up and make a new call on the same card. This requires media from the user to be sent to two destinations - the remote gateway, and the DTMF processing application which looks for the pound key. This could be accomplished with two media streams: one sendrecv to the gateway, and the other sendonly to the DTMF application, as sketched below.
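A hedged sketch of the media sections for that calling card scenario is below (addresses, ports and payload types are made-up values). The first stream is sendrecv towards the gateway; the second is sendonly towards the DTMF processing application:

m=audio 49170 RTP/AVP 0
c=IN IP4 192.0.2.10
a=rtpmap:0 PCMU/8000
a=sendrecv
m=audio 49172 RTP/AVP 0
c=IN IP4 192.0.2.10
a=rtpmap:0 PCMU/8000
a=sendonly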

Multicast streams:

If a session description contains a multicast media stream which is listed as receive only, it means that all the participants, including the offerer and the answerer, can only receive media on that stream. This differs from the unicast view, where the directionality refers to the flow of media between the offerer and the answerer. An example is sketched below.
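For example, a receive-only multicast video stream might be described as in the sketch below (the multicast address and port are made-up values); here every participant, offerer and answerer alike, only receives on this stream:

m=video 49174 RTP/AVP 31
c=IN IP4 224.2.17.12/127
a=recvonly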

OpenTok iOS SDK


The OpenTok iOS SDK 2.2 lets you use OpenTok-powered video sessions in apps built for iPhone, iPad and iPod touch devices.
There are a few samples available that use this framework.

1. HelloWorld - The basic application that helps to get familiar with the SDK.
2. Let's Build OTPublisher - This project provides classes that implement the OTVideoCapture and OTVideoRenderer interfaces of the core Publisher and Subscriber classes. Using these modules, we can see the basic workflow of sourcing video frames from the video camera, in and out of OpenTok, via the OTPublisherKit and OTSubscriberKit interfaces.
3. Live Video Capture - This project extends the video capture module implemented in the project above and demonstrates how the AVFoundation media capture APIs can be used to simultaneously stream video and capture high-resolution photos from the same camera.
4. Overlay Graphics - This project shows how to overlay graphics on publisher and subscriber views, using the SVG graphics format for icons. It borrows the publisher and subscriber modules implemented in project 2.
5. Multi-Party Call - This project demonstrates how to use the OpenTok iOS SDK for a multi-party call. The application publishes audio/video from an iOS device and can connect to multiple subscribers. However, it shows only one subscriber video at a time due to CPU limitations on iOS devices.

Each project includes a symlink to OpenTok.framework, one directory level up from the root of this repository. If it came from a tarball distribution, the link should work just fine; otherwise the symlink needs to be updated manually.

OpenTok uses the WebRTC libraries and provides abstraction layers for various platforms such as PhoneGap, Android, iOS, etc. The trial lasts for 30 days and includes the ability to make use of the backend platform.


WebRTC High Level Look - Part V - Security

WebRTC provides several features to address security problems.

1. WebRTC uses secure protocols such as DTLS and SRTP.
2. Encryption is mandatory for all WebRTC components, including signalling mechanisms.
3. WebRTC is not a plugin; its components run in the browser sandbox and not as a separate process.
4. Camera and microphone access must be granted explicitly, and when the camera or microphone is running this is clearly shown to the user.

In the WebRTC trust model:

1. The browser acts as the Trusted Computing Base (TCB)
- This means it is the only component that the user can really trust
- The job of the TCB is to enforce the user's desired security policies
2. Authenticated entities
- Their identity is verified by the browser
3. Unauthenticated entities
- These are other network elements which send and receive traffic

Examples of Authenticated Entities 
- Calling Services (Known Origin)
- Identity providers
- Other users (when cryptographically verified)
- Sometimes network elements with the right topology (e.g. behind our firewall)

Authenticated is not equal to trusted:
Evil is still evil, even if I know it is him.

- But authentication is the basis of trust decisions
- And maybe I want to call Dr. Evil after all

The basic design principles are:

- It is always safe to use the web
- Even to visit a malicious site

Calls are encrypted where possible
- At minimum between WebRTC clients, unless the site takes direct action

When available, directly verify the far side
- Minimize the required trust in the calling site
- Be compatible with as many trusted identity providers as possible

Overall Topology is like below 




Saturday, August 30, 2014

WebRTC High Level Look - Part IV - STUN & TURN

Then and now:

In the old days, each host had a public IP address; two peers could exchange IP addresses and communicate peer to peer. But in the world of NAT this is no longer possible, since NAT introduced private IPs. This is where STUN comes in.

STUN works like below:
- The client sends a STUN request, asking "what is my public IP address?"
- The STUN server sees the IP address the request came from, puts that address into the response and sends it back.

This usually works but in some cases, it doesn't. For this reason we have the TURN concept. 

- TURN provides a cloud fallback when peer-to-peer communication fails
- The client asks the TURN server for a public relay address which anybody can reach

The downside of this is that the data is actually relayed through the server, and there is an operational cost to it.

Now we have two mechanisms: one is super cheap but not always reliable, and the other is always reliable but has some cost associated with it. To get the best of both worlds, we have a technology called ICE.

The ICE framework tries both paths in parallel, figures out which is best, and uses it. If it can do STUN it does STUN; if it cannot, it falls back to TURN. As per the statistics, roughly 80% of the time STUN works and only about 20% of the time do we have to go with TURN - which means that in the real world, roughly one out of five calls could fail if we relied only on STUN.
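In a WebRTC app we do not drive STUN/TURN/ICE by hand; we just pass the browser a list of STUN and TURN servers and ICE does the probing. A minimal sketch is below, using the Google test STUN server mentioned further down and a hypothetical TURN server with placeholder credentials (sendToPeer is a hypothetical signalling helper):

// ICE gathers candidates from the local interfaces, the STUN server and
// the TURN relay, then picks the best working pair automatically.
var configuration = {
    iceServers: [
        {url: 'stun:stun.l.google.com:19302'},
        // hypothetical TURN relay; hostname, username and credential are placeholders
        {url: 'turn:turn.example.com:3478', username: 'user', credential: 'secret'}
    ]
};

var pc = new webkitRTCPeerConnection(configuration);

pc.onicecandidate = function (event) {
    if (event.candidate) {
        // each candidate is sent to the remote peer over the app's signalling channel
        sendToPeer('candidate', event.candidate);
    }
};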

The overall architecture with TURN, STUN is like below:




Deploying a TURN or STUN server:
- Google has a test STUN server at stun.l.google.com:19302
- WebRTC has STUN and TURN server code as part of the WebRTC source code package
- There is also a readily available project, rfc5766-turn-server, which can be deployed in the Amazon cloud (Amazon VM images are available)

References:
http://www.html5rocks.com/en/tutorials/webrtc/basics/

WebRTC - High Level Look - Part III - Servers and Protocols

Abstract Signalling
- This is needed to exchange 'session description' objects
- What formats each end supports, what codecs they use, what type of security
- Network information for the peer-to-peer setup

The signalling part can be anything: it could be XHR, SIP, XMPP, WebSockets, etc.

The overall architecture is like below 



The overall working is like below 

The app gets the session description from the browser and sends it across the cloud to the other side. Once it gets the message back with the other side's session description, both session descriptions are passed down to WebRTC in the browser. WebRTC can then set up and conduct the media.
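A rough sketch of the answering side of that exchange is below; onSignallingMessage and sendToPeer are hypothetical names standing in for whatever signalling transport the app uses (XHR, SIP, XMPP, WebSockets, etc.):

var pc = new webkitRTCPeerConnection(null);

// called when the app's signalling channel delivers a message from the other side
function onSignallingMessage(message) {
    if (message.type === 'offer') {
        // apply the remote offer, then generate and send back an answer
        pc.setRemoteDescription(new RTCSessionDescription(message));
        pc.createAnswer(function (answer) {
            pc.setLocalDescription(answer);
            sendToPeer(answer);  // hypothetical helper: sends over the signalling channel
        }, function (error) {
            console.log('createAnswer error:', error);
        });
    }
}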

An RTP session description could look something like the below. Many apps alter this to customise it, but this is the basic information.

v=0 
o=- 762434233223 2 IN IP4 127.0.0.1
s=-
t=0 0 
a=group:BUNDLE audio video 
a=msid-semantic:WMS
m=audio 1 RTP/SAVPF 111 103 104 0 8 107 106 105 13 126
c=IN IP4 0.0.0.0 
a=rtcp:1 IN IP4 0.0.0.0
a=ice-ufrag:W2TGWEQWEWQ
a=ice-pwd:xdQerwerew
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=mid:audio
a=rtcp-mux
..


Friday, August 29, 2014

WebRTC a high level look - Part II - RTCDataChannel


This is useful for sending arbitrary data over the peer connection once it has been established.
It is especially useful when there is something like game data to be transmitted to the other end.

This is as easy as creating a JSON payload and using the send method to send it across, much like a WebSocket. Below are a few points about the RTCDataChannel:

- Same API as WebSockets
- Ultra-low latency
- Unreliable or reliable delivery
- Secure - standard encryption (DTLS) is used

The peer connection can be made for just audio/video, or it can include the data channel as well.

A sample code is something like below 

var pc = new webkitRTCPeerConnection(servers, {optional: [{RtpDataChannels: true}]});

// receive side: the remote peer's data channel arrives in the ondatachannel event
pc.ondatachannel = function (event)
{
    receiveChannel = event.channel;
    receiveChannel.onmessage = function (event)
    {
        document.querySelector("div#receive").innerHTML = event.data;
    };
};

// send side: create a data channel on the peer connection
sendChannel = pc.createDataChannel("sendDataChannel", {reliable: true});

document.querySelector("button#send").onclick = function ()
{
    var data = document.querySelector("textarea#send").value;
    sendChannel.send(data);
};

The demo page also shows how data, for e.g. a file, can be transmitted peer to peer. This doesn't even touch any of the servers.


WebRTC a high level look


- WebRTC brings real-time communication to the Web
- It builds a state-of-the-art media stack into Chrome
- It develops a new communication platform.

This is supported in
Chrome, Chrome for Android, Firefox and Opera, with native Java and Objective-C bindings.

There are three main categories of APIs in WebRTC:
- Acquiring audio and video 
- Communicating Audio and Video 
- Communicating Arbitrary Data 

Because of these three above, there are three primary objects 
- MediaStream (aka getUserMedia) 
- RTCPeerConnection
- RTCDataChannel 

MediaStream
Represents a single stream of synchronised audio and video. Each media stream contains tracks, and can contain multiple tracks.
To get access to the media, the navigator.getUserMedia function is used.

getUserMedia takes three arguments - a constraints object, a success callback and an error callback - like the below.

var constraints = {video: true};

function successCallback(stream)
{
    var video = document.querySelector("video");
    video.src = window.URL.createObjectURL(stream);
}

function errorCallback(error)
{
    console.log("navigator.getUserMedia error: ", error);
}

navigator.getUserMedia(constraints, successCallback, errorCallback);

The video constraint can actually specify more properties, such as width and height, like in the example below:

var qvgaConstraints = {
    video: {
        mandatory: {
            maxWidth: 320,
            maxHeight: 180
        }
    }
};

The stream, once captured, can be processed and then given onwards as a stream. There are a few examples of this on the apprtc Google I/O demo page; for e.g., the stream can be processed into ASCII form and given as the input. The stream can also be a screen share. In order to do that, the constraint has to be something like below:

var constraints = {
    video: {
        mandatory: {
            chromeMediaSource: 'screen'
        }
    }
};

navigator.getUserMedia(constraints, gotStream);

RTCPeerConnection
This is all about making a connection to a peer. The media data that is obtained is plugged into the RTCPeerConnection and sent to the peer. On the other end, it pops up from the peer connection, gets decoded, and can be plugged into a video element to display on the screen.

Behind the scenes, the RTCPeerConnection does a lot:

- Signal processing - processes the captured data to remove noise, echo, etc.
- Codec handling - deals with the actual compression and decompression of the audio and video
- Peer-to-peer communication - finding the actual peer-to-peer path, through firewalls, NATs and relays
- Security - encryption of the data so that the user data is fully secure
- Bandwidth management - if the user has a 2 Mbps connection the whole of it is used; if only 200 kbps, that's all it uses

The RTCPeerConnection sample could be something like below 

var pc = new RTCPeerConnection(null);
pc.onaddstream = gotRemoteStream;
pc.addStream(localStream);
pc.createOffer(gotOffer);

function gotOffer(desc)
{
    pc.setLocalDescription(desc);
    sendOffer(desc);
}

function gotAnswer(desc)
{
    pc.setRemoteDescription(desc);
}

function gotRemoteStream(e)
{
    attachMediaStream(remoteVideo, e.stream);
}


Saturday, August 23, 2014

What is MacPort ?



The MacPorts Project is an open-source community initiative to design an easy-to-use system for compiling, installing and upgrading either command-line, X11 or Aqua-based open-source software on the OS X operating system. To that end, it provides the command-line driven MacPorts software package under a BSD 3-clause license, and through it easy access to thousands of ports that greatly simplify the task of compiling and installing open-source software on the Mac.

MacPorts provides a single software tree that attempts to track the latest release of every software title (port) it distributes, without splitting them into stable vs unstable branches, targeting mainly the current OS X release and the immediately previous two (OS X 10.8 Mountain Lion and 10.7 Lion). There are around 19951 ports in the tree, distributed among 83 different categories, and more are being added on a regular basis.


MacPorts is conceptually divided into two parts: the infrastructure, known as MacPorts base, and the set of available ports. A port is a set of specifications contained in a Portfile that defines an application, its characteristics, and any files or special instructions required to install it. This allows a developer to use a single command to tell MacPorts to automatically download, compile and install applications and libraries.

Installing and running MacPorts requires the Xcode command line utilities. Below are a few main commands that can help a developer.

sudo port selfupdate => Updates the port files to the latest version. This is essential especially for applications that change frequently. If the port file is old and the actual vendor file is new, this will result in checksum errors unless the ports tree is updated to the latest version.

port version will give the installed MacPorts version.

To uninstall and reinstall MacPorts, do the below. This should not ideally be required in most cases, but depending on the situation one may have to take this hard step as well.

sudo port -fp uninstall installed

This removes most of the directories, though not all of them. To remove MacPorts in its entirety, do the below:

sudo rm -rf \
    /opt/local \
    /Applications/DarwinPorts \
    /Applications/MacPorts \
    /Library/LaunchDaemons/org.macports.* \
    /Library/Receipts/DarwinPorts*.pkg \
    /Library/Receipts/MacPorts*.pkg \
    /Library/StartupItems/DarwinPortsStartup \
    /Library/Tcl/darwinports1.0 \
    /Library/Tcl/macports1.0 \
    ~/.macports

If one wants to enable debug output during selfupdate, the below can be used:

sudo port -d selfupdate 

If one wants to update the MacPorts base only and not sync the port files, the below can be done:

sudo port -d selfupdate --nosync

The port doctor command checks for possible environment problems on the running machine:

port doctor 

port reclaim uninstalls any unused ports and cleans up unneeded files.

port list 

This lists all the ports available. The list is very, very long; instead, one can use port search.

Some of the common formats for search are: 

port search --name --glob 'php*'

port search --name --line --glob 'php*' 

port search --name --line --regex '^php\d*$'

port search red

Checking dependencies of a port, e.g.:

port deps apache2 

Variants of a port 

port variants apache2 

Port installation with verbose output

sudo port -v install apache2 

Cleaning a port (e.g. apache2)
sudo port clean apache2

Uninstalling a port 

sudo port uninstall libcomerr 

Port Contents 
This displays the list of all files that have been installed by a given port. 

port contents xorg-renderproto

display all installed ports 

port installed 

Check outdated ports 

port outdated 

Port upgrade

sudo port upgrade outdated 

Checking port dependencies

port deps openssl





Mac permissions and editing


ls with the -l option will give the list of files and folders with their permissions:

ls -l 

As an example, it gave output like the below:

-rw-r--r--  1 root  admin  1941 Oct 10  2013 archive_sites.conf

The first character here is -, which indicates that it is a regular file; d would indicate a directory, and l would indicate a symbolic link.

The next three characters specify the owner's permissions:

- indicates no access, r indicates read access, w indicates write access, and x indicates execute (or folder browsing) access.

The next three-character set indicates the group permissions. In this case it is r--, which indicates read-only access.

The last set indicates everyone else's permissions; again it is r--, which is read-only access for everyone else.

The next element indicates the number of hard links for this file or folder; in this case it is 1.
Then comes the user name, followed by the group the user is assigned to.

Each entity's permissions (user, group and everyone else) can also be expressed as an octal value: 0 means no access, 1 indicates execute, 2 indicates write and 4 indicates read; the values are added together (for e.g. 7 = read + write + execute).

To change the ownership via the command line, the Mac terminal uses the chown command.

The chown command accepts the username and group and then the item's path, for e.g. like the below:

sudo chown myname:admin /opt/local/etc/macports/macports.conf 

To change permissions via the command line, chmod is used.

When using the chmod command, specify the account type (u for owner, g for group, and o for everyone else),
a modifier (+ for adding, - for removing, = for setting exactly) and the privileges,

for e.g. 

sudo chmod u=rwx /opt/local/etc/macports/macports.conf

The above command gives the owner read, write and execute access to the macports.conf file.

The below command would give the same level of permissions to all the entities, i.e. owner, group and everyone else:

sudo chmod ugo=rwx /opt/local/etc/macports/macports.conf

Please note that in some cases we may also need access to the folders that lead to this file in order to get full access to it. In that case, we also need to do the below:

sudo chmod ugo=rwx /opt/local/etc/macports

To recursively change the ownership of a directory, the below can be tried:

sudo chown -Rv username directory


Tuesday, August 19, 2014

Creating iOS Framework - Part II

In this part, the plan is to create the framework. Below are some defining properties of a framework:

- The directory structure - A framework has a special directory structure that is recognised by Xcode. To create this directory structure, we can create a build script.
- The slices - Currently, when we build the library, it is only for the current architecture. For a framework to be useful, it needs to include builds for all the architectures on which it needs to run.
To do this, we can create a new product which will build the required architectures and place them in the framework.

The framework has a special directory structure like given below. 

RWUIControls.framework
    Headers -> Versions/Current/Headers
    RWUIControls -> Versions/Current/RWUIControls
    Versions
        A
            Headers
                RWKnobControl.h
                RWUIControls.h
            RWUIControls
        Current -> A

In order to create a framework, we need a script that does it. The script can be added to the library target, and it looks like below.
To add the script, go to the Build Phases tab of the library project and add a new Run Script build phase.

The script basically needs to do the following:

- Create the framework folder structure up to the Versions directory
- Create the required symlinks
- Copy the public headers into the framework

# ensure that if any part of the script fails, the entire script fails
set -e
export FRAMEWORK_LOCN="${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework"

# create the path to the real headers
mkdir -p "${FRAMEWORK_LOCN}/Versions/A/Headers"

# create the required symlinks
/bin/ln -sfh A "${FRAMEWORK_LOCN}/Versions/Current"
/bin/ln -sfh Versions/Current/Headers "${FRAMEWORK_LOCN}/Headers"
/bin/ln -sfh "Versions/Current/${PRODUCT_NAME}" "${FRAMEWORK_LOCN}/${PRODUCT_NAME}"

# copy the public headers into the framework
/bin/cp -a "${TARGET_BUILD_DIR}/${PUBLIC_HEADERS_FOLDER_PATH}/" "${FRAMEWORK_LOCN}/Versions/A/Headers"

A symbolic link is nothing but a special kind of file that contains a reference to another file or directory in the form of an absolute or relative path, and that affects path resolution.
Once the above steps are over, we can build the target and see the framework appearing in the same Products folder. However, this doesn't have the library file inside.
To fix this, a few more steps are involved, given below.

Multi Architecture Build

By default, when the build is made in Release mode, it will build for 3 device architectures:

armv7 - used in the oldest devices that support iOS 7.0
armv7s - used in the iPhone 5 and 5C
arm64 - used in the iPhone 5S

However, it is also useful to build for the remaining two architectures for development purposes:

i386 - for the 32-bit simulator
x86_64 - used in the 64-bit simulator

The process of creating a binary for all 5 architectures produces a so-called fat binary.

This involves a bit more scripting, and below are the different phases in it.

Step 1: Create a framework aggregate target. In the library project, create an aggregate target. This can be done via Add Target -> Other -> Aggregate.
Once the aggregate target is added, it can be named Framework.

Step 2: In order for the library to be built before the framework is created, add the library as a target dependency of the aggregate target.

Step 3: Now add one more build phase, a Run Script phase; below are the main steps involved in this script.

In the script there is a build_static_library function, which takes an SDK argument and sets ONLY_ACTIVE_ARCH to NO. These arguments are passed to the xcrun xcodebuild command line utility.
The other important function is make_fat_library. It takes two libraries and combines them into one, using the lipo command:

xcrun lipo -create "${1}" "${2}" -output "${3}"


Once the fat library is built, we can see the architectures included in it with the commands below:

My-MacBook-Pro:vee24.framework username$ cd /Users/username/Desktop/RR/projects/MyProject/docs/VideoChat/SDK/12Sep2014/vee24.framework/
My-MacBook-Pro:vee24.framework username$ xcrun lipo -info vee24

That would print something like the below:

Architectures in the fat file: vee24 are: armv7 arm64


The entire script can just be copy-pasted into the Run Script phase, and the RW page is available for future reference at:

https://dl.dropboxusercontent.com/u/27466522/Per/samplecode/multi_platform_framework_script.rtf


References:
http://www.raywenderlich.com/65964/create-a-framework-for-ios

Friday, August 15, 2014

Creating framework in iOS - Part 1

There are two main approaches for distributing the logic to other application vendors:

1. Create a static library => The problem with this is that the header files have to be distributed explicitly, which is an awkward approach. Not very elegant.
2. Create a framework => This is a way to package a static library and its header files into a single structure.

Trying to create a framework along the route below:

- Build a basic static library project in Xcode
- Build an app with a dependency on this library project
- Discover how to convert this static library into a framework
- Package other resource files into a framework

A framework basically is a collection of resources; it collects a static library and its header files into a single structure that Xcode can easily incorporate into a project. 

On OS X, it is possible to create a dynamically linked framework. Through dynamic linking, frameworks can be updated easily and transparently without requiring applications to relink to them. At runtime, a single copy of the library's code is shared among all processes using it, thus reducing memory usage and improving system performance. This is powerful stuff.

On iOS, a custom framework cannot be added in the above manner, so the only dynamically linked frameworks are the ones Apple provides.

Due to this, statically linked frameworks are used, and they are a convenient way to package up a code base for re-use in different apps.
Since a framework is nothing really different from a static library - it is just a one-stop shop for the static library and its headers - the first thing to do is create a static library.

Decided to follow the exact same steps as mentioned by raywenderlich.com and do something different later.

Step 1: Create a static library project 
Used Xcode to create a static library project, and the name given was RWUIControls. This created a header and an implementation file by default. Since the tutorial intends every control to be accessible by importing just RWUIControls.h - the other specific UI control headers are imported into this header - the .m file doesn't do anything much here, so it was deleted from the project.

Step 2: The next major step is to make the header files publicly accessible. For this, a new build phase needs to be added, by focusing the target and selecting Editor -> Add Build Phase -> Add Copy Headers Build Phase.
This added a build phase; the Copy Headers build phase has Public and Private sections, and the RWUIControls.h file was added to the Public section.

Step 3: Including some functionality. RW provides a knob control implementation, and those files are now added to this project as a group. When the files were added to the project, it created the group, and the header file RWKnobControl.h was added to the Public headers section.

Also, since the developer using this framework will only be referring to RWUIControls.h, it is important that we import the knob control header into this header file as well. This way, the user of this framework doesn't have to import multiple header files.

Step 4: This step is to configure the build settings.
The first step here is to set the path to which the public header files will be copied. To set this, select the target and search for Public Headers in the Build Settings tab. Now set this to include/${PROJECT_NAME}. After entering it, this changes to include/RWUIControls.

The following are the other settings:

Dead Code Stripping: NO
Strip Debug Symbols During Copy: NO
Strip Style: Non-Global Symbols

Step 5:
In the tutorial, there is a way to make the library we created a dependency of another project. This is done to make sure framework development is done alongside a sample application, so in this step a dependent sample application is created.

This is not very complex. First we create a UI project, say a Single View Application. Then, in the Build Phases tab, add the RWUIControls library as a target dependency. Also add the library in the Link Binary With Libraries section; it may show up in red, but that is okay. This will make sure the library is built before the UI application is compiled.

To note: in order to have the library appear in the dependent project's listing, the project needs to be copied in by checking the "copy items into destination group's folder" option.

References: 
http://www.raywenderlich.com/65964/create-a-framework-for-ios



Tuesday, August 5, 2014

Dissecting RosyWriter - Part 1

This is a sample application which demonstrates how to use the AVFoundation framework to capture, process, preview and save video on iOS devices. 

When the application launches, it creates an AVCaptureSession with audio and video device inputs, and outputs for audio and video data. These outputs continuously supply frames of audio and video to the app via the captureOutput:didOutputSampleBuffer:fromConnection  delegate method. 

The app applies a very simple processing step to each video frame. Specifically it sets the green element of each pixel to zero, which gives the entire frame a purple tint. Audio frames are not processed. 

After a frame of video is processed, RosyWriter uses OpenGL ES2 to display it on the screen. This step uses the CVOpenGLESTextureCache API, new in iOS 5.0, for enhanced performance. 
When the user chooses to record a movie, an AVAssetWriter is used to write the processed video and un-processed audio to a QuickTime movie file. 

In RosyWriterViewController, it creates an instance of RosyWriterVideoProcessor and calls the setupAndStartCaptureSession method. It also creates an instance of RosyWriterPreviewView, which is an OpenGL display view. This view controller has a toggleRecording method which is used to start and stop recording.

Once recording begins, the processor sends recordingWillStart. It also calls back with recordingDidStart once recording has started.

In startRecording, below is the code:

- assetWriter is allocated like assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:(NSString *)kUTTypeQuickTimeMovie error:&error];

In the setupAndStartCaptureSession method in this class, the application does some important tasks:

- Create the preview buffer queue: err = CMBufferQueueCreate(kCFAllocatorDefault, 1, CMBufferQueueGetCallbacksForUnsortedSampleBuffers(), &previewBufferQueue);
- Create a movie writing queue, which is a dispatch queue: dispatch_queue_create("Movie writing queue", DISPATCH_QUEUE_SERIAL);
- If the capture session is not set up yet, set it up. In the capture session setup, the main tasks done are:
- Create a capture session with captureSession = [[AVCaptureSession alloc] init];
- Create the audio connection
- Create the video connection
- If recording has not started yet, start the recording.

Now the major operation is in the didOutputSampleBuffer method. This delivers the connection and the sample buffer that was received.
If it is the video connection, the application does the below few tasks:

- Get the frame rate
- Process the pixel buffer
- Enqueue the processed buffer into the preview buffer queue
- Call back with pixelBufferReadyForDisplay once the enqueue operation is done
- Write the sample buffer to the asset writer stream

References: 
https://developer.apple.com/library/ios/samplecode/RosyWriter/Introduction/Intro.html