Notes

Using the Accelerometer on Android
Before the dawn of smartphones, one of the few hardware components applications could interact with
was the keyboard. But times have changed and interacting with hardware components is becoming more
and more common.
Using gestures often feels more natural than interacting with a user interface through mouse and
keyboard. This is especially true for touch devices, such as smartphones and tablets. I find that using
gestures can bring an Android application to life, making it more interesting and exciting for the user.
In this tutorial, we'll use a gesture that you find in quite a few mobile applications, the shake gesture.
We'll use the shake gesture to randomly generate six Lottery numbers and display them on the screen
using a pretty animation.
1. Getting Started
Step 1: Project Setup
Start a new Android project in your favorite IDE (Integrated Development Environment) for Android
development. For this tutorial, I'll be using IntelliJ IDEA.
If your IDE supports Android development, it'll have created a Main class for you. The name of this class
may vary depending on which IDE you're using. The Main class plays a key role when your application is
launched. Your IDE should also have created a main layout file that the Main class uses to create the
application's user interface.
Since we're going to make use of a shake gesture, it's a good idea to lock the device's orientation. This
will ensure that the application's user interface isn't constantly switching between portrait and landscape.
Open the project's manifest file and set the screenOrientation option to portrait.
<activity android:name="com.Lottery.Main"
          android:screenOrientation="portrait"
          android:label="@string/app_name">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
Step 2: Setting Up the Sensor
With our project set up, it's time to get our hands dirty and write some code. At the moment, the main
activity class has an onCreate method in which we set the main layout by invoking setContentView as
shown below.
public class Main extends Activity {

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }
}
Depending on the IDE that you're using, you may need to add a few import statements to Main.java, the
file in which your Main class lives. Most IDEs will insert these import statements for you, but I want to
make sure we're on the same page before we continue. The first import statement, import
android.app.Activity, imports the Activity class while the second import statement, import
android.os.Bundle, imports the Bundle class. The third import statement, com.example.R, contains the
definitions for the resources of the application. This import statement will differ from the one you see
below as it depends on the name of your package.
import android.app.Activity;
import android.os.Bundle;
import com.example.R;
In the next step, we'll leverage the SensorEventListener interface, which is declared in the Android SDK.
To use the SensorEventListener interface, the Main activity class needs to implement it as shown in the
code snippet below. If you take a look at the updated Main activity class, you'll find that I use the
implements keyword to tell the compiler that the Main class implements the SensorEventListener
interface.
public class Main extends Activity implements SensorEventListener {

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }
}
To use the SensorEventListener interface, you need to add another import statement as shown below.
Most IDEs will intelligently add the import statement for you so you probably won't have to worry about
this.
import android.hardware.SensorEventListener;
The moment you update the Main class implementation as shown above, you'll see a few errors pop up.
This isn't surprising, since we still need to implement the two required methods of the
SensorEventListener interface.
If you're using IntelliJ IDEA, you should be prompted to add these required methods when you click the
error. If you're using a different IDE, this behavior may be different. Let's add the two required methods
by hand as shown in the code snippet below. Make sure to add these methods in the Main class and
outside of the onCreate method.
@Override
public void onSensorChanged(SensorEvent event) {

}

@Override
public void onAccuracyChanged(Sensor sensor, int accuracy) {

}
Let's take a look at the onSensorChanged method. We will be using this method to detect the shake
gesture. The onSensorChanged method is invoked every time the built-in sensor detects a change. This
method is invoked repeatedly whenever the device is in motion. To use the Sensor and SensorEvent
classes, we add two additional import statements as shown below.
import android.hardware.Sensor;
import android.hardware.SensorEvent;
Before we implement onSensorChanged, we need to declare two private variables in the Main class,
senSensorManager of type SensorManager and senAccelerometer of type Sensor.
private SensorManager senSensorManager;
private Sensor senAccelerometer;
The SensorManager class is declared in android.hardware.SensorManager. If you're seeing any errors pop
up, double-check that the SensorManager class is imported as well.
import android.hardware.SensorManager;
In the onCreate method, we initialize the variables we've just declared and register a listener. Take a look
at the updated implementation of the onCreate method.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);

    senSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
    senAccelerometer = senSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    senSensorManager.registerListener(this, senAccelerometer, SensorManager.SENSOR_DELAY_NORMAL);
}
To initialize the SensorManager instance, we invoke getSystemService to fetch the system's
SensorManager instance, which we in turn use to access the system's sensors. The getSystemService
method is used to get a reference to a service of the system by passing the name of the service. With the
sensor manager at our disposal, we get a reference to the system's accelerometer by invoking
getDefaultSensor on the sensor manager and passing the type of sensor we're interested in. We then
register the listener by invoking registerListener, one of the SensorManager class's public methods. This
method accepts three arguments: the listener (our activity, which implements SensorEventListener), a
sensor, and the rate at which sensor events are delivered to us.
public class Main extends Activity implements SensorEventListener {

    private SensorManager senSensorManager;
    private Sensor senAccelerometer;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        senSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
        senAccelerometer = senSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        senSensorManager.registerListener(this, senAccelerometer, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent sensorEvent) {

    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {

    }
}
There are two other methods that we need to override, onPause and onResume. These are lifecycle
methods of the Activity class that we override in Main. It's good practice to unregister the sensor
listener when the application hibernates and to register it again when the application resumes. Take a
look at the code snippets below to get an idea of how this works in practice.
@Override
protected void onPause() {
    super.onPause();
    senSensorManager.unregisterListener(this);
}

@Override
protected void onResume() {
    super.onResume();
    senSensorManager.registerListener(this, senAccelerometer, SensorManager.SENSOR_DELAY_NORMAL);
}
Step 3: Detecting the Shake Gesture
We can now start to focus on the meat of the application. It will require a bit of math to figure out when a
shake gesture takes place. Most of the logic will go into the onSensorChanged method. We start by
declaring a few variables in our Main class. Take a look at the code snippet below.
private long lastUpdate = 0;
private float last_x, last_y, last_z;
private static final int SHAKE_THRESHOLD = 600;
Let's now zoom in on the implementation of the onSensorChanged method. We grab a reference to the
Sensor instance using the SensorEvent instance that is passed to us. As you can see in the code snippet
below, we double-check that we get a reference to the correct sensor type, the system's accelerometer.
@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    Sensor mySensor = sensorEvent.sensor;

    if (mySensor.getType() == Sensor.TYPE_ACCELEROMETER) {

    }
}
The next step is to extract the device's acceleration along the x, y, and z axes. Take a look at the image
below to better understand what I'm referring to. The x axis defines lateral movement, while the y axis
defines vertical movement. The z axis is a little trickier, as it defines movement in and out of the plane
defined by the x and y axes.
The system's sensors are incredibly sensitive. When holding a device in your hand, it is constantly in
motion, no matter how steady your hand is. The result is that the onSensorChanged method is invoked
several times per second. We don't need all this data, so we need to make sure we only sample a subset of
the data we get from the device's accelerometer. We store the system's current time (in milliseconds) in
curTime and check whether more than 100 milliseconds have passed since the last time
onSensorChanged was invoked.
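The rest of the implementation isn't reproduced in these notes, but putting the pieces together, a plausible sketch of onSensorChanged looks like the snippet below. It uses the lastUpdate, last_x, last_y, last_z, and SHAKE_THRESHOLD fields declared earlier; the exact speed formula and the 10000 scaling factor are assumptions on my part rather than something taken from the text above.

@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    Sensor mySensor = sensorEvent.sensor;

    if (mySensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        // The current acceleration along the x, y, and z axes.
        float x = sensorEvent.values[0];
        float y = sensorEvent.values[1];
        float z = sensorEvent.values[2];

        long curTime = System.currentTimeMillis();

        // Only sample the accelerometer every 100 milliseconds.
        if ((curTime - lastUpdate) > 100) {
            long diffTime = (curTime - lastUpdate);
            lastUpdate = curTime;

            // Rough measure of how quickly the acceleration changed since the last sample.
            float speed = Math.abs(x + y + z - last_x - last_y - last_z) / diffTime * 10000;

            if (speed > SHAKE_THRESHOLD) {
                // A shake was detected; generate and animate the lottery numbers here.
            }

            last_x = x;
            last_y = y;
            last_z = z;
        }
    }
}

The speed value approximates how violently the device moved between two samples; when it exceeds SHAKE_THRESHOLD, we treat the movement as a shake gesture and can trigger the lottery number animation.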
I've shown you how the accelerometer works and how you can use it to detect a shake gesture. Of course,
there are many other use cases for the accelerometer. With a basic understanding of detecting gestures
using the accelerometer, I encourage you to experiment with the accelerometer to see what else you can
do with it.
Difference between Google's Android and Apple's iOS
Google's Android and Apple's iOS are operating systems used primarily in mobile technology, such as
smartphones and tablets. Android, which is Linux-based and partly open source, is more PC-like than
iOS, in that its interface and basic features are generally more customizable from top to bottom. However,
iOS' uniform design elements are sometimes seen as being more user-friendly.
You should choose your smartphone and tablet systems carefully, as switching from iOS to Android or
vice versa will require you to buy apps again in the Google Play or Apple App Store. Android is now the
world’s most commonly used smartphone platform and is used by many different phone manufacturers.
iOS is only used on Apple devices, such as the iPhone.
Comparison chart: Android vs. iOS

Widgets
  Android: Yes
  iOS: No, except in Notification Center
Company/Developer
  Android: Google
  iOS: Apple Inc.
OS family
  Android: Linux
  iOS: OS X, UNIX
Customizability
  Android: A lot; can change almost anything
  iOS: Limited unless jailbroken
Initial release
  Android: September 23, 2008
  iOS: July 29, 2007
Programmed in
  Android: C, C++, Java
  iOS: C, C++, Objective-C
Dependent on a PC or a Mac
  Android: No
  iOS: No
Easy media transfer
  Android: Depends on model
  iOS: With desktop application
Source model
  Android: Open source
  iOS: Closed, with open source components
Open source
  Android: Kernel, UI, and some standard apps
  iOS: The iOS kernel is not open source but is based on the open-source Darwin OS
Call features supported
  Android: Auto-respond
  iOS: Auto-respond, call-back reminder, do not disturb mode
Internet browsing
  Android: Google Chrome (or Android Browser on older versions; other browsers are available)
  iOS: Mobile Safari (other browsers are available)
Available on
  Android: Many phones and tablets, including Kindle Fire (modified Android), LG, HTC, Samsung, Sony, Motorola, Nexus, and others
  iOS: iPod Touch, iPhone, iPad, Apple TV (2nd and 3rd generation)
Interface
  Android: Touch screen, smartwatch
  iOS: Touch screen
Messaging
  Android: Google Hangouts
  iOS: iMessage
Voice commands
  Android: Google Now (on newer versions)
  iOS: Siri
Maps
  Android: Google Maps
  iOS: Apple Maps
Video chat
  Android: Google Hangouts
  iOS: FaceTime
App store
  Android: Google Play (1,000,000+ apps); other app stores like Amazon and GetJar also distribute Android apps (unconfirmed ".APK's")
  iOS: Apple App Store (1,000,000+ apps)
Market share
  Android: 81% of smartphones and 3.7% of tablets in North America (as of Jan '13), and 44.4% of tablets in Japan (as of Jan '13); in the United States in Q1 2013, 52.3% of phones and 47.7% of tablets
  iOS: 12.9% of smartphones and 87% of tablets in North America (as of Jan '13), and 40.1% of tablets in Japan (as of Jan '13)
Available language(s)
  Android: 32 languages
  iOS: 34 languages
Latest stable release
  Android: Android 4.4 KitKat (October 2013)
  iOS: 7.1 (March 10, 2014)
Device manufacturer
  Android: Google, LG, Samsung, HTC, Sony, ASUS, Motorola, and many more
  iOS: Apple Inc.
Upcoming releases/Release dates
  Android: Unknown
  iOS: Unknown
Working state
  Android: Current
  iOS: Current
Website
  Android: android.com
  iOS: apple.com
Interface
iOS and Android both use touch interfaces that have a lot in common: swiping, tapping and pinch-and-zoom.
Both operating systems boot to a home screen, which is similar to a computer desktop. While an
iOS home screen only contains rows of app icons, Android allows the use of widgets, which display
auto-updating information such as weather and email. The iOS user interface features a dock where users can
pin their most frequently used applications.
A status bar runs across the top on both iOS and Android, offering information such as the time, WiFi or
cell signal, and battery life; on Android the status bar also shows the number of newly received emails,
messages and reminders.
User experience
Pfeiffer Report released in September 2013 rates iOS significantly better than Android on cognitive load
and user friction.
Apps available on iOS vs Android
Android gets apps from Google Play, which currently has 600,000 apps available, most of which will run
on tablets. However, some Android devices, such as the Kindle Fire, use separate app stores that have a
smaller selection of apps available. Many originally iOS-only apps are now available for Android,
including Instagram and Pinterest, and Google’s more open app-store means other exclusive apps are also
available, including Adobe Flash Player and BitTorrent. Android also offers access to Google-based apps,
such as Youtube and Google Docs.
The Apple app store currently offers 700,000 apps, 250,000 of which are available for the iPad. Most
developers prefer to develop games for iOS before they develop for Android. Since a recent update, the
YouTube app no longer comes pre-installed on iOS, but iOS still offers some exclusive apps, including the
popular game Infinity Blade and Twitter client Tweetbot.
The bottom line when comparing Google and Apple's app stores is that most popular apps are available for
both platforms. But for tablets, there are more apps designed specifically for the iPad, while Android tablet
apps are often scaled-up versions of Android smartphone apps. Developers at startups often focus on one
platform (usually iOS) when they first launch their smartphone app because they do not have the resources to
serve multiple platforms from the get-go. For example, the popular Instagram app started with iOS and
its Android app came much later.
Stability of Apps and the Operating System
The Crittercism Mobile Experience Report published in March 2014 ranked Android KitKat as more
stable than iOS 7.1. Other findings from the report include:
Android 2.3 Gingerbread has the highest total crash rate, at 1.7%. Other versions of Android (Ice
Cream Sandwich, Jelly Bean, and KitKat) have a crash rate of 0.7%.
iOS 7.1 has a crash rate of 1.6%, and the rates for iOS 7.0 and iOS 5 are 2.1% and 2.5% respectively.
Phone versions of both Android and iOS are more stable than their tablet versions.
Crash rates for apps vary by category: games are most likely to crash (4.4% crash rate), while e-commerce apps have the lowest crash rate at 0.4%.
Software upgrades
Although Google does update Android frequently, some users may find that they do not receive the
updates on their phone, or even purchase phones with out-of-date software. Phone manufacturers decide
whether and when to offer software upgrades. They may not offer an upgrade to the latest version of
Android for all the phones and tablets in their product line. Even when an upgrade is offered, it is usually
several months after the new version of Android has been released.
This is one area where iOS users have an advantage. iOS upgrades are generally available to all iOS
devices. There could be exceptions for devices older than three years, or for certain features like Siri,
which was available for iPhone 4S users but not for older versions of iPhone. Apple cites hardware
capability as the reason some older devices may not receive all new features in an upgrade.
Device Selection
A wide variety of Android devices are available at many different price points, sizes and hardware
capabilities.
iOS is only available on Apple devices: the iPhone as a phone, the iPad as a tablet, and the iPod Touch as
an MP3 player. These tend to be more expensive than equivalent hardware using Android.
Call Features
Android allows the user to send one of a number of self-composed texts as autoreplies when declining a
call.
iOS's phone app has many abilities, including the ability to reply to a phone call with a canned text
message instead of answering, or to set a callback reminder. It also has a Do Not Disturb mode.
Messaging
Android allows users to log onto GTalk for instant messages. iOS does not offer a native way to chat with
non-Apple users. Users can message other Apple users using iMessage, or use apps from Google for GTalk
and Microsoft for Skype.
Video Chat
Google Hangouts on Android can also be used for video chat, allowing users to chat over either 3G or
Wi-Fi. iOS uses Facetime, which can place video calls over both 3G and WiFi. However, it only allows
users to communicate with other Apple devices.
Voice Commands on Android vs iOS
iOS uses Siri, a voice-based virtual assistant, to understand and respond to both dictation as well as
spoken commands. Siri includes many features, such as reading sports scores and standings, making
reservations at restaurants and finding movie times at the local theater. You can also dictate texts and
emails, schedule calendar events, and interface with car audio and navigation.
Android offers a similar assistant, Google Now, which features the above abilities, plus can keep track of
your calendar and give verbal reminders when it is time to leave. It allows for voice search and dictation.
Maps
Apps like Google Maps, Waze and Bing are available for both iOS and Android. When Google released
its maps app for iOS in December 2012, the iOS version surpassed the version available for Android in
terms of features, design and ease of use. The Android version is not expected to lag behind for long. Apple's
own mapping app, which is bundled with every iOS device, was widely panned when it was launched
with iOS 6.
Web Browsing
Android uses Google Chrome as its web-browser, while iOS uses Safari. Both Internet browsers are
similar in quality and abilities and Google Chrome is also available for iOS. Safari is not available for
Android.
Words With Friends app on Android (L) & iOS (R)
Facebook integration
Android is integrated with Facebook, allowing users to update their statuses or upload pictures from many
apps, and to pull contact data from their Facebook friends.
iOS is also fully integrated with Facebook, allowing users to update their status and upload images from
various apps, sync their contacts with Facebook, and have their Facebook events automatically added to
their iOS Calendar. iOS now offers much deeper integration with Facebook and Twitter because of how
tightly they are woven into core apps on iOS.
Mobile payments
Android uses Google Wallet, an app that allows for mobile payments. Some Android phones are equipped
with an NFC chip (near-field communication) that is used for making wireless payments simply by
tapping the phone at the checkout counter. This service integrates with Google Wallet but is not available
on all Android phones or wireless carriers.
iOS offers Passbook, an app that collects in one place tickets, reward cards, and credit/debit cards. There
are no mobile payment features in iOS.
Security
Android’s applications are isolated from the rest of the system’s resources, unless a user specifically
grants an application access to other features. This makes the system less vulnerable to bugs, but
developer confusion means that many apps ask for unnecessary permissions. The most widespread
malware on Android sends text messages to premium-rate numbers without the user's knowledge, or
sends personal information to unauthorized third parties. As Android is the more popular smartphone
operating system, it is more likely to be the focus of attacks.
Malware writers are less likely to write apps for iOS, due to Apple's review of all the apps and
verification of the identity of app publishers. However, if an iOS device is jailbroken and apps installed
from outside Apple's store, it can be vulnerable to attacks and malware.
Building and Publishing Apps for iOS vs. Android
Android apps are programmed using C, C++ and Java. It is an "open" platform; anyone can download the
Android source code and Android SDK for free. Anyone can create and distribute Android apps for free;
users are free to download apps from outside the official Google Play store. There is, however, a one-time
$25 registration fee for developers who want to publish their apps (whether free or paid apps) on the
official Google Play store. Apps published on Google Play undergo a review by Google. The Android
SDK is available for all platforms - Mac, PC and Linux.
iOS apps are programmed using Objective-C. Developers must pay $99 every year for access to the iOS
SDK and the right to publish in Apple's app store. The iOS SDK is only available for the Mac platform.
Some app development platforms - such as Titanium Appcelerator and PhoneGap - offer a way to code
once (say in JavaScript and/or HTML) and have the platform convert it into "native" code for both
Android and iOS platforms.
UI Design for Android vs. iOS 7
In Beyond Flat, SeatGeek founder Jack Groetzinger outlines a lot of the differences in how Android and
iOS approach their design aesthetic and what it means for app developers. For example,
Buttons: Android buttons are usually monochromatic, with a tendency towards using iconography when
possible. The standard iOS 7 button is plain monochromatic text with no background or border. When
iOS 7 does use button borders, they tend to be quite simple.
The Action Bar and the Navigation Bar: The nav bar in iOS is usually just a Back button linking to the
previous screen. In Android, the navigation bar usually has several action buttons.
Intents: Intents on Android allow applications to flexibly interoperate with each other. For example, apps
can "register" themselves as capable of sharing, which lets the user share content to that app from any
other app, as the sketch below illustrates.
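As an illustration of that mechanism (this snippet is not part of the original comparison; it assumes it runs inside an Activity, and the message text is just an example), sharing text to whichever apps have registered for ACTION_SEND looks roughly like this:

import android.content.Intent;

// Inside an Activity: offer the text to every app that has registered
// an intent filter for ACTION_SEND with the text/plain MIME type.
Intent shareIntent = new Intent(Intent.ACTION_SEND);
shareIntent.setType("text/plain");
shareIntent.putExtra(Intent.EXTRA_TEXT, "Check out this article!");
startActivity(Intent.createChooser(shareIntent, "Share via"));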
The article outlines several other differences and is a great read.
The Bottom Line: Choosing between iOS and Android
To summarize the key pros and cons of Android and iOS:
iOS pros and cons
Massive app ecosystem: a distinct advantage for tablet apps; on smartphones, popular apps are usually
available for both platforms
Deeper integration with Facebook and Twitter: it is easier to post updates and share on social networks
using iOS than Android because of how deeply integrated these platforms are with iOS.
iOS-only apps like Passbook, FaceTime, and mobile payments app Square (available on iPhone 3GS, 4, 4S, 5
and up, but only for a limited number of Android phones)
Interface is locked down: Limited customization options for the home screens; only rows of app icons are
allowed. No third-party apps are pre-installed by the wireless carrier. Users can only install apps from the
App Store
Software upgrades: Apple offers software upgrades to all devices that have the hardware capable of
handling the new software.
Android pros and cons
Massive hardware selection: A large number of Android devices are available at various price points, with
varying hardware capabilities, screen sizes and features.
Highly customizable user experience: The home screen can be customized with not just app icons but
widgets that allow the user to stay connected or informed. Other examples include SwiftKey, which
modifies your Android smartphone’s keyboard, and apps that emulate older gaming consoles. Google has
fewer restrictions than Apple on what kinds of apps it allows in its Play store. Moreover, you can choose
to install Android apps from places other than the Google Play store.
Several prominent people have shifted from iPhone to Android. Android's connection to the Google
ecosystem of services is strong and arguably more useful compared with Apple's cloud services suite.
XCode Tutorial Practice 3: Storyboards, MapKit, View Navigation
We're taking a practical approach to learning iOS programming. This means lots of hands-on work and
practice!
In order to benefit the most from these lessons, you should open XCode and try to follow along.
What are Storyboards?
In Xcode 5, the project uses storyboards automatically. If you’re using Xcode 4, however, you have to
explicitly choose to use them in the new project settings dialog window.
We’ve already been using storyboards in the last few practice demos but if we never talked about them
explicitly, then here it is: Storyboards are a way to visually design your user interface, app flow and
transitions from view to view. I’m sure you’ve realized that for yourself already!
How to Use Storyboards
Let’s create a brand new XCode single-view project and make sure that the “Use Storyboard” option is
enabled. See below
Note: Xcode 5 users will automatically be using storyboards. The screenshots in this tutorial were taken
with Xcode 4 and storyboards but after this initial project settings dialog, Xcode 5 users will find that it’s
exactly the same!
If you forgot what all of the project creation options mean, please refer back to the lesson where we
created our first XCode project together.
When your XCode project is created, you’ll have a Storyboard file.
Click on that file now and your Editor Area will turn into the Interface Builder view for storyboards.
Creating a Navigation Controller
We’ll explain what a navigation controller is in the next lesson when we do a recap. For now, click on the
ViewController and then go up to the Editor Menu and choose “Embed in…” Navigation Controller.
If your Navigation Controller and View Controller are overlapping, you can double click an empty space
in the Storyboard to zoom out and then readjust the views. Note: In the zoomed out view, you won’t be
able to drag any UIElements from the Library Pane onto the view
Adding Another View Controller
Since our default single view application comes with a ViewController and view, it makes sense that in
order to have a second view, we would need a second ViewController and View right?
Right! So let’s search for ViewController in the Library Pane and click and drag it into your Storyboard.
Now you have two views and view controllers!
However, because the ViewController we dragged onto the Storyboard has auto-generated code behind
it, let's create our own UIViewController class and then tell Interface Builder that it should use the
one we created instead.
So right click on the root folder and select "New File…", "Objective-C Class", then in the create file
dialog, type in "MapViewController" and in the subclass box, put in UIViewController.
I’ll explain in the recap what subclassing means.
Now go to the Storyboard and select the second ViewController that you dragged in. In the Properties
inspector, there’s a tab with a field called “Custom Class” that will let you select the
“MapViewController” that we just created. Select it.
Now this ViewController and View represented by the Storyboard will be an instance of the
MapViewController class.
All you have to do is add the transition from one ViewController to the next.
Navigating Between Views
Storyboards make this part really easy.
In your first view, add a UIButton by searching for it in the Library pane and dragging the button onto the
view.
Note: You won’t be able to add UIElements into views if you’re in the zoomed out view. Double click in
an empty area of the Storyboard to zoom back in.
After you’ve added the button to the first ViewController, hold down the control key on your keyboard
and click and drag a line from the button to the second ViewController.
When you release your mouse, a small context menu will popup asking you how you want to transition.
Choose “Push” for now (We’ll explore the other options in the recap lesson).
You’ve just created a Segue (pronounced “seg-way”) between the Views!
Run your app and navigate between the two views!
Model View Controller Pattern In Action
Remember in the last recap, we talked about the model-view-controller pattern.
In this demo, let’s put that into action!
We’ll start by creating a new class.
Right click the root folder in your File Navigator and choose “New File…”, “Objective-C Class” from the
context menu.
In the new file creation dialog, name it “Location” and in the Subclass field, make sure it says
“NSObject”.
This class will represent a physical location.
Next we're going to create a class to be our model; it will be in charge of supplying our
ViewControllers with the data that our Views will display.
Once again, right click the root folder in your File Navigator and choose “New File…”, “Objective-C
Class”.
This time you’ll name it “LocationDataController” and make sure that the Subclass field says
“NSObject”.
Working on Location Class
Since the location class is going to represent a physical location, let’s give it some properties to store
information that a location would have.
Open up Location.h and type out the following properties (If you need a refresher, check the last tutorial
on properties).
@property (strong, nonatomic) NSString *address;

@property (strong, nonatomic) NSString *photoFileName;

@property (nonatomic) float latitude;

@property (nonatomic) float longitude;
Next, we’re going to LocationDataController.h and add the following method (If you need a refresher
on methods, check the last tutorial on how to declare and use methods):
#import <Foundation/Foundation.h>
#import "Location.h"

@interface LocationDataController : NSObject

- (Location*)getPointOfInterest;

@end
And then in LocationDataController.m, we’re going to implement that method like below:
1 #import "LocationDataController.h"
2
3 @implementation LocationDataController
4
5 - (Location*)getPointOfInterest
6 {
7
Location *myLocation = [[Location alloc] init];
8
myLocation.address = @"Philz Coffee, 399 Golden Gate Ave, San Francisco, CA 94102";
9
myLocation.photoFileName = @"coffeebeans.png";
10 myLocation.latitude = 37.781453;
11 myLocation.longitude = -122.417158;
12
13 return myLocation;
14 }
15
16 @end
This method will now return a hardcoded Location object. "Hardcoded" is just a term that means the data
is not coming from any data source; it's coded in and will always be the same. The opposite of
"hardcoded" is "dynamic", which means that the data could be anything.
Showing Data From The Model
Now, let’s make our initial ViewController display data from our model.
Go to the storyboard and add a UILabel and UIImageView to the initial view.
We're going to want to resize the UILabel to accommodate a few lines and, to make sure the
address wraps, set "Lines" to 0.
Now that we have the UIElements in place, you’ll want to expose them as properties to the
ViewController.
We’ve done this before with XIB files using the dual view Assistant Editor and it’s no different with
Storyboards (if you forgot how to do this, check this tutorial).
So click the first ViewController in the Storyboard, click the “Assistant Editor” button in the upper right
hand corner to go into dual view.
On the left pane, you should have the storyboard and on the right pane you should
have ViewController.h.
Now hold down control and click and drag from the UILabel to the right hand side.
Name the property “addressLabel”.
Now do the same for the UIImageView and call it “photoImageView”.
Now we’re ready to use the data to set the UIElements.
Go to ViewController.m and at the top, import the LocationDataController.h and Location.h file.
Declare a method called “viewDidAppear” and in it, let’s create a new instance of
LocationDataController, retrieve the data and set the properties of the UIElements.
1 #import "ViewController.h"
2 #import "LocationDataController.h"
3 #import "Location.h"
4
5 @interface ViewController ()
6
7 @end
8
9 @implementation ViewController
10
11 - (void)viewDidLoad
12 {
13 [super viewDidLoad];
14 // Do any additional setup after loading the view, typically from a nib.
15 }
16
17 - (void)viewDidAppear:(BOOL)animated
18 {
19 LocationDataController *model = [[LocationDataController alloc] init];
20 Location *poi = [model getPointOfInterest];
21
22 self.addressLabel.text = poi.address;
23 [self.photoImageView setImage:[UIImage imageNamed:poi.photoFileName]];
24 }
25
26 - (void)didReceiveMemoryWarning
27 {
28 [super didReceiveMemoryWarning];
29 // Dispose of any resources that can be recreated.
30 }
31
32 @end
In order for Xcode to find the photo, you'll have to include the image file (coffeebeans.png) in your Xcode project.
Then drag and drop the image into the File Navigator area of your XCode project and click “Finish” in
the dialog that pops up.
Run the app now and you should see this on your screen!
Adding a Map
In the second view we’ll add a map, so go back to the storyboard and in the Library Pane, search for
MapView.
Click and drag that onto the second View.
We also need to expose the MapView to the MapViewController as a property so that the
MapViewController will be able to reference and call the methods or set the properties of the MapView
element.
Following the same procedure as above, open up dual view and control-click drag the MapView
UIElement to the right pane where MapViewController.h is, and name the property "mapView".
Adding the MapKit Framework
We need additional code libraries known as frameworks to add mapping functionality to our app.
Go to the project node and the Editor Area will show you the Project Settings.
Click on the app you want to add frameworks for and then click the Summary tab.
For Xcode 5 users, it's the General tab.
Open up the “Linked Libraries and Frameworks” section and click the little “+” icon to bring up a list of
frameworks you can add.
Search and add MapKit.
Now you should see them in your File Navigator like this (I usually drag mine into the Frameworks
folder):
For Xcode 5 users, it's automatically added into the Frameworks folder.
Finally, go back to MapViewController.h and import the MapKit header file.
You should end up with the below in MapViewController.h:
#import <UIKit/UIKit.h>
#import <MapKit/MapKit.h>

@interface MapViewController : UIViewController

@property (strong, nonatomic) IBOutlet MKMapView *mapView;

@end
Setting The Map Location
Now go to MapViewController.m and in the "viewDidAppear" method, we're going to move the map
to the location of the point of interest by calling a special method of the map and passing in the latitude
and longitude of our desired location.
But first, at the top of MapViewController.m, import the Location and LocationDataController class
headers (right under #import "MapViewController.h"):
1 #import "LocationDataController.h"
2 #import "Location.h"
Type out a “viewDidAppear” method like below and inside of it, get the location from our
LocationDataController class and move the map to the vicinity of the point of interest.
- (void)viewDidAppear:(BOOL)animated
{
    LocationDataController *model = [[LocationDataController alloc] init];
    Location *poi = [model getPointOfInterest];

    CLLocationCoordinate2D poiCoordinates;
    poiCoordinates.latitude = poi.latitude;
    poiCoordinates.longitude = poi.longitude;

    MKCoordinateRegion viewRegion = MKCoordinateRegionMakeWithDistance(poiCoordinates, 750, 750);

    [self.mapView setRegion:viewRegion animated:YES];
}
After doing that, run your project and you should be able to navigate between the two views and see the
address and photo in one and a map of the point of interest in the other view.
context-constrained authorisation (CoCoA) framework for pervasive grid computing
The paper discusses access control implications when bridging Pervasive and Grid computing, and
analyses the limitations of current Grid authorisation solutions when applied to Pervasive Grid
environments. The key authorisation requirements for Pervasive Grid computing are identified and a
novel Grid authorisation framework, the context-constrained authorisation framework CoCoA, is
proposed. The CoCoA framework takes into account not only users' static attributes, but also their
dynamic contextual attributes that are inherent in Pervasive computing. It adheres to open Grid standards,
uses a modular layered approach to complement existing Grid authorisation systems, and inter-works with
other Grid security building blocks. A prototype implementation of the CoCoA framework is presented
and its performance evaluated.
CoCoA architecture design
5.1 High level design requirements
To support context-aware access control in a Pervasive Grid, which is typically run in a pervasive,
heterogeneous, large-scale and cross-institutional environment, the issues of interoperability,
compatibility, usability, scalability, and extensibility, in addition to context-awareness, should be
considered. For this reason, the following requirements have been identified for the design of the CoCoA
architecture:
(R1) CoCoA should use a loosely-coupled modular architecture that conforms to international standards
and specifications. The clear separation of CoCoA's functionality would potentially allow greater
interoperability with other software and middleware components. In other words, CoCoA should be
designed in a layered architecture to promote modularity and extensibility. In addition, CoCoA
components should adopt a service-oriented architecture, where each component works independently
and is able to expose its functionality to other Grid components, CoCoA components or other software
components through a well-defined interface.
(R2) CoCoA should be able to acquire, process and store contextual data from multiple sources for
authorisation purposes. This, in turn, requires that CoCoA adopt a standards-based communication
framework for discovering, querying and managing heterogeneous context sources such as software and
hardware sensors, pervasive computing devices and/or other forms of context providers. Furthermore,
contextual information should be stored in a suitable format that can easily be retrieved and processed.
(R3) CoCoA should be interoperable and compatible with current Grid authorisation solutions and
standards. The addition of a context-aware authorisation service should impose no, or minimal,
modification to existing Grid security solutions. To preserve the investments already made in Grid
solutions and to achieve maximum interoperability and compatibility with existing Grid authorisation
software, CoCoA should be designed as an extension to, rather than a replacement of, existing
authorisation systems. It should be sufficiently generic to act as an additional Grid security service that
provides a context-constrained authorisation service to existing Grid applications. In short, CoCoA
components should provide an open standards-based interface, adopt a standards-based contextual
attribute assertion message format, and adopt a standards-based authorisation policy language for
describing context-constrained authorisation policies.
(R4) CoCoA should provide security administrators with a convenient way to manage users, contextual
attributes and context providers. In other words, CoCoA should provide an intuitive administration
interface that can be remotely and securely accessed by administrators, and a logging functionality for
auditing purposes.
CoCoA components
5.3.1 Context authority
The Context Authority (CTXA) provides real-time acquisition of contextual data. Ideally, it should have
a generic yet lightweight communication interface to support a diverse array of sources or devices (e.g.
sensors, software agents and existing context-aware frameworks). Contextual data obtained by CTXA
may need to be transformed into a representation that can be understood by the underlying authorisation
decision engine. For example, an RFID signal from an RFID reader should be coded into some
meaningful contextual representation, e.g. the timestamp and source of the signal.
5.3.2 Context policy information point
The Context Policy Information Point (CTX PIP) is responsible for collecting a subject’s context attribute
assertion from the corresponding CTXA, extracting contextual data from the assertion and passing the
data to CTX PDP. CTX PIP functions in a similar way to a VOMS PIP. While VOMS PIP collects and
parses a subject’s VOMS credentials to retrieve the relevant attributes needed for authorisation, CTX PIP
retrieves a subject’s contextual attributes from a CTXA assertion.
5.3.3 Context policy decision point
The Context Policy Decision Point (CTX PDP) is in charge of making an authorisation decision to grant
or deny a subject’s access request for a particular resource. It should be able to retrieve contextual
attributes from CTX PIP and evaluate them against the corresponding authorisation policy in order to
come to an authorisation decision. CTX PDP is also responsible for informing CSS about a subject’s
authorisation detail so that CSS can keep track of the subject’s authorisation session.
5.3.4 Context session service
When contextual data changes in the midst of a granted access to a Grid resource, the resource provider
should be notified and react to any revised authorisation decision affected by the current state of the context.
The Context Session Service (CSS) is designed to coordinate with CTXA and CTX PDP, and they
collectively achieve this capability. During the initial authorisation of a subject, CSS receives the
authorisation decision from CTX PDP, and issues a subscription request to the subject’s CTXA that
monitors and maintains the contextual data for the subject. CSS then keeps track of the session that has
been authorised for this subject. Whenever the subject’s contextual data changes (even in the midst of the
session), CTXA will issue a notification message to notify CSS of this change. CSS will then send the
updated contextual data to CTX PDP for re-evaluation. Assuming that the subject has already been
granted access and is currently in the middle of a session to access the resource, if the PDP
returns a DENY decision to CSS, CSS will notify the concerned Grid resource provider through its
Notification API so that the resource provider can promptly terminate or suspend the access. However,
if the PDP returns an ALLOW authorisation decision, no notification is sent and the access is
allowed to continue.
5.3.5 Notification application programming interface
The Notification Application
Programming Interface (API) is designed as a means for a Grid resource to be notified whenever a
renewed authorisation decision is made based upon the latest context changes. The service responds to the
notification using a call-back function that enforces the new authorisation decision to deny or allow
further access to the resource. The design of the API imposes minimal changes or modifications to
existing Grid applications and services. Ideally, there should be different API bindings catering for the
most common Grid application programming languages such as Java, C, and Python. Additionally, these
APIs should also interface with common Grid execution management services.
5.4 CoCoA architecture
Having described the CoCoA architectural components, their functionality and design considerations, this
section details the integration of these components and explains how the components collectively provide
a dynamic context-constrained authorisation service. The CoCoA architecture consists of four functional
abstraction layers: the Context Monitoring Layer, the Context Storage and Management Layer, the
Context Distribution Layer and the Grid Application Layer. As shown in Fig. 4, the top three layers are
performed by CTXA, which is the core component of the CoCoA architecture. The fourth layer uses a
Globus Toolkit (GT) Interface to connect directly with CSS, CTX PIP, and CTX PDP.
5.4.1 Context monitoring layer
The Context Monitoring Layer provides the functionality to satisfy the design requirement (R2). There is
a wide range of different contextual attributes that may be considered for authorisation purposes, as listed
in Table 1. Different contextual attributes may have values provided from different sources or devices,
which are also known as context providers. The context providers monitor and send raw contextual data
to CTXA. We therefore refer to a context provider as a Monitoring Agent (MA). A generic communication
interface is provided between MAs and CTXA thus allowing a variety of sources to be connected to the
CoCoA architecture. The communication between an MA and CTXA is performed in two steps. Firstly,
the MA uses the API supplied by the device manufacturer to retrieve raw contextual data. Then the raw
data is sent to CTXA using the generic communication interface provided by CoCoA. For example, if an
RFID reader is used as a source of contextual data, then the MA for the RFID reader will be installed on
the device which is connected to the RFID reader. When a user authenticates using the RFID reader, the
MA reads the user’s RFID tag ID and sends this information to CTXA via the generic interface. MAs can
be deployed on authentication devices, sensor nodes or existing context-aware components to propagate
contextual data from these sources to CTXA.
5.4.2 Context storage and management layer
The primary function of the Context Storage and Management Layer is to manage contextual-related data
and metadata. At the heart of this layer is the Context Manager Service (CMS) that uses an RDBMS to
store all context related information managed in the administrative domain. The CMS database stores
information about registered users, events passed down from the Context Monitoring Layer, the bindings
between the users and their contextual data and a session table that keeps track of all active events of a
subject.
The Administration Module (see Fig. 4) contains all the functions that are necessary for an organisational
administrator to manage the users’ contextual data within CMS. This can be done either locally or
remotely through an intuitive Web interface. The administration Web interface satisfies the design
requirement (R4). It provides the functionality for registering users (Users table), registering MAs
(Context_provider table), and binding users to context attribute values (Context_Attribute table). The
User Module (see Fig. 4) can be used as a sign-on system that acts as a starting point for CoCoA to gather
contextual data for a particular user. Thereafter, contextual attributes relating to the user will be received
and stored by CMS until the user has logged out. The sign-on system can be a Web-based login portal or
other forms of integrated authentication systems, such as a smart-card system located at the entrance of
organisation’s premises. This allows the CoCoA system to be activated whenever a user arrives for work
or enters a smart lab, and later be deactivated when the user leaves. Once the user has successfully logged
in, CMS creates a user session entry in the database. Hereafter, whenever CMS receives an event message
from the Context Monitoring Layer, it looks up the Context_Attribute table to identify which user this
contextual data is related to before updating the user’s session information accordingly. Additionally,
CMS also sends a notification message to the Context Distribution Layer.
5.4.3 Context distribution layer
The Context Distribution Layer consists of three functional modules: Context Filtering and Privacy
Service (CFPS), Context Notification Service (CNS) and Context Assertion Service (CTX AS). These
functional modules collectively perform the task of securely distributing contextual data to other CoCoA
components (i.e. CSS and CTX PIP) in the Grid Application Layer. CFPS acts as a filter to censor the
release of contextual attribute values that are sensitive or confidential to nontrustworthy entities. CNS is
responsible for notifying CSS and CTX PIP of contextual data changes. CNS uses an observer
(publish/subscribe) notification model [41], where CTXA is the Subject (publisher), and the attribute
requester is the Observer (subscriber).
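As a rough illustration of that publish/subscribe arrangement (the class and method names below are hypothetical and not taken from the paper), the interaction between CTXA and its subscribers could be sketched in Java as follows:

import java.util.ArrayList;
import java.util.List;

// Illustrative only: ContextObserver and ContextSubject are hypothetical names.
// The sketch shows the observer (publish/subscribe) model the CNS is described as using,
// where CTXA publishes context changes and attribute requesters subscribe to them.
interface ContextObserver {
    void contextChanged(String subjectDn, String attribute, String newValue);
}

class ContextSubject {
    private final List<ContextObserver> subscribers = new ArrayList<>();

    // An attribute requester (e.g. CSS or CTX PIP) subscribes for updates.
    void subscribe(ContextObserver observer) {
        subscribers.add(observer);
    }

    // Called when a monitoring agent reports a change in contextual data;
    // every subscriber is notified of the new attribute value.
    void publish(String subjectDn, String attribute, String newValue) {
        for (ContextObserver o : subscribers) {
            o.contextChanged(subjectDn, attribute, newValue);
        }
    }
}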
CTX AS issues SAML assertion messages to assert a subject's contextual attributes and values. Figure 5
shows a SAML assertion snippet that contains media and location context attributes. The
<AttributeStatement> block is used to enclose contextual elements that are associated with a subject, while
contextual attributes are inserted within the <AttributeName> tag and their values within
the <AttributeValue> tag. The reason for using SAML for attribute assertions is its ability to provide
authenticity and integrity protections for the asserted messages through the use of digital signatures. CTX
AS appends the signer’s (i.e. the organisational CTXA) X.509 certificate together with the signed
contextual assertion. If the signature is successfully verified, the attribute requester can be assured that the
assertion message was indeed generated by the claimed CTX AS and that the contents within the assertion
have not been tampered with in transit. Additionally, we have also used the SAML condition elements to
express the validity period of a context attribute assertion. For example, a SAML assertion that asserts the
location of a user is only valid during the period specified by the NotBefore and NotOnOrAfter elements.
To further explain the authorisation flow within this layer, we begin with a subject who initially tries to
access a Grid resource in another organisation. The CTX AS in the subject’s home organisation will
receive a context attribute request (via the Grid Application Layer) from the resource provider’s CTX
PIP. This request contains the subject’s X.509 DN. The CTX AS then uses the subject’s DN as a search
key to query the CMS for the relevant subject’s contextual attributes. The returned attribute values within
the subject’s session are then sent to the CTX AS via the CFPS to filter subject attributes that are
considered as privacy sensitive. Our basic prototype uses a ‘safe-list’ to explicitly specify what attributes
can be sent out to which organisation. Once the ‘filtered’ contextual attribute values are received, the
CTX AS then generates a SAML assertion containing these attributes and sends it to the requesting CTX PIP. Context
attribute assertions will become invalid whenever one or more of the attributes within the assertions have
changed. In order to propagate these changes, the CNS allows one or more attribute requesters to monitor
a subject’s context attribute values. This is done by having the attribute requesters subscribe to the CNS.
The subscription message will prompt the CNS to keep an internal record of the subscriber along with the
subject’s session in which the subscriber is interested. Thereafter, whenever the CNS receives a
User_Session_Change message from the CMS, it will look up the list of subscribers and identify which of
the subscribers need to be notified of the changes.
5.4.4 Grid application layer
The Grid Application Layer consists of components that were developed as GT4-compatible services in
line with the design requirement (R3). Figure 6 gives an overview of these components and their
interactions.
In summary, the working of the CoCoA framework can be described as follows:
(1) A subject tries to access a Grid resource. This request will be intercepted by the PEP of the Grid
resource.
(2) The PEP invokes the CTX PIP (via the GT4 authorisation callout) to retrieve all contextual attributes
of the subject.
(3) The CTX PIP issues a SAML assertion request to the subject’s CTXA (the URI of the CTXA is
embedded into the subject’s proxy credential extension). If the request is valid, the subject’s CTXA
returns a SAML assertion that contains all the active contextual attributes of the subject.
(4) The CTX PIP then extracts the contextual attributes and data from the assertion, verifies their
authenticity, and, if the verification is positive, passes the subject’s contextual attribute values to the CTX
PDP.
(5) The CTX PDP uses the subject’s contextual data in conjunction with other PEP-supplied information
(e.g. subject name and requested operation) to make an authorisation decision. This decision is then
returned to the PEP for enforcement.
(6) At the same time, the CTX PDP sends the subject’s details (subject’s identity, CTXA URI,
authorisation decision, timestamp and requested resource) to the CSS.
(7) The CSS uses these details to issue a subscription message to the subject’s CTXA in order to be
notified of any subsequent contextual data change in relation to the subject.
(8) Whenever the CSS receives a notification message signalling that one or more of the subject’s
contextual attributes have changed their values, the CSS will request a revised authorisation decision from
the CTX PDP.
(9) This revised authorisation decision will then be sent to the Grid resource through the Notification API.
The new authorisation decision can then be enforced using a programmer-defined call-back function
provided by the Notification API.
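To make the call-back idea concrete, here is a purely hypothetical sketch in Java (none of these names come from the paper) of the kind of programmer-defined call-back a Grid resource might register with the Notification API:

// Hypothetical sketch only: the interface and method names are illustrative,
// not part of the CoCoA framework's published API.
interface AuthorisationCallback {
    // Invoked by the Notification API when CTX PDP issues a revised decision.
    void onDecision(String subjectDn, boolean allowed);
}

class GridResourceHandler implements AuthorisationCallback {
    @Override
    public void onDecision(String subjectDn, boolean allowed) {
        if (!allowed) {
            // Suspend or terminate the subject's running session on this resource.
            suspendAccess(subjectDn);
        }
    }

    private void suspendAccess(String subjectDn) {
        System.out.println("Access suspended for " + subjectDn);
    }
}

The Grid resource registers such a handler once; thereafter the Notification API can invoke onDecision whenever a revised authorisation decision arrives, so the resource can enforce it without polling the CTX PDP.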