Posted by admin on 16/03/2015 at 04:57:23 (UTC)

AWS JS SDK - The Canonical Angular Guide

One of the biggest benefits of building a single-page app (SPA) is the ability to host flat files rather than needing to build and maintain back-end infrastructure.

However, most of the applications we build need to be powered by a back-end server with custom data. There is a growing number of options that let us focus on building only our front-end code and leave the back end alone.

Late last week, Amazon released a new option that allows us to build serverless web applications right from the browser: the AWS JavaScript SDK.

Their browser-based (and server-side with NodeJS) SDK allows us to confidently host our applications and interact with production-grade back-end services.

Now, it’s possible to host our application stack entirely on Amazon infrastructure, using S3 to host our application and files, DynamoDB as a NoSQL store, and other web-scale services. We can even securely accept payments from the client side and get all the benefits of the Amazon CDN.

With this release, the JavaScript SDK now allows us to interact with a large portion of the dozens of AWS services. These services include:

DynamoDB

The fast, fully managed NoSQL database service that scales without practical limits, automatically replicating data in triplicate, with secure access controls.

Simple Notification Service (SNS)

The fast, flexible, fully managed push notification service that allows us to push messages to mobile devices as well as to other services, such as email or even Amazon's own Simple Queue Service (SQS).

Simple Queue Service (SQS)

The fast, reliable, fully managed queue service. It supports large message payloads, so we can fully decouple our application's components from one another using a common queue.

Simple Storage Service (S3)

The web-scale and fully managed data store that allows us to store large objects (up to 5 terabytes) with an unlimited number of objects. We can use S3 to securely store encrypted and protected data all over the world. We’ll even use S3 to host our own Angular apps.

Security Token Service (STS)

The web service that allows us to request temporary, limited-privilege credentials for IAM users. We won't cover it in depth, but it does provide a nice interface for performing limited, secure operations on our data.

The full list of services can be found on the official project here.

AWSJS + Angular

In this section, we'll demonstrate how to get our applications up and running on the AWS JS stack in minutes.

To do so, we're going to create a mini, bare-bones version of Gumroad that allows our users to upload screenshots and sell them by integrating with the fantastic Stripe API.

We cannot recommend these two services enough; this mini-demo is not intended to replace them, only to demonstrate the power of Angular and the AWS API.

To create our product, we’ll need to:

  • Allow users to log in to our service and store their unique emails
  • Allow users to upload files that are associated with them
  • Allow buyers to click on images and present them with an option to buy the uploaded image
  • Take credit card charges and accept money, directly from a single-page Angular app

We’ve included the entire source of the article at http://d.pr/aL9q.

Getting started

We’ll start with a standard structured index.html:

<!doctype html>
<html>
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.0-rc.3/angular.min.js"></script>
  <script src="http://code.angularjs.org/1.2.0-rc.3/angular-route.min.js"></script>
  <link rel="stylesheet" href="styles/bootstrap.min.css">
</head>
<body>
  <div ng-view></div>
  <script src="scripts/app.js"></script>
  <script src="scripts/controllers.js"></script>
  <script src="scripts/services.js"></script>
  <script src="scripts/directives.js"></script>
</body>
</html>

In this standard Angular template, we're not loading anything exotic: just the base Angular library, ngRoute, and our custom application code.

Our application code is also standard. Our scripts/app.js simply defines an Angular module along with a single route, /:

angular.module('myApp', [
  'ngRoute',
  'myApp.services',
  'myApp.directives'])
.config(function($routeProvider) {
  $routeProvider
    .when('/', {
      controller: 'MainCtrl',
      templateUrl: 'templates/main.html'
    })
    .otherwise({
      redirectTo: '/'
    });
});

Our scripts/controllers.js creates controllers from the main module:

angular.module('myApp')
.controller('MainCtrl', function($scope) {
});

And our scripts/services.js and scripts/directives.js are simple as well:

// scripts/services.js
angular.module('myApp.services', []);

// scripts/directives.js
angular.module('myApp.directives', []);

Angular structure

Introduction

The AWS ecosystem is huge and is used all over the world, in production. The sheer number of useful services that Amazon runs makes it a fantastic platform to power our applications.

Historically, the APIs have not always been the easiest to use and understand, so we hope to address some of that confusion here.

Traditionally, we'd sign requests from our applications using the client ID/secret access key model. Since we're operating in the browser, it's not a good idea to embed our client_id and client_secret where anyone can see them. (It's not much of a secret anyway if it's embedded in clear text, right?)

Luckily, the AWS team has provided an alternative method of identifying and authenticating our site to grant access to AWS resources.

The first step in creating an AWS-based Angular app is to set up the relatively complex authentication and authorization flow we'll use throughout the process.

Currently (at the time of this writing), the AWS JS library integrates cleanly with three authentication providers:

  • Facebook
  • Google Plus
  • Amazon Login

In this section, we’ll be focusing on integrating with the Google+ API to host our login, but the process is very similar for the other two authentication providers.

Installation

First things first: we'll need to include the aws-sdk library as well as the Google API client library in our index.html.

We’ll modify our index.html to include these libraries:

<!doctype html>
<html>
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.0-rc.3/angular.min.js"></script>
  <script src="http://code.angularjs.org/1.2.0-rc.3/angular-route.min.js"></script>
  <script src="https://sdk.amazonaws.com/js/aws-sdk-2.0.0-rc1.min.js"></script>
  <link rel="stylesheet" href="styles/bootstrap.min.css">
</head>
<body>
  <div ng-view></div>
  <script src="scripts/app.js"></script>
  <script src="scripts/controllers.js"></script>
  <script src="scripts/services.js"></script>
  <script src="scripts/directives.js"></script>
  <script type="text/javascript" src="https://js.stripe.com/v2/"></script>
  <script type="text/javascript">
  (function() {
    var po = document.createElement('script');
    po.type = 'text/javascript';
    po.async = true;
    po.src = 'https://apis.google.com/js/client:plusone.js?onload=onLoadCallback';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(po, s);
  })();
  </script>
</body>
</html>

Notice that we added an onload callback for the Google JavaScript library and did not use ng-app to bootstrap our application. If we let Angular bootstrap the application automatically, we'd hit a race condition where the Google API may not be loaded when the application starts.

This non-determinism would make the experience unusable, so instead we'll manually bootstrap our app in the onLoadCallback function.

To manually bootstrap the application, we'll attach the onLoadCallback function to the window object. Before we can bootstrap Angular, we'll need to ensure that the Google login client is loaded.

The Google API client, gapi, is included at run-time and is set by default to lazy-load its services. By telling gapi.client to load the oauth2 library before starting our app, we avoid any potential mishaps of the oauth2 library being unavailable.

// in scripts/app.js
window.onLoadCallback = function() {
  // When the document is ready
  angular.element(document).ready(function() {
    // Load the oauth2 library ahead of time
    gapi.client.load('oauth2', 'v2', function() {
      // Finally, bootstrap our Angular app
      angular.bootstrap(document, ['myApp']);
    });
  });
};

With the libraries available and our application ready to be bootstrapped, we can set up the authorization part of our app.

Running

As we're using services that expect our app to be served from a known URL, we'll need to serve it over HTTP rather than simply loading the HTML file in the browser.

We recommend the incredibly simple Python SimpleHTTPServer:

$ python -m SimpleHTTPServer 9000

Now we can load the url http://localhost:9000/ in our browser.
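If your system Python is Python 3, the SimpleHTTPServer module no longer exists; the equivalent invocation (an assumption about your setup — any static file server on port 9000 works just as well) is:

```shell
# Python 3 equivalent of SimpleHTTPServer; serves the current directory
python3 -m http.server 9000
```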

User authorization/authentication

First, we'll need a client_id and a client_secret from Google so that we can actually interact with the Google+ login system.

To get one, head over to the Google APIs console and create a project.

Create a Google+ project

Open the project by clicking on the name and click on the APIs & auth nav button. From here, we’ll need to enable the Google+ API. Find the APIs button and click on it. Find the Google+ API item and click the OFF to ON slider.

Enable Google+ API

Once that's set, we'll need to create and register an application and use its application ID to make authenticated calls.

Find the Registered apps option and click on it to create an app. Make sure to select the Web Application option when it asks about the type of application.

Create a registered application

Once this is set, you’ll be brought to the application details page. Select the OAuth 2.0 Client ID dropdown and take note of the application’s Client ID. We’ll use this in a few minutes.

Lastly, add the localhost origin to the WEB ORIGIN of the application. This will ensure we can develop with the API locally:

Registered app details

The Google console has changed slightly and no longer accepts localhost as a valid origin. When developing, we like to change our local computer name; for instance, my local computer name is ari.dev. In the Google web console, set the origin to http://ari.dev:9000 and load the app by that name in the browser.

Next, we'll create a Google+ login directive. This Angular directive will let us add a customized login button to our app with a single HTML element.

For more information about directives, check out our in-depth post on directives.

Our Google login has two pieces of functionality: we'll create an element to which we attach the standard Google login button, and we'll run a custom function after the button has been rendered.

The final directive will look like the following in scripts/directives.js:

angular.module('myApp.directives', [])
.directive('googleSignin', function() {
  return {
    restrict: 'A',
    template: '<span id="signinButton"></span>',
    replace: true,
    scope: {
      afterSignin: '&'
    },
    link: function(scope, ele, attrs) {
      // Set standard google class
      attrs.$set('class', 'g-signin');
      // Set the clientid
      attrs.$set('data-clientid',
        attrs.clientId + '.apps.googleusercontent.com');
      // Build scope urls
      var scopes = attrs.scopes || [
        'auth/plus.login',
        'auth/userinfo.email'
      ];
      var scopeUrls = [];
      for (var i = 0; i < scopes.length; i++) {
        scopeUrls.push('https://www.googleapis.com/' + scopes[i]);
      }
      // Create a custom callback method
      var callbackId = "_googleSigninCallback",
          directiveScope = scope;
      window[callbackId] = function() {
        var oauth = arguments[0];
        directiveScope.afterSignin({oauth: oauth});
        window[callbackId] = null;
      };
      // Set standard google signin button settings
      attrs.$set('data-callback', callbackId);
      attrs.$set('data-cookiepolicy', 'single_host_origin');
      attrs.$set('data-requestvisibleactions', 'http://schemas.google.com/AddActivity');
      attrs.$set('data-scope', scopeUrls.join(' '));
      // Finally, reload the client library to
      // force the button to be painted in the browser
      (function() {
        var po = document.createElement('script');
        po.type = 'text/javascript';
        po.async = true;
        po.src = 'https://apis.google.com/js/client:plusone.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(po, s);
      })();
    }
  };
});

Although it's long, it's fairly straightforward: we assign the Google button class g-signin, attach the client ID based on an attribute we pass in, build the scopes, and so on.

One unique part of this directive is that we create a custom callback on the window object. Effectively, this lets us satisfy the global callback the Google library expects to call while forwarding the call to the local afterSignin action instead.

We’ll then clean up the global object because we’re allergic to global state in AngularJS.

With our directive primed and ready to go, we can include it in our view like so, replacing the client-id and after-signin attribute values with our own.

Make sure to include the oauth parameter exactly as it's spelled in the after-signin attribute. It must match because of the way Angular directives pass named parameters to bound methods.
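To see why the name has to match, here's a minimal plain-JavaScript sketch of what an '&' binding does (a simplified stand-in for Angular's expression parser, not Angular itself; makeExpressionBinding is a hypothetical helper):

```javascript
// A simplified stand-in for Angular's '&' expression binding. Angular
// parses "signedIn(oauth)" and evaluates it against the outer scope,
// pulling `oauth` out of the locals object the directive passes in.
// Here we hard-code that shape for illustration.
function makeExpressionBinding(outerScope) {
  return function (locals) {
    // Equivalent of evaluating "signedIn(oauth)" with `locals` in scope
    return outerScope.signedIn(locals.oauth);
  };
}

var outerScope = {
  signedIn: function (oauth) { return 'token:' + oauth.access_token; }
};

// This is what `afterSignin: '&'` hands to the directive:
var afterSignin = makeExpressionBinding(outerScope);

// And this is why our directive calls scope.afterSignin({oauth: oauth}):
// the object keys must match the parameter names in the attribute.
var result = afterSignin({ oauth: { access_token: 'abc123' } });
```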

<h2>Signin to ngroad</h2>
<div google-signin
     client-id='CLIENT_ID'
     after-signin="signedIn(oauth)"></div>
<pre>{{ user | json }}</pre>


Finally, we'll need our button to actually cause an action, so we'll define our after-signin method, signedIn(oauth), in our controller.

This signedIn() method will kick off the authenticated part of our real application. Note that this method would be an ideal place to redirect to a new route, for instance a /dashboard route for authenticated users.

angular.module('myApp')
.controller('MainCtrl',
  function($scope) {
    $scope.signedIn = function(oauth) {
      $scope.user = oauth;
    };
});

UserService

Before we dive a bit deeper into the AWS-side of things, let’s create ourselves a UserService that is responsible for holding on to our new user. This UserService will handle the bulk of the work for interacting with the AWS backend as well as keep a copy of the current user.

Although we’re not quite ready to attach a backend, we can start building it out to handle holding on to a persistent copy of the user instance.

In our scripts/services.js, we’ll create the beginnings of our UserService:

angular.module('myApp.services', [])
.factory('UserService', function($q, $http) {
  var service = {
    _user: null,
    setCurrentUser: function(u) {
      if (u && !u.error) {
        service._user = u;
        return service.currentUser();
      } else {
        var d = $q.defer();
        // u may be null here, so guard the error lookup
        d.reject(u && u.error);
        return d.promise;
      }
    },
    currentUser: function() {
      var d = $q.defer();
      d.resolve(service._user);
      return d.promise;
    }
  };
  return service;
});

Although this setup is a bit contrived for the time being, we’ll want the functionality to set the currentUser as a permanent fixture in the service.

Remember, services are singleton objects that live for the duration of the application lifecycle.
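That singleton behavior can be sketched in a few lines of plain JavaScript (a toy stand-in for Angular's injector, not Angular itself; inject is a hypothetical helper):

```javascript
// Toy injector: each factory runs once, then the cached instance is
// returned on every later injection. This is why state stored on
// UserService survives across controllers and route changes.
var instances = {};
function inject(name, factory) {
  if (!(name in instances)) instances[name] = factory();
  return instances[name];
}

var userServiceFactory = function () { return { _user: null }; };

var first = inject('UserService', userServiceFactory);
first._user = { email: 'a@b.com' };

// A second injection (say, from another controller) sees the same object
var second = inject('UserService', userServiceFactory);
```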

Now, instead of simply setting our user inside the signedIn() function, we can hand the user to the UserService:

angular.module('myApp')
.controller('MainCtrl',
  // UserService is injected so we can store the user
  function($scope, UserService) {
    $scope.signedIn = function(oauth) {
      UserService.setCurrentUser(oauth)
        .then(function(user) {
          $scope.user = user;
        });
    };
});

For our application to work, we need to hold on to actual user emails so we can interact with our users in a better way and keep some persistent, unique data per user.

We’ll use the gapi.client.oauth2.userinfo.get() method to fetch the user’s email address rather than holding on to the user’s access_token (and other various access details).

In our UserService, we’ll update our currentUser() method to include this functionality:

// ...
},
currentUser: function() {
  var d = $q.defer();
  if (service._user) {
    d.resolve(service._user);
  } else {
    gapi.client.oauth2.userinfo.get()
      .execute(function(e) {
        service._user = e;
        // Resolve once the user info has arrived
        d.resolve(service._user);
      });
  }
  return d.promise;
}
// ...

All aboard AWS

Now, as we said when we first started this journey, we’ll need to set up authorization with the AWS services.

If you do not have an AWS account, head over to aws.amazon.com and grab an account. It’s free and quick.

Now, first things first: we'll need to create an IAM role. IAM, AWS's Identity and Access Management service, is one of the reasons the AWS services are so powerful. We can create fine-grained access controls over our systems and data using IAM.

Unfortunately, this flexibility and power also make IAM a bit more complex, so we'll walk through creating the role here as clearly as we can.

Let’s create the IAM role. Head to the IAM console and click on the Roles navigation link.

Click the Create New Role button and give our new role a name. We’ll call ours the google-web-role.

Create a new role

Next, we’ll need to configure the IAM role to be a Web Identity Provider Access role type. This is how we’ll be able to manage our new role’s access to our AWS services.

Set the role type

Now, remember the CLIENT ID that we created with Google above? In the next screen, select Google from the dropdown and paste the CLIENT ID into the Audience box.

This will join our IAM role and our Google app together so that our application can call out to AWS services with an authenticated Google user.

Google auth

Click through the Verify Trust screen (the next one), which shows the raw configuration for AWS services. Next, we'll create the policy for our application.

The Policy Generator is the easiest method of getting up and running quickly to build policies. This is where we’ll set what actions our users can and cannot take.

In this step, we'll spell out the very specific actions our web users may take. We're going to allow the following actions for each service:

S3

On the specific bucket (ng-newsletter-example, in our example app), we’re going to allow our users to take the following actions:

  • GetObject
  • ListBucket
  • PutObject

The Amazon Resource Name (ARN) for our S3 bucket is:

arn:aws:s3:::ng-newsletter-example/*

DynamoDB

For two specific table resources, we’re going to allow the following actions:

  • GetItem
  • PutItem
  • Query

The Amazon Resource Names (ARNs) for our DynamoDB tables are the following:

[
  "arn:aws:dynamodb:us-east-1:<ACCOUNT_ID>:table/Users",
  "arn:aws:dynamodb:us-east-1:<ACCOUNT_ID>:table/UsersItems"
]

Your ACCOUNT_ID can be found on your Account dashboard. Click the My Account button at the top of the page; your ACCOUNT_ID is the number labeled 'Account Number:'.

The final version of our policy can be found here.
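Pieced together from the statements above, the generated policy should look roughly like this sketch (the bucket name and <ACCOUNT_ID> are the placeholders from our example; your generated version may differ in ordering and statement IDs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
      "Resource": "arn:aws:s3:::ng-newsletter-example/*"
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:<ACCOUNT_ID>:table/Users",
        "arn:aws:dynamodb:us-east-1:<ACCOUNT_ID>:table/UsersItems"
      ]
    }
  ]
}
```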

Adding the IAM policy

For more information on the confusing ARN format, check out the Amazon documentation on it here.

One final piece of information we'll need to hold on to is the Role ARN. We can find this Role ARN on the summary tab of the role in our IAM console.

Take note of this string as we’ll set it in a moment.

Role ARN

Now that we're finally done creating our IAM role, we can move on to integrating it into our Angular app.

AWSService

We'll move the root of our AWS integration into its own service, which we'll call AWSService.

Since we'll need the ability to custom-configure this service at config time, we'll create it as a provider.

Remember, the only service-type that can be injected into the .config() function is the .provider() type.

First, we’ll create the stub of our provider in scripts/services.js:

// ...
.provider('AWSService', function() {
  var self = this;
  self.arn = null;
  self.setArn = function(arn) {
    if (arn) self.arn = arn;
  };
  self.$get = function($q) {
    return {};
  };
});

As we can already start to see, we'll need to set the Role ARN for this service so that the proper role is attached when we call out to AWS services.

Setting up our AWSService as a provider like we do above enables us to set the following in our scripts/app.js file:

angular.module('myApp',
  ['ngRoute', 'myApp.services', 'myApp.directives']
)
.config(function(AWSServiceProvider) {
  AWSServiceProvider
    .setArn(
      'arn:aws:iam::<ACCOUNT_ID>:role/google-web-role');
});

Now we can carry on with the AWSService without worrying about overriding our Role ARN, and the service becomes incredibly easy to share among our applications instead of recreating it every time.

Our AWSService doesn't really do anything yet. The last piece we need is to grant access to the actual users who log in.

This final step is where we’ll need to tell the AWS library that we have an authenticated user that can operate as our IAM role.

We'll expose the credentials as a promise that will eventually be resolved, so the different parts of our application can simply call .then() without bothering to check whether the credentials have loaded.

Let’s modify our $get() method in our service adding a method that we’ll call setToken() to create a new set of WebIdentityCredentials:

// ...
self.$get = function($q) {
  var credentialsDefer = $q.defer(),
      credentialsPromise = credentialsDefer.promise;
  return {
    credentials: function() {
      return credentialsPromise;
    },
    setToken: function(token, providerId) {
      var config = {
        RoleArn: self.arn,
        WebIdentityToken: token,
        RoleSessionName: 'web-id'
      };
      if (providerId) {
        config['ProviderId'] = providerId;
      }
      self.config = config;
      AWS.config.credentials =
        new AWS.WebIdentityCredentials(config);
      credentialsDefer
        .resolve(AWS.config.credentials);
    }
  };
};
// ...

Now, when we get our OAuth response back from our Google login, we'll pass its id_token to this function, which takes care of the AWS config setup.

Let’s modify the UserService service such that we call the setToken() method:

// ...
// AWSService must be injected for the setToken() call below
.factory('UserService', function($q, $http, AWSService) {
  var service = {
    _user: null,
    setCurrentUser: function(u) {
      if (u && !u.error) {
        AWSService.setToken(u.id_token);
        return service.currentUser();
      } else {
        var d = $q.defer();
        d.reject(u && u.error);
        return d.promise;
      }
    },
    // ...

Starting on dynamo

In our application, we’ll want to associate any images that one user uploads to that unique user. To create this association, we’ll create a dynamo table that stores our users as well as another that stores the association between the user and the user’s uploaded files.

To start interacting with dynamo, we’ll first need to instantiate a dynamo object. We’ll do this inside of our AWSService service object, like so:

// ...
setToken: function(token, providerId) {
  // ...
},
dynamo: function(params) {
  var d = $q.defer();
  credentialsPromise.then(function() {
    var table = new AWS.DynamoDB(params);
    d.resolve(table);
  });
  return d.promise;
},
// ...

As we discussed earlier, by using promises inside our service objects, we only need the promise's .then() method to ensure our credentials are set before we start using them.

You might ask why we accept params in our dynamo function. Sometimes we'll want to talk to DynamoDB with different configurations and setups, which could force us to recreate objects we've already used once on the page.

Rather than duplicating these AWS objects, we'll cache them using the built-in Angular $cacheFactory service.

$cacheFactory

The $cacheFactory service lets us create an object when we need it, or reuse one we've already created.
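The cache-or-create pattern we're about to use can be reduced to a few lines of plain JavaScript (a sketch with a Map standing in for the $cacheFactory cache object; getOrCreate is a hypothetical helper):

```javascript
// The params object, serialized, becomes the cache key; the factory
// runs only when no object exists yet for that key.
var cache = new Map();
function getOrCreate(params, create) {
  var key = JSON.stringify(params);
  if (!cache.has(key)) cache.set(key, create(params));
  return cache.get(key);
}

var tableA = getOrCreate({ TableName: 'Users' },
  function (p) { return { table: p.TableName }; });
var tableB = getOrCreate({ TableName: 'Users' },
  function (p) { return { table: p.TableName }; });
// Identical params, so the second call reuses the cached object
```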

To start caching, we’ll create a dynamoCache object where we’ll store our cached dynamo objects:

// ...
self.$get = function($q, $cacheFactory) {
  var dynamoCache = $cacheFactory('dynamo'),
      credentialsDefer = $q.defer(),
      credentialsPromise = credentialsDefer.promise;
  return {
    // ...

Now, back in our dynamo method, we can pull from the cache when the object exists, or create and cache it when necessary:

// ...
dynamo: function(params) {
  var d = $q.defer();
  credentialsPromise.then(function() {
    var table =
      dynamoCache.get(JSON.stringify(params));
    if (!table) {
      table = new AWS.DynamoDB(params);
      dynamoCache.put(JSON.stringify(params), table);
    }
    d.resolve(table);
  });
  return d.promise;
},
// ...

Saving our currentUser

When a user logs in and we fetch the user's email, that's a good point to add the user to our users database.

To create a dynamo object, we'll use the promise method .then() again, this time outside the service. We'll create an object that lets us interact with the Users table we'll set up in the DynamoDB console.

We'll create these dynamo tables manually the first time, because we don't want to grant our web users (which might include us) the ability to create dynamo tables.

To create a dynamo table, head to the dynamo console and find the Create Table button.

Create a table called Users with a primary key type of Hash. The Hash Attribute Name will be the primary key that we’ll use to get and put objects on the table. For this demo, we’ll use the string: User email.

Create the Users dynamo table

Click through the next two screens and set up a basic alarm by entering your email. Although this step isn't strictly necessary, it's easy to forget our tables are up, and without a reminder we might leave them running forever.

Once we've clicked through the final review screen and clicked Create, we'll have a brand-new Dynamo table where we'll store our users.

While we are at the console, we’ll create the join table. This is the table that will join the User and the items they upload.

Find the Create Table button again and create a table called UsersItems with a primary key type of Hash and Range. For this table, The Hash Attribute Name will also be User email and the Range Attribute Name will be ItemId.

This will allow us to query for users who have created items, keyed by the user's email.

The rest of the options that are available on the next screens are optional and we can click through the rest.

At this point, we have two dynamo tables available.

Back in our UserService, we'll first query the table to see if the user is already saved in our database; otherwise we'll create an entry in our dynamo database.

var service = {
  _user: null,
  UsersTable: "Users",
  UserItemsTable: "UsersItems",
  // ...
  currentUser: function() {
    var d = $q.defer();
    if (service._user) {
      d.resolve(service._user);
    } else {
      // After we've loaded the credentials
      AWSService.credentials().then(function() {
        gapi.client.oauth2.userinfo.get()
          .execute(function(e) {
            var email = e.email;
            // Get the dynamo instance for the
            // UsersTable
            AWSService.dynamo({
              params: {TableName: service.UsersTable}
            })
            .then(function(table) {
              // Find the user by email
              table.getItem({
                Key: {'User email': {S: email}}
              }, function(err, data) {
                if (Object.keys(data).length == 0) {
                  // User didn't previously exist,
                  // so create an entry
                  var itemParams = {
                    Item: {
                      'User email': {S: email},
                      data: { S: JSON.stringify(e) }
                    }
                  };
                  table.putItem(itemParams,
                    function(err, data) {
                      service._user = e;
                      d.resolve(e);
                  });
                } else {
                  // The user already exists
                  service._user =
                    JSON.parse(data.Item.data.S);
                  d.resolve(service._user);
                }
              });
            });
          });
      });
    }
    return d.promise;
  },
  // ...

Although it looks like a lot of code, this simply does a find-or-create by user email on our DynamoDB table.
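That find-or-create flow, stripped of the Angular and AWS plumbing, looks like this (a sketch using an in-memory stand-in with the same getItem/putItem callback signatures; findOrCreateUser and fakeTable are illustrative names, not part of the real service):

```javascript
// Find-or-create by email, mirroring the currentUser() logic above.
function findOrCreateUser(table, email, profile, done) {
  table.getItem({ Key: { 'User email': { S: email } } }, function (err, data) {
    if (Object.keys(data).length === 0) {
      // No row yet: store the serialized profile under the email key
      var itemParams = {
        Item: {
          'User email': { S: email },
          data: { S: JSON.stringify(profile) }
        }
      };
      table.putItem(itemParams, function () { done(profile); });
    } else {
      // Row exists: deserialize and hand back the stored profile
      done(JSON.parse(data.Item.data.S));
    }
  });
}

// In-memory stand-in with DynamoDB-shaped callbacks:
var rows = {};
var fakeTable = {
  getItem: function (p, cb) { cb(null, rows[p.Key['User email'].S] || {}); },
  putItem: function (p, cb) {
    rows[p.Item['User email'].S] = { Item: p.Item };
    cb(null, {});
  }
};

var created, found;
findOrCreateUser(fakeTable, 'a@b.com', { email: 'a@b.com', name: 'first' },
  function (u) { created = u; });
findOrCreateUser(fakeTable, 'a@b.com', { email: 'a@b.com', name: 'second' },
  function (u) { found = u; });
// The second call finds the stored row rather than overwriting it
```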

At this point, we can finally get back to the view and see what's happening there.

In our templates/main.html, we’ll add a container that simply shows the Login form if there is no user and shows the user details if there is a user.

We’ll do this with simple ng-show directives and our new google-signin directive.

<div class="container">
  <h1>Home</h1>
  <div ng-show="!user" class="row">
    <div class="col-md-12">
      <h2>Signup or login to ngroad</h2>
      <div google-signin
           client-id='395118764200'
           after-signin="signedIn(oauth)"></div>
    </div>
  </div>
  <div ng-show="user">
    <pre>{{ user | json }}</pre>
  </div>
</div>

With our view set up, we can now work with logged in users inside the second <div> (in production, it’s a good idea to make it a separate route).

Uploading to s3

Now that we have our logged-in user stored in dynamo, it's time to handle file uploads, storing the files directly on S3.

First and foremost, a shallow dive into CORS. CORS, or Cross-Origin Resource Sharing, is a security feature supported by modern browsers that allows us to make requests to foreign domains using a standard protocol.

Luckily, the AWS team has made supporting CORS incredibly simple. If we’re hosting our site on s3, then we don’t even need to set up CORS (other than for development purposes).

To enable CORS on a bucket, head to the s3 console and find the bucket that we’re going to use for file uploads. For this demo, we’re using the ng-newsletter-example bucket.

Once the bucket has been located, click on it, load the Properties tab, and pull open the Permissions option. Click the Add CORS configuration button and pick the standard CORS configuration.

Enable CORS on an S3 bucket

We'll create a simple file-upload directive that uses the HTML5 File API, so the upload kicks off as soon as the user selects a file.

To handle file selection, the directive binds to the input's change event and calls a method once a file has been chosen.

The directive is simple:

```javascript
// ...
.directive('fileUpload', function() {
  return {
    restrict: 'A',
    scope: { fileUpload: '&' },
    template: '<input type="file" id="file" />',
    replace: true,
    link: function(scope, ele, attrs) {
      ele.bind('change', function() {
        var files = ele[0].files;
        if (files) scope.fileUpload({files: files});
      });
    }
  };
})
```

This directive can be used in our view like so:

```html
<!-- ... -->
<div class="row">
  <div class="col-md-12">
    <div file-upload="onFile(files)"></div>
  </div>
</div>
```

Now, when a file has been selected, the onFile(files) method will be called on the current scope.

Although we’re creating our own file directive here, we recommend checking out the ngUpload library for handling file uploads.

Inside the onFile(files) method, we'll handle the upload to S3 and save a record to our DynamoDB table. Instead of placing this functionality in the controller, we'll be good Angular citizens and put it in our UserService.

First, we'll need the ability to get an S3 JavaScript object, just like we made the dynamo object available.

```javascript
// ...
var dynamoCache = $cacheFactory('dynamo'),
    s3Cache = $cacheFactory('s3Cache');
// ...
return {
  // ...
  s3: function(params) {
    var d = $q.defer();
    credentialsPromise.then(function() {
      var s3Obj = s3Cache.get(JSON.stringify(params));
      if (!s3Obj) {
        s3Obj = new AWS.S3(params);
        s3Cache.put(JSON.stringify(params), s3Obj);
      }
      d.resolve(s3Obj);
    });
    return d.promise;
  },
  // ...
```

This method works in exactly the same way as our dynamo object creation, giving us direct access to the S3 instance object, as we'll see shortly.
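The parameter-keyed caching pattern used by both methods is easy to see in isolation. Here is a plain-JavaScript sketch, with a hypothetical FakeClient standing in for AWS.S3:

```javascript
// Cache of constructed clients, keyed by their serialized params.
var cache = {};
var constructed = 0; // counts how many clients were actually built

// Hypothetical stand-in for AWS.S3.
function FakeClient(params) {
  this.params = params;
  constructed++;
}

function getClient(params) {
  var key = JSON.stringify(params);
  if (!cache[key]) {
    // Only build a client the first time we see these params
    cache[key] = new FakeClient(params);
  }
  return cache[key];
}
```

Repeated calls with the same params return the same object; a new params object produces a second client.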

Handling file uploads

To handle file uploads, we’ll create a method that we’ll call uploadItemForSale() in our UserService. Planning our functionality, we’ll want to:

  • Upload the file to S3
  • Get a signedUrl for the file
  • Save this information to our database

We'll use the current user throughout this process, so we'll start by making sure we have the user and getting a handle on our S3 bucket:

```javascript
// in scripts/services.js
// ...
},
Bucket: 'ng-newsletter-example',
uploadItemForSale: function(items) {
  var d = $q.defer();
  service.currentUser().then(function(user) {
    // Handle the upload
    AWSService.s3({
      params: {
        Bucket: service.Bucket
      }
    }).then(function(s3) {
      // We have a handle of our s3 bucket
      // in the s3 object
    });
  });
  return d.promise;
},
// ...
```

With a handle on the S3 bucket, we can upload a file. There are three required parameters when uploading to S3:

  • Key - The key of the file object
  • Body - The file blob itself
  • ContentType - The type of file

Luckily for us, all this information is available on the file object when we get it from the browser.

```javascript
// ...
// Handle the upload
AWSService.s3({
  params: {
    Bucket: service.Bucket
  }
}).then(function(s3) {
  // We have a handle of our s3 bucket
  // in the s3 object
  var file = items[0]; // Get the first file
  var params = {
    Key: file.name,
    Body: file,
    ContentType: file.type
  };
  s3.putObject(params, function(err, data) {
    // The file has been uploaded,
    // or an error occurred during the upload
  });
});
// ...
```

By default, S3 stores uploads privately; files aren't available to the public without some extra work. This is a definite feature: anything we upload to S3 is protected, and it forces us to make conscious choices about which files are public and which are not.

With that in mind, we'll create a temporary URL that expires after a given amount of time. In our ngroad marketplace, this puts a time limit on each of the items available for sale.
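Note that the Expires parameter is specified in seconds, so an hour can be written as 3600 or, equivalently, 900 * 4. A tiny helper (hypothetical, not part of the SDK) makes that arithmetic explicit:

```javascript
// Convert minutes to the seconds value that Expires expects.
// Hypothetical helper, not part of the AWS SDK.
function minutes(n) {
  return n * 60;
}

var oneHour = minutes(60); // 3600 seconds, the same as 900 * 4
```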

To create a temporary URL, we'll fetch a signed URL and store it in the join table for the user's items:

```javascript
// ...
s3.putObject(params, function(err, data) {
  if (!err) {
    var params = {
      Bucket: service.Bucket,
      Key: file.name,
      Expires: 900 * 4 // 1 hour
    };
    s3.getSignedUrl('getObject', params,
      function(err, url) {
        // Now we have a url
      });
  }
});
// ...
```

Finally, we can save the user's record, along with the file they uploaded, in our join table:

```javascript
// ...
s3.getSignedUrl('getObject', params,
  function(err, url) {
    // Now we have a url
    AWSService.dynamo({
      params: {TableName: service.UserItemsTable}
    }).then(function(table) {
      var itemParams = {
        Item: {
          'ItemId': {S: file.name},
          'User email': {S: user.email},
          data: {
            S: JSON.stringify({
              itemId: file.name,
              itemSize: file.size,
              itemUrl: url
            })
          }
        }
      };
      table.putItem(itemParams, function(err, data) {
        d.resolve(data);
      });
    });
  });
// ...
```

This method, all together, is available here.

We can use this new method inside our controller's onFile() method, which looks similar to:

```javascript
$scope.onFile = function(files) {
  UserService.uploadItemForSale(files)
    .then(function(data) {
      // Refresh the current items for sale
    });
}
```

Querying DynamoDB

Ideally, we’ll want to be able to list all the products a certain user has available for purchase. In order to set up a listing of the available items, we will use the query api.

The DynamoDB query API is a tad esoteric and can be confusing at first glance.

The query documentation is available at http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html.

Basically, we match key attributes against a comparison operator, such as EQ (equal), LT (less than), or GT (greater than), among several others. Our join table's key is the User email key, so we'll match this key against the current user's email as the query key.
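The shape of a KeyConditions entry is easy to get wrong, so it can help to build it with a small helper. This one (hypothetical, not part of the SDK) constructs an exact-match condition for a string key:

```javascript
// Build a KeyConditions object for an EQ comparison on one string key.
// Hypothetical helper; the object shape matches the DynamoDB Query API.
function eqCondition(keyName, value) {
  var conditions = {};
  conditions[keyName] = {
    "ComparisonOperator": "EQ",
    "AttributeValueList": [
      {S: value} // S marks the attribute value as a string
    ]
  };
  return conditions;
}
```

Calling eqCondition('User email', user.email) produces the same structure we pass to table.query() by hand.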

As we did with our other user-related APIs, we'll create a method inside our UserService to handle querying the database:

```javascript
// ...
itemsForSale: function() {
  var d = $q.defer();
  service.currentUser().then(function(user) {
    AWSService.dynamo({
      params: {TableName: service.UserItemsTable}
    }).then(function(table) {
      table.query({
        TableName: service.UserItemsTable,
        KeyConditions: {
          "User email": {
            "ComparisonOperator": "EQ",
            "AttributeValueList": [
              {S: user.email}
            ]
          }
        }
      }, function(err, data) {
        var items = [];
        if (data) {
          angular.forEach(data.Items, function(item) {
            items.push(JSON.parse(item.data.S));
          });
          d.resolve(items);
        } else {
          d.reject(err);
        }
      });
    });
  });
  return d.promise;
},
// ...
```

In the above query, the KeyConditions object is required, and the "User email" condition must target the table's key.

Showing the listing in HTML

To show our user’s images in HTML, we’ll simply assign the result of our new itemsForSale() method to a property of the controller’s scope:

```javascript
var getItemsForSale = function() {
  UserService.itemsForSale()
    .then(function(images) {
      $scope.images = images;
    });
}

getItemsForSale(); // Load the user's list initially
```

Now we can iterate over the list of items easily using the ng-repeat directive:

```html
<!-- ... -->
<div ng-show="images">
  <div class="col-sm-6 col-md-4"
       ng-repeat="image in images">
    <div class="thumbnail">
      <img ng-click="sellImage(image)"
           data-ng-src="{{image.itemUrl}}" />
    </div>
  </div>
</div>
```

Image listing

Selling our work

The final component of our AWS-powered demo app is the ability to create sales from our Single Page App.

In order to actually take money from customers, we'll need a thin back-end component that converts Stripe tokens into charges on Stripe. We cover this in our upcoming book, available for pre-release at ng-book.com.

To start handling payments, we'll create a StripeService that creates charges for us. Since we want to be able to configure Stripe in our module's .config() method, we'll build it as a .provider().

The service itself is incredibly simple, as it leverages the Stripe.js library to do the heavy lifting.

```javascript
// ...
.provider('StripeService', function() {
  var self = this;

  self.setPublishableKey = function(key) {
    Stripe.setPublishableKey(key);
  };

  self.$get = function($q) {
    return {
      createCharge: function(obj) {
        var d = $q.defer();
        if (!obj.hasOwnProperty('number') ||
            !obj.hasOwnProperty('cvc') ||
            !obj.hasOwnProperty('exp_month') ||
            !obj.hasOwnProperty('exp_year')) {
          d.reject("Bad input", obj);
        } else {
          Stripe.card.createToken(obj,
            function(status, resp) {
              if (status == 200) {
                d.resolve(resp);
              } else {
                d.reject(status);
              }
            });
        }
        return d.promise;
      }
    };
  };
});
```

If you do not have a Stripe account, get one at stripe.com. Stripe is an incredibly developer-friendly payment gateway, which makes it ideal for building our ngroad marketplace.

Once you have an account, find your Account Settings page and locate the API Keys tab. Find the publishable key (either the test key, which will not actually make charges, or the live key) and take note of it.

In our scripts/app.js file, add the following lines, replacing 'pk_test_YOUR_KEY' with your own publishable key.

```javascript
.config(function(StripeServiceProvider) {
  StripeServiceProvider
    .setPublishableKey('pk_test_YOUR_KEY');
})
```

Using Stripe

When a user clicks on an image they like, we’ll open a form in the browser that takes credit card information. We’ll set the form to submit to an action on our controller called submitPayment().

Notice above, where we have the image thumbnail, we included an ng-click action that calls sellImage() with the image when the thumbnail is clicked.

Implemented in MainCtrl, the sellImage() function looks like:

```javascript
// ...
$scope.sellImage = function(image) {
  $scope.showCC = true;
  $scope.currentItem = image;
}
// ...
```

Now, when the image is clicked, the showCC property will be true and we can show the credit card form. We’ve included an incredibly simple one here:

```html
<div ng-show="showCC">
  <form ng-submit="submitPayment()">
    <span ng-bind="errors"></span>
    <span>Card Number</span>
    <input type="text"
           ng-minlength="16"
           ng-maxlength="20"
           size="20"
           data-stripe="number"
           ng-model="charge.number" />
    <span>CVC</span>
    <input type="text"
           ng-minlength="3"
           ng-maxlength="4"
           data-stripe="cvc"
           ng-model="charge.cvc" />
    <span>Expiration (MM/YYYY)</span>
    <input type="text"
           ng-minlength="2"
           ng-maxlength="2"
           size="2"
           data-stripe="exp_month"
           ng-model="charge.exp_month" />
    <span> / </span>
    <input type="text"
           ng-minlength="4"
           ng-maxlength="4"
           size="4"
           data-stripe="exp_year"
           ng-model="charge.exp_year" />
    <input type="hidden"
           name="email"
           value="{{ user.email }}" />
    <button type="submit">Submit Payment</button>
  </form>
</div>
```

We’re binding the form almost entirely to the charge object on the scope, which we will use when we make the charge.
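Before handing the charge object to Stripe, we can check on the client that all the fields the service expects are present. This sketch mirrors the hasOwnProperty checks in createCharge (the helper name is ours, not part of Stripe.js):

```javascript
// Return true only when every field createCharge requires is present.
// Hypothetical helper mirroring the service's validation.
function isCompleteCharge(charge) {
  var required = ['number', 'cvc', 'exp_month', 'exp_year'];
  return required.every(function(field) {
    return charge.hasOwnProperty(field);
  });
}
```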

The form itself submits to the function submitPayment() on the controller’s scope. The submitPayment() function looks like:

```javascript
// ...
$scope.submitPayment = function() {
  UserService
    .createPayment($scope.currentItem, $scope.charge)
    .then(function(data) {
      $scope.showCC = false;
    });
}
// ...
```

The last thing that we’ll have to do to be able to take charges is implement the createPayment() method on the UserService.

Now, since we're taking payment on the client side, we technically can't process payments ourselves; we can only obtain the stripeToken, and a background process can handle turning Stripe tokens into actual payments.

Inside of our createPayment() function, we’ll call our StripeService to generate the stripeToken. Then, we’ll add the payment to an Amazon SQS queue so that our background process can make the charge.

First, we’ll use the AWSService to access our SQS queues.

Unlike our other services, SQS requires a bit more integration work, because we need a queue URL to interact with a queue. In our AWSService object, we'll cache the URL we're working with and create a new SQS object each time using that URL. The idea behind the workflow is exactly the same, however.
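Stripped of the Angular and AWS specifics, the caching workflow reads like this; fakeCreateQueue is a hypothetical stand-in for sqs.createQueue that invokes its callback synchronously:

```javascript
// Cache of queue URLs, keyed by the serialized params.
var urlCache = {};
var createCalls = 0; // counts how many queues were actually created

// Hypothetical stub for sqs.createQueue.
function fakeCreateQueue(params, cb) {
  createCalls++;
  cb(null, { QueueUrl: 'https://sqs.example.com/' + params.QueueName });
}

function getQueue(params, cb) {
  var key = JSON.stringify(params);
  var url = urlCache[key];
  if (url) return cb(null, url); // cache hit: reuse the URL
  fakeCreateQueue(params, function(err, data) {
    if (err) return cb(err);
    urlCache[key] = data.QueueUrl;  // cache the URL for next time
    cb(null, data.QueueUrl);
  });
}
```

The first call creates the queue and caches its URL; later calls with the same params reuse the cached URL without creating anything.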

```javascript
// ...
self.$get = function($q, $cacheFactory) {
  var dynamoCache = $cacheFactory('dynamo'),
      s3Cache = $cacheFactory('s3Cache'),
      sqsCache = $cacheFactory('sqs');
  // ...
  sqs: function(params) {
    var d = $q.defer();
    credentialsPromise.then(function() {
      var url = sqsCache.get(JSON.stringify(params)),
          queued = $q.defer();
      if (!url) {
        var sqs = new AWS.SQS();
        sqs.createQueue(params,
          function(err, data) {
            if (data) {
              url = data.QueueUrl;
              sqsCache.put(JSON.stringify(params), url);
              queued.resolve(url);
            } else {
              queued.reject(err);
            }
          });
      } else {
        queued.resolve(url);
      }
      queued.promise.then(function(url) {
        var queue =
          new AWS.SQS({params: {QueueUrl: url}});
        d.resolve(queue);
      });
    });
    return d.promise;
  }
  // ...
```

Now we can use SQS inside our createPayment() function. One caveat of the SQS service is that it can only send simple messages, such as strings and numbers. It cannot send objects, so we'll need to call JSON.stringify on any object we want to pass through the queue.
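A quick round-trip shows the pattern: serialize on the way into the queue, parse on the way out (the token id here is made up for illustration):

```javascript
// What we'd put in MessageBody when sending to the queue.
var body = JSON.stringify({
  item: { itemId: 'photo.png' },
  stripeToken: 'tok_123' // hypothetical token id
});

// ...later, in the background worker that reads the queue,
// the body comes back as a string and must be parsed.
var message = JSON.parse(body);
```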

```javascript
// ...
ChargeTable: "UserCharges",
// ...
createPayment: function(item, charge) {
  var d = $q.defer();
  StripeService.createCharge(charge)
    .then(function(data) {
      var stripeToken = data.id;
      AWSService.sqs(
        {QueueName: service.ChargeTable}
      ).then(function(queue) {
        queue.sendMessage({
          MessageBody: JSON.stringify({
            item: item,
            stripeToken: stripeToken
          })
        }, function(err, data) {
          d.resolve(data);
        });
      });
    }, function(err) {
      d.reject(err);
    });
  return d.promise;
}
```

When we submit the form…

Payment handling

Our SQS queue grows and we have a payment just waiting to be completed.

SQS queue

Conclusion

The entire source for this article is available at http://d.pr/aL9q.

Amazon's AWS presents us with powerful services that can completely change the way we build and deploy our Angular apps.

For more in-depth information about Angular, including deeper articles on back-end infrastructure and Angular at every level, check out our upcoming book at ng-book.com.
