Posts Tagged 'n-tier'

Implementing a Data Storage Tier in the Cloud Using Amazon DynamoDB

One of the most important aspects of application design is the storage tier. For many developers a relational database has traditionally been the engine of choice. While that is often a good solution, the cloud has certainly brought other options to light. NoSQL is increasingly gaining popularity as an alternative that offers internet-class scalability when the benefits and overhead associated with a relational database are neither needed nor desired.

Of course, good system design would incorporate a logical model of the data that is abstracted from any particular physical storage technology. That logical model could be implemented using a variety of products, which would mitigate the risk of vendor lock-in. That is not, however, a subject I am going to cover in this post. Here I am simply going to show how a simple data model can be implemented using Amazon DynamoDB.

It is very easy to get started using DynamoDB. After you subscribe to the service you can use the AWS Console to provision resources. At some point, though, you will have to write some code. Amazon provides an SDK for Java, .NET and PHP. If you use the Eclipse or Visual Studio IDE, there is also a plug-in toolkit that lets you easily create projects that use DynamoDB and interactively create tables right from within your development environment. You can also create tables in your code.

Figure 1. Installing the AWS toolkit for Visual Studio allows access to AWS Services from within the IDE

NoSQL databases such as DynamoDB are schema-less. That means tables are defined simply in terms of a hash key and an optional range key. Tables contain items, and each item has one or more attributes, which are simple name-value pairs. Items in a table do not all have to have the same attributes. Tables in DynamoDB are also provisioned for a “throughput capacity” (read and write) and may be configured with alarms using CloudWatch. Relationship semantics between tables are handled in code. While this may seem a little strange to someone who is used to working only with relational databases, it does allow for a lot of flexibility.
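To make the schema-less idea concrete, here is a minimal Python sketch (the table, keys and attribute names are my own hypothetical examples, not from the toolkit): two items live in the same table, keyed by a hash key plus a range key, yet carry different sets of attributes.

```python
# Two items in a hypothetical "Rounds" table, keyed by a hash key (RoundId)
# plus a range key (Date). The items do not share the same attributes.
rounds = {
    ("round-001", "2012-04-01"): {"Course": "Pebble Beach", "Players": 4},
    ("round-002", "2012-04-08"): {"Course": "St Andrews"},  # no Players attribute
}

# Look up one item by its full key, just as DynamoDB would.
item = rounds[("round-001", "2012-04-01")]
```

In a relational table both rows would need the same columns; here each item simply carries whatever attributes it happens to have.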

Figure 2. Creating a table in DynamoDB

For example, the application I am building will be used to track golf betting games that might occur in a regular weekend choose-up. In any given Sunday’s foursome there may be one or more of these games in play. Each game has a slightly different set of rules, point count, stake and payout logic. The goal of this application is to minimize confusion when it comes time to settle up on the 19th hole!

Now, this application could become pretty complex because of all of the possible sub-games that could occur during a round. Also, golf betting is usually done based on net strokes per hole, which may be taken as they lay or played off the low handicap. We will leave most of these complications aside for now and start by building something simple that can be refined later. This, to me anyway, is one of the beauties of a NoSQL approach to storage.

So, to get started, let’s say at a minimum I will need to store data for each Round, the Players involved and the Games in play. After downloading the AWS SDK and adapting some sample code, I will create three tables in code as follows:

Figure 3. Creating Rounds table in code. Players and Games are similar but do not require a Range Key.
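To give a feel for what Figure 3 does, here is a hedged sketch in Python of the request body a CreateTable call sends. The three table names come from the post, but the key attribute names (RoundId, Date, PlayerId), attribute types and capacity figures are my own illustrative assumptions, and the field names follow the DynamoDB API of the time (HashKeyElement/RangeKeyElement), so the SDK you use may express this differently.

```python
import json

def create_table_request(name, hash_key, range_key=None, read_cap=5, write_cap=5):
    """Build an illustrative JSON body for DynamoDB's CreateTable action.
    All names and capacity numbers here are hypothetical examples."""
    key_schema = {"HashKeyElement": {"AttributeName": hash_key, "AttributeType": "S"}}
    if range_key:
        key_schema["RangeKeyElement"] = {"AttributeName": range_key, "AttributeType": "S"}
    return json.dumps({
        "TableName": name,
        "KeySchema": key_schema,
        # Throughput capacity is provisioned per table, as noted above.
        "ProvisionedThroughput": {"ReadCapacityUnits": read_cap,
                                  "WriteCapacityUnits": write_cap},
    })

# Rounds gets a range key; Players and Games need only a hash key.
rounds_req = create_table_request("Rounds", "RoundId", range_key="Date")
players_req = create_table_request("Players", "PlayerId")
games_req = create_table_request("Games", "GameId")
```

In practice the SDK builds and signs this request for you; the point is simply that a table definition is nothing more than a name, a key schema and a provisioned throughput.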

Next week we will populate these tables and start using them for storage in the application.

To learn more about cloud computing with Amazon, check out Learning Tree’s course, Cloud Computing with Amazon Web Services.

Kevin Kell

Committing to the Cloud

When I teach Learning Tree’s Introduction to Cloud Computing course I often get questions about how components from different public cloud vendors can be used together in an integrated application. The answer, of course, is: “it all depends on the application.” Here I would like to give a more comprehensive response than that.

This will be the first of a series of blog posts that will explore that question in some detail. I will go through the process of building a real-world application using services from Microsoft, Amazon and Google. Further, I will do all development and testing in the cloud. Dev/Test is often a good use case for organizations wishing to get started in the cloud. I will use a very minimal local machine which will allow me simply to connect to cloud resources using only RDP, SSH and HTTP.

My application will be designed using an n-tier architecture. There will be, at a minimum, a tier each for storage, business logic and presentation. Since I am attempting to illustrate interoperability between cloud vendors, it makes some sense to host the components of each architectural tier with a different provider. So, somewhat arbitrarily, I will host the storage on Amazon Web Services (exact service to be determined later), the business logic on Azure (so I can program in C#) and the presentation on Google App Engine (since it is very cost-effective).

Follow along over the next few weeks. We will go from square zero to a fully functional, interoperable web application that uses services from the “Big 3” public cloud providers. We will provision our development, testing and deployment environments from a lightweight client (tablet PC). All the while we will track cloud costs and development time.

This should be fun!

Kevin Kell

Application Architectures in Windows Azure, Part 1

Once requirements have risen above the trivial, applications have always benefited from good architectures. These architectures have evolved, over time, to take best advantage of the technologies available. Well-architected applications have the advantage of being easier to maintain and understand than un-architected ones. In addition, well-architected applications are easier to adapt to changes in the technology or business landscape.

Architectures almost always involve logical and/or physical separation of systems into reasonably abstracted layers. Consider the now well-known “n-Tier” architecture, which followed naturally from client/server. In the n-Tier architecture an application is modeled as a series of logical layers, and each component of the application is placed in one of them. That way, presentation-layer code is not required to know anything at all about the specific data storage technology in use. In theory this allows us to change implementation details at any level without great (or any) impact on the other layers. For example, I could write my business logic independently of whether my data is stored in a SQL Server or Oracle relational database or in some other format. I could also change my business logic without impacting my user interface, and so on.

Figure 1 n-Tier Architecture

Of course, all of this depends on building a good-quality logical model of the application domain and intelligently separating that model into layers. In most cases it is much more involved than simply using a wizard to bind a user interface widget directly to a SQL Server data source!
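The layering idea can be sketched in a few lines of code (Python here for brevity; the interface, class and method names are hypothetical examples of mine, not from any real application): the business-logic layer depends only on an abstract storage interface, so the concrete store can be swapped without touching it.

```python
from abc import ABC, abstractmethod

class ScoreStore(ABC):
    """Storage-tier abstraction: business logic depends only on this interface."""
    @abstractmethod
    def get_scores(self, player_id): ...
    @abstractmethod
    def save_score(self, player_id, score): ...

class InMemoryScoreStore(ScoreStore):
    """One concrete implementation; a SQL Server- or DynamoDB-backed version
    could be substituted without changing the business layer."""
    def __init__(self):
        self._scores = {}
    def get_scores(self, player_id):
        return self._scores.get(player_id, [])
    def save_score(self, player_id, score):
        self._scores.setdefault(player_id, []).append(score)

def average_score(store: ScoreStore, player_id) -> float:
    """Business-logic layer: written against the abstraction, not a vendor."""
    scores = store.get_scores(player_id)
    return sum(scores) / len(scores) if scores else 0.0
```

Swapping storage technologies then means writing a new ScoreStore implementation; average_score, and everything above it, is untouched.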

How Windows Azure has changed the playing field

While many of the concepts of n-Tier architecture remain relevant in Azure, some changes in thinking are required in order to fully reap the benefits of this cloud platform. In some cases it might be possible to simply host a well-architected n-Tier application in Azure. That application, however, might not be able to scale in the way that you would like. With the Azure platform, Microsoft has imposed certain restrictions on developers and offered certain benefits in return. Azure itself provides us with a basic starting architecture. As with many Microsoft technologies, the probability of success is increased by understanding what those architectural realities are.

Next week’s post will discuss the simple asynchronous web/worker role architectural pattern in Azure. This pattern has both similarities to and differences from the n-Tier architecture. It is one of the fundamental starting-point architectures that Azure provides.

In the meantime, for an interesting (and ongoing) discussion of n-Tier apps in “the cloud” you might like Cloud Computing Google Groups.

Kevin Kell

