
Amazon S3 Storage: A Flexible Cloud Storage Solution

Amazon S3 storage is one of the longest-standing and most widely used AWS services. For those unfamiliar with S3, it stores individual data items, referred to as objects, ranging in size from one byte to five terabytes, in scalable storage. There is no limit to the number of objects that can be stored. Amazon recently published usage statistics for this service for the end of Q1 2012: S3 now holds 905 billion objects, Amazon routinely handles 650,000 requests per second for access to them, and the number of objects grows by more than one million daily. This is an incredible amount of data.

These results motivated me to write about the different ways in which users access S3 and what they are using it for. Data in S3 is organized into storage buckets: URL-addressable entities created in S3 into which data objects are placed. A simple usage scenario is storing Web application assets in S3. A typical Web application has many images, and handling the delivery of those images to remote clients, even when cached, places an unnecessary load on the application servers. A cost-effective way to offload this work is to place the images on S3. Since the objects are URL addressable, HTML pages can link to the images directly, so as pages are requested the images are delivered from S3 storage, freeing the servers the Web application runs on from this workload.
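
To make the scenario concrete, here is a minimal sketch of uploading an image with the Python boto3 SDK; the post does not name an SDK, and the bucket and file names below are hypothetical:

    import boto3

    s3 = boto3.client("s3")

    # Upload an image so it can be served directly from S3.
    # "my-app-assets" and "logo.png" are hypothetical names.
    s3.upload_file(
        "logo.png",                        # local file to upload
        "my-app-assets",                   # destination bucket
        "images/logo.png",                 # object key within the bucket
        ExtraArgs={
            "ACL": "public-read",          # make the object publicly readable
            "ContentType": "image/png",    # serve with the correct MIME type
        },
    )

The HTML pages can then reference the object at its bucket URL (for this hypothetical bucket, https://my-app-assets.s3.amazonaws.com/images/logo.png), so image requests never touch the application servers.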

Another way that S3 is being used is via the AWS Storage Gateway. This service, still in beta, enables a cost-effective, secure, off-premise data backup and disaster recovery mechanism. On-premise storage appliances connect to the Storage Gateway, which provides a secure communication channel to S3; data stored through the gateway is automatically encrypted. For disaster recovery, point-in-time snapshots of on-premise data can be taken and sent to S3, providing secure and immediate access to backups should the need arise.

What is interesting is the number of different ways AWS provides for accessing S3. The simplest is the Web management console, but this is only really practical for small amounts of data. Other routes include a Web services API, command-line tools, AWS Import/Export for large data sets, the Storage Gateway mentioned above and AWS Direct Connect for dedicated high-throughput network links, as well as many third-party backup services.
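
For instance, the Web services API can be called from any of the AWS SDKs. A minimal sketch, again using the Python boto3 SDK as an assumed client, that lists the buckets in an account:

    import boto3

    # The client picks up credentials from the environment
    # (e.g. AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
    s3 = boto3.client("s3")

    # List every bucket in the account via the S3 API.
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])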

With a range of security settings on S3 buckets controlling who has read, write and update access, S3 offers an incredibly safe, cost-effective and scalable storage solution. If you have a storage requirement that can be met off-premise but needs to be low latency, scalable and secure, I urge you to seriously consider Amazon S3. In my experience over the last two years it's a fantastic service, and the usage statistics are strong evidence that I am not the only one who thinks so.
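
As one illustration of those security settings, a bucket policy can grant the public read-only access while writes remain with the account owner. A sketch with the Python boto3 SDK; the bucket name is hypothetical, and policies are only one of S3's access-control mechanisms (ACLs are another):

    import json
    import boto3

    s3 = boto3.client("s3")

    # Hypothetical policy: anyone may read objects in the bucket,
    # while write and update access stays with the account owner.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-app-assets/*",
        }],
    }

    s3.put_bucket_policy(Bucket="my-app-assets", Policy=json.dumps(policy))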

For related course information, check out Learning Tree's course Cloud Computing with Amazon Web Services™.

Chris Czarnecki

Amazon S3 Object Expiration

Amazon Simple Storage Service (S3) is a low-cost storage solution for storing all types of data. One example of its usage is serving static resources for dynamic Web applications. For instance, my company places all images for our Web applications on S3. This reduces the load on the application servers, leaving them free to deal with requests for dynamic data. Images are uploaded to an S3 bucket, an entity created on S3 to hold your data. Each bucket has a unique URL, which can then be mapped to a CNAME for the domain where the main application is hosted. Such an approach is simple and very cost effective, with current rates for S3 storage at $0.14 per GB per month. Other costs include requests (PUT, POST, GET, etc.) at $0.01 per 1,000 requests, and data transfer out, which is free for the first GB and then starts at $0.12 per GB.
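
As a rough back-of-the-envelope illustration of those rates, the Python sketch below prices a hypothetical month of usage; the workload figures are invented and real AWS billing is tiered, so treat the result as an approximation only:

    # Approximate monthly cost at the rates quoted above.
    storage_gb = 50            # hypothetical: 50 GB of stored images
    requests = 2_000_000       # hypothetical: 2 million requests
    transfer_gb = 100          # hypothetical: 100 GB transferred out

    storage_cost = storage_gb * 0.14                # $0.14 per GB per month
    request_cost = (requests / 1000) * 0.01         # $0.01 per 1,000 requests
    transfer_cost = max(0, transfer_gb - 1) * 0.12  # first GB free, then $0.12/GB

    total = storage_cost + request_cost + transfer_cost
    print(f"storage ${storage_cost:.2f} + requests ${request_cost:.2f} "
          f"+ transfer ${transfer_cost:.2f} = ${total:.2f}")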

Another use for S3 is storing application log files. One of the downsides here is that log files build up over time and, beyond a certain point, are no longer needed. On self-managed servers these files are normally compressed and ultimately removed by logrotate or a similar facility. This can be made to work for S3, but it requires a lot of scripting. Today, Amazon released an object expiration facility for S3 storage. This is an elegant solution for automatically deleting S3 objects once they are no longer required, and log files are a perfect example of where it is invaluable. Object expiration rules can be configured from the AWS Management Console, and objects that match a rule are deleted automatically when they expire.
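
The same rules can also be set programmatically. A minimal sketch with the Python boto3 SDK (a later SDK, used here purely for illustration), expiring objects under a hypothetical logs/ prefix after 30 days:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical rule: delete anything under logs/ once it is 30 days old.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-log-bucket",  # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }]
        },
    )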

Chris Czarnecki

