Posts Tagged 'Amazon'

EC2 Security Revisited

A couple of weeks ago I was teaching Learning Tree’s Amazon Web Services course at our lovely Chicago-area Education Center in Schaumburg, IL. In that class we provision a lot of AWS resources, including several machine instances on EC2 for each attendee. Usually everything goes pretty smoothly. That week, however, we received an email from Amazon. They had received a complaint: it seemed that one of the instances we launched was carrying out Denial of Service (DoS) attacks against other hosts on the Internet. This is specifically forbidden in the user agreement.

I doubted that any of the course attendees were doing this intentionally, so I suspected that the machine had been hacked. The machine was based on an AMI from Bitnami and used public key authentication, though, so it was puzzling how someone could have obtained the private key. In any case, we immediately terminated the instance and launched a new one to take its place for the rest of the course.

In Learning Tree’s Cloud Security Essentials course we teach that the only way to truly know what is on an AMI is to launch an instance and do an inventory of it. I was pretty sure we had done that for this AMI but we might have missed something. I decided that I would do some further investigation this week when I got a break from teaching.

Serendipitously, when I sat down this morning there was another email from Amazon:

>>

Dear AWS Customer,

Your security is important to us.  Bitrock, the creator of the Bitnami AMIs published in the EC2 Public AMI catalog, has made us aware of a security issue in several of their AMIs.  EC2 instances launched from these AMIs are at increased risk of access by unauthorized parties.  Specifically, AMIs containing PHP versions 5.3.x before 5.3.12 and 5.4.x before 5.4.2 are vulnerable and susceptible to attacks via remote code execution.   It appears you are running instances launched from some of the affected AMIs so we are making you aware of this security issue. This email will help you quickly and easily address this issue.

This security issue is described in detail at the following link, including information on how to correct the issue, how to detect signs of unauthorized access to an instance, and how to remove some types of malicious code:

http://wiki.bitnami.com/security/2013-11_PHP_security_issue

Instance IDs associated with your account that were launched with the affected AMIs include:

(… details omitted …)

Bitrock has provided updated AMIs to address this security issue which you can use to launch new EC2 instances.  These updated AMIs can be found at the following link:

http://bitnami.com/stack/roller/cloud/amazon

If you do not wish to continue using the affected instances you can terminate them and launch new instances with the updated AMIs.

Note that Bitnami has removed the insecure AMIs and you will no longer be able to launch them, so you must update any CloudFormation templates or Autoscaling groups that refer to the older insecure AMIs to use the updated AMIs instead.

(… additional details omitted …)

<<

So it seems there was a security issue in the AMI that had gone undetected. This is not uncommon, as new exploits are continually discovered. That is why software must be continually patched and updated with the latest service releases. Since Amazon EC2 is an Infrastructure as a Service (IaaS) offering, this is the user’s responsibility.
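
As a rough illustration of what that responsibility can look like in practice, here is a minimal sketch using the boto3 Python SDK that passes a user-data script at launch so a new instance applies security updates as soon as it boots. The AMI ID, key pair and security group names are placeholders, and the update commands assume a yum- or apt-based Linux image.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Shell script run at first boot: pull down the latest security patches.
user_data = """#!/bin/bash
yum -y update                               # Amazon Linux / RHEL-style images
# apt-get update && apt-get -y upgrade      # Debian/Ubuntu-style images
"""

ec2.run_instances(
    ImageId="ami-12345678",        # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",         # placeholder key pair name
    SecurityGroups=["my-sg"],      # placeholder security group name
    MinCount=1,
    MaxCount=1,
    UserData=user_data,            # boto3 base64-encodes this for you
)
```

This only covers patching at launch, of course; long-running instances still need a regular patching routine.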

It was nice to have a resolution to the issue, which had been bothering me since it occurred. It was also nice that Amazon sent out this email and specifically identified the instances that could have a problem. They also provided links to specific instructions I could follow to harden each instance, as well as an updated AMI I could use to replace them.

In the end I think we will be replacing the AMI we use in the course. This situation was an example of the shared responsibility for security that exists between the cloud provider and the cloud consumer. You don’t always know whether you have a potential security issue until you look for it. Even then you may not be totally sure until something actually happens. In this case, once the threat was identified the cloud provider moved quickly to mitigate the damage.

Kevin Kell

The Cloud goes to Hollywood

Earlier this week I attended a one-day seminar presented by Amazon Web Services in Los Angeles entitled “Digital Media in the AWS Cloud”. Since I had recently been involved in a media project, I wanted to see what services Amazon and some of their partners offer specifically to handle media workloads. Some of these services I had worked with before; others were new to me.

The five areas of consideration are:

  1. Ingest, Storage and Archiving
  2. Processing
  3. Security
  4. Delivery
  5. Automating workflows

Media workflows typically involve many huge files. To facilitate moving these assets into the cloud, Amazon offers AWS Direct Connect, a service that lets you bypass the public Internet and create a dedicated network connection into AWS with transfer speeds of up to 10 Gb/s. A fast file transfer product from Aspera and an open source solution called Tsunami UDP were also showcased as ways to reduce upload time. Live data is typically uploaded to S3 and then archived in Glacier. It turns out the archiving can be accomplished automatically by setting a lifecycle rule on a bucket that moves objects to Glacier on a certain date or when they reach a specified age. Pretty cool. I had not tried that before, but I certainly will now!
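
As a concrete illustration of such a lifecycle rule, here is a minimal sketch using the boto3 Python SDK. The bucket name, prefix and 30-day threshold are just examples; the idea is simply that objects are transitioned to Glacier automatically once they reach a specified age.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the "masters/" prefix to Glacier 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-media-bucket",                 # example bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-masters-to-glacier",
                "Filter": {"Prefix": "masters/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
```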

For processing, Amazon has recently added a service called Elastic Transcoder. Although technically still in beta, this service looks extremely promising. It provides a cost-effective way to transcode video files in a highly scalable manner using the familiar on-demand, self-service cloud provisioning and payment model. This lowers the barrier to entry for smaller studios that may previously have been unable to afford the large capital investment required for on-premises transcoding capabilities.
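
For the curious, here is a minimal sketch of submitting a transcoding job with the boto3 Python SDK. The pipeline ID, object keys and preset ID are placeholders; a pipeline (which ties together the input and output S3 buckets and an IAM role) must already exist.

```python
import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

# Submit a transcoding job to an existing pipeline. The input key refers to an
# object in the pipeline's input bucket; the output lands in its output bucket.
transcoder.create_job(
    PipelineId="1111111111111-abcde1",          # placeholder pipeline ID
    Input={"Key": "uploads/raw-footage.mov"},   # example source object
    Outputs=[
        {
            "Key": "encoded/footage-720p.mp4",
            "PresetId": "1351620000001-000010",  # example preset ID (e.g. a system 720p preset)
        }
    ],
)
```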

In terms of security, I was delighted to learn that AWS complies with the best practices established by the Motion Picture Association of America (MPAA) for the storage, processing and privacy of media assets. This means that developers who create solutions on top of AWS are only responsible for ensuring compliance at the operating system and application layers. It seems that Hollywood, with its very legitimate security concerns, is beginning to trust Amazon’s shared responsibility model.

Delivery is accomplished using Amazon’s CloudFront service. CloudFront caches media files at globally distributed edge locations that are geographically close to users. It works very nicely in conjunction with S3, but it can also cache static content from any web server, whether that server is running on EC2 or not.

Finally, workflows can be automated using the Simple Workflow Service (SWF). This service provides a robust way to coordinate tasks and manage state asynchronously for use cases that involve multiple AWS services. In this way the entire pipeline, from ingest through processing, can be specified in a workflow and then scaled and repeated as required.
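
To give a flavor of what that looks like in code, here is a minimal sketch using the boto3 Python SDK that starts one execution of a hypothetical, previously registered media workflow. The domain, workflow type and task list names are made up, and the decider and activity workers that do the real work are not shown.

```python
import boto3

swf = boto3.client("swf", region_name="us-east-1")

# Kick off one execution of a previously registered workflow type.
# A decider and activity workers polling the task list perform the actual steps.
swf.start_workflow_execution(
    domain="media-pipeline",                    # hypothetical SWF domain
    workflowId="ingest-job-0001",               # unique ID for this execution
    workflowType={"name": "IngestAndTranscode", "version": "1.0"},
    taskList={"name": "media-task-list"},
    input='{"sourceKey": "uploads/raw-footage.mov"}',
    executionStartToCloseTimeout="3600",        # seconds, passed as a string
    taskStartToCloseTimeout="300",
    childPolicy="TERMINATE",
)
```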

So, in summary, there is an AWS offering for many of the requirements involved in producing a small or feature-length film. The elastic scalability of the services allows both small and large players to compete, since they pay only for the resources they need and use. In addition, there are many specialized AMIs available in the AWS Marketplace that are built specifically for media processing. That, however, is a discussion for another time!

To learn more about how AWS can be leveraged to process your workload (media or otherwise) you might like to attend Learning Tree’s Hands-on Amazon Web Services course.

Kevin Kell

Cloud Computing in Canada

As I return from my vacation and prepare for my upcoming class in Ottawa, I have been thinking about the current state of affairs regarding cloud computing in Canada. I love cloud computing and I love Canada. It is only natural that these two things should find their way into a blog post!

Firstly, none of the major cloud providers have yet seen fit to physically host a data center in the Great White North. Why that is, I don’t know. Perhaps it is just too gosh-danged cold. Secondly, some organizations in Canada are concerned that their data not be hosted in the U.S. So where does that leave us? Certainly some organizations in Canada could use a public cloud service like Amazon AWS or Microsoft Azure. Alternatively, they could choose to host their own private cloud.

None of this has anything to do with technology, of course. Legal and regulatory compliance will be the bane of cloud computing. Given enough time, I believe, regulatory requirements will expand to fill all the space available.

Does that mean, neighbors to the North, that you should turn your back on cloud computing? Absolutely not!

There is a technology there, and there is a “there” there. Stay focused on the technology. Cloud computing offers returns heretofore unseen in IT. On-demand provisioning, self-service access and pay-as-you-go pricing are but some of the benefits.

That said, there are at least a few noteworthy Canadian cloud computing providers:

http://www.tenzing.com

http://cloudpath.pathcom.com

http://cloudcomputing.radiant.net

Still, ultimately, the choice of a public cloud provider comes down to trust (but certainly not blind trust!). Canadian, American or other, the choice is yours. Who do you trust?

For me, myself, personally (and professionally), I think I can trust Amazon. Why? Do I think they will rip me off? No. Do I think they will lose my data? No. Do I think they will somehow do a worse job of protecting my data than others (or I) can do? No, I do not.

So, I encourage you to embrace the Public Cloud where it is appropriate for your organization. Regardless of where you are in the world I believe that you can utilize public cloud offerings (even those hosted in the U.S.) to your benefit.

Kevin Kell

Amazon Extends Services for Microsoft Technologies

As someone who works primarily with Microsoft technologies, I was delighted to see Amazon’s announcement yesterday that they are going to offer two additional options for developers.

First, Amazon RDS is now going to include SQL Server in addition to MySQL and Oracle databases.

SQL Server is available on RDS in a variety of versions and, like Oracle, can have license fees included in the hourly instance charge or can use a “bring your own license” model for existing Microsoft Volume Licensing customers who have SQL Server covered by Microsoft Software Assurance contracts. In either case, RDS allows SQL Server instances to be provisioned on an as-needed, pay-as-you-go basis. The managed service provides automated software patching, monitoring and metrics, automated backups and the ability to create on-demand database snapshots. This offering appears to be in direct competition with Microsoft’s own SQL Azure, so the future should prove interesting!
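
As an illustration, here is a minimal sketch using the boto3 Python SDK that provisions a SQL Server instance on RDS with the license fee bundled into the hourly charge. The identifier, instance class and credentials are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Provision a SQL Server Express instance with the license included in the
# hourly charge; "bring-your-own-license" is the alternative LicenseModel for
# editions covered by Software Assurance.
rds.create_db_instance(
    DBInstanceIdentifier="my-sqlserver-db",     # placeholder identifier
    Engine="sqlserver-ex",                      # Express edition; -web, -se, -ee also exist
    DBInstanceClass="db.t3.small",              # example instance class
    AllocatedStorage=20,                        # GiB
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",            # placeholder credential
    LicenseModel="license-included",
    BackupRetentionPeriod=7,                    # keep automated backups for 7 days
)
```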

Second, Amazon Elastic Beanstalk is now going to support .NET developers using the Windows/IIS/.NET solution stack, rounding out a service offering that already supports Java and PHP.

As a PaaS concept, Elastic Beanstalk is similar to, but somewhat different from, Microsoft Azure. Azure is in many ways a more managed approach that takes care of a lot of administration for you behind the scenes. Elastic Beanstalk, if desired, exposes the entire underlying infrastructure to you. Both offer a plug-in toolkit for Visual Studio that enables deployment directly from the development environment.
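
Since the exact Windows/IIS solution stack names vary by region and change over time, a reasonable first step is to query them rather than hard-code one. Here is a minimal sketch using the boto3 Python SDK; the application and environment names in the commented-out call are hypothetical.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Discover the currently available Windows/IIS solution stacks.
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
iis_stacks = [s for s in stacks if "IIS" in s]

for name in iis_stacks:
    print(name)

# An environment for a .NET application could then be created along these lines:
# eb.create_environment(
#     ApplicationName="my-dotnet-app",          # hypothetical application
#     EnvironmentName="my-dotnet-env",
#     SolutionStackName=iis_stacks[0],
#     VersionLabel="v1",
# )
```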

I am a great believer that competition is good and that certainly appears to be the case in the cloud as well. Amazon, in my opinion, has just raised the bar another notch. These two new services from Amazon will likely appeal to some developers familiar with Microsoft technologies. I wonder how, if and when Microsoft will respond!

Kevin Kell

Amazon DynamoDB Ups the Ante for NoSQL Database Service

This past week I watched with great interest as Amazon CTO Werner Vogels announced the launch of Amazon’s DynamoDB service. I feel that rather than trying to say something pithy I will just recommend that you check it out for yourselves.

DynamoDB is a NoSQL database service that is, in my opinion, head and shoulders above what Amazon previously offered with SimpleDB.

DynamoDB removes almost all of the administrative burden associated with provisioning a database for an application. Developers can simply create a database and assume it will be available to store and retrieve any amount of data and serve any level of traffic that may materialize. DynamoDB handles all the load balancing for you transparently behind the scenes.
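
Here is a minimal sketch of that create-and-go experience using the boto3 Python SDK. The table name, key schema and throughput values are just examples.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Create a table by declaring only its key schema and desired throughput;
# partitioning, replication and load balancing are handled by the service.
dynamodb.create_table(
    TableName="Orders",                          # example table name
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},
        {"AttributeName": "OrderId", "KeyType": "RANGE"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```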

Unlike some NoSQL databases, DynamoDB gives the developer the choice between strong consistency and eventual consistency on every read request. This allows for great control over what happens when data is read or written. Also, DynamoDB has built-in fault tolerance: it automatically and synchronously replicates data across multiple Availability Zones.
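
And here is a minimal sketch, again with the boto3 Python SDK, of that per-read consistency choice; the table and key values continue the example above and are equally hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

key = {"CustomerId": {"S": "C-100"}, "OrderId": {"S": "O-2001"}}  # example key

# Eventually consistent read (the default): cheaper, but may briefly lag
# behind the most recent writes.
eventual = dynamodb.get_item(TableName="Orders", Key=key)

# Strongly consistent read: reflects all writes acknowledged before the read.
strong = dynamodb.get_item(TableName="Orders", Key=key, ConsistentRead=True)
```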

DynamoDB also integrates with Amazon Elastic MapReduce (EMR). For example, it is pretty straightforward to use EMR to analyze data stored in DynamoDB and to archive the results in Amazon S3.

DynamoDB is another example of the storage options offered in the cloud. Developers should consider it for future development projects.

Kevin Kell

Importing Custom Images into Amazon EC2

A current project I am working on has a requirement that custom machine images be built and maintained so that they are usable both within Amazon EC2 and on virtual machines hosted outside of EC2. These images are all based on the Windows operating system. Since we want to build each machine image only once (we will have about 200 of them!), that left us with a couple of options:

  1. Build the custom image on EC2 and export it for use on outside virtual machines
  2. Build the custom image on an outside virtual machine and import it for use in EC2

This article explores the second option. I will outline some of the challenges I experienced along the way and how I resolved them. Hopefully this may help someone else who is trying to do the same sort of thing.

In theory, the process is simple. Amazon has provided command line tools and decent documentation on how to do this. As with many endeavors, however, the devil is often in the details.

I had wanted to start from VMware images. VMware virtual disk files use the vmdk format. I soon discovered, however, that not all vmdk files are created equal. That is, vmdk files used by vSphere are not the same as vmdk files used by VMware Workstation. The EC2 command line tools will complain if you try to use a Workstation vmdk. Unfortunately, I did not have vSphere at my disposal.

So, I decided instead to start from a vhd format disk. I know there are products that claim to convert one format to the other, but I did not want to go there at this point. I used Microsoft Virtual PC 2007 to create a base Windows Server 2008 virtual machine from an ISO image I downloaded through my MSDN subscription. At least that was a relatively easy way to get started. I then went on to customize that image for my requirements.

Next just use the tools and upload the image, right?

Well, for me it took a few tries. I learned after the first attempt that running ec2-upload-disk-image from my local machine takes over 24 hours to complete. My vhd file was about 5.5 GiB; not huge, but pretty big. I guess I have a slow upload speed. After the upload completes, some processing takes place on Amazon’s servers, which requires additional time. You monitor progress using ec2-describe-conversion-tasks. My first attempt seemed to get stuck: it never advanced beyond 6% complete.

For subsequent attempts I zipped the vhd file, uploaded it to S3 and then downloaded it to an EC2 instance I had provisioned with the command line tools. There I could unzip the file and run ec2-upload-disk-image. That whole process, end to end, took about 5 hours, so at least that was some improvement. My second effort spun up and I thought I was good to go.
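
For anyone attempting this today, a similar upload-then-convert flow is also exposed through the VM Import API. The sketch below uses the boto3 Python SDK rather than the ec2-upload-disk-image tool I used; it assumes the VHD has already been uploaded to S3, that the bucket and key names (placeholders here) are yours, and that the vmimport service role has been configured.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start an import from a VHD that has already been uploaded to S3.
task = ec2.import_image(
    Description="Custom Windows Server 2008 image",
    DiskContainers=[
        {
            "Description": "Base disk",
            "Format": "vhd",
            "UserBucket": {
                "S3Bucket": "my-image-staging-bucket",   # placeholder bucket
                "S3Key": "images/custom-win2008.vhd",    # placeholder key
            },
        }
    ],
)

# Poll the conversion, much as ec2-describe-conversion-tasks did.
status = ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]])
print(status["ImportImageTasks"][0].get("StatusMessage"))
```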

Not so fast! It seemed that even though the machine was running I had no way to connect to it. I had read in the documentation that Remote Desktop had to be enabled and that port 3389 needed to be opened on the Windows firewall. I had done all that. Still, no go.

For my next attempt I decided to have IIS started on the image so I could at least know that it was alive and communicating on the network. I also double-checked the remote connection settings, made sure that there were no conflicts on port 3389 and that it was definitely open on the Windows firewall.

This time I could see the web server but still couldn’t connect via RDP! To me that meant it had to be a firewall issue. After verifying that the EC2 security group had 3389 open, I decided I would try again, but this time I would turn the Windows firewall off completely. That worked! I was able to connect to my custom-created instance using RDP.
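
For reference, “having 3389 open in the EC2 security group” amounts to an ingress rule like the one below. This is a minimal sketch using the boto3 Python SDK; the group name and source address range are placeholders, and restricting the source range is preferable to opening RDP to the world.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow inbound RDP (TCP 3389) on the instance's security group.
ec2.authorize_security_group_ingress(
    GroupName="my-import-sg",        # placeholder security group name
    IpProtocol="tcp",
    FromPort=3389,
    ToPort=3389,
    CidrIp="203.0.113.0/24",         # example range; avoid 0.0.0.0/0 where possible
)
```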

Is there a better way to do this? Probably. However, at least now I know there is a way to achieve the goal! Of course I am not done yet. Make it work, make it right, make it fast!

For more about cloud computing with Amazon Web Services, Learning Tree is developing a new course dedicated to that very topic!

Kevin Kell

Amazon Cloud Outage: The Post Mortem Continues

It has been a little over a week since the Amazon cloud outage. Pundits continue to weigh in and will likely do so for some time to come. An internet search for “Amazon cloud outage” returns over 450,000 hits, several thousand of which were in the last 24 hours. It seems there is plenty of blame to go around.

I had the good fortune to be teaching Learning Tree’s introductory Cloud Computing course last week in Los Angeles. Naturally this topic came up when we were discussing barriers to cloud adoption. One of the students offered an analogy which I thought was quite apt: while perhaps not perfect, we can consider public clouds vs. on-premises data centers as roughly comparable to flying vs. driving. Statistically you are much safer flying than driving. Each time a plane crashes, however, it makes headline news because of the magnitude of the disaster. In contrast, we hear very little about the countless traffic fatalities that occur on a daily basis.

Amazon has released their official response in “message 65648”. It seems that the root cause of the outage was the failure of some Elastic Block Store (EBS) volumes within a single Availability Zone in their US East Region. Last week Amazon notified all affected customers (including yours truly) by email and indicated that there would be a 10-day credit equal to 100% of the usage of EBS volumes, EC2 and RDS instances within the affected Availability Zone. In my case that is acceptable. Businesses that depended on Amazon’s cloud for their revenue may be less easy to mollify.

What has become clear is that by moving to the cloud an organization is not absolved of the ultimate responsibility for ensuring that its systems perform. Some organizations, such as Netflix, were able to survive the outage (albeit not without some pain) through careful up-front planning and architecture. For others the Amazon outage was disastrous. The key, it seems, was a healthy dose of paranoia about the cloud and proper disaster recovery and contingency planning right from the beginning. These are good lessons, and for many they were learned the hard way.

This incident is certainly an embarrassment for Amazon. It will, perhaps, cause some to proceed more cautiously in adopting cloud technologies. It is doubtful, however, that there will be a reversal of the trend toward cloud adoption. The benefits of cloud computing continue to outweigh the risks for many organizations, especially if those risks are well managed.

In fact, Forrester Research estimates that the market for cloud services will grow from $41 billion in 2010 to over $240 billion by 2020. The Amazon incident is a setback, to be sure, but it is only a speed bump on the road to cloud computing. The experience will be used by Amazon and by consumers of cloud services to build better systems, improve their planning and take steps to ensure that something like this does not happen again.

So, I will continue to look at cloud computing solutions for use at Learning Tree and for my consulting clients. The genie is out of the bottle and once that has happened there is no going back. We are reminded, however, that disasters can and do occur. The cloud does not change that and the onus is still on system developers, administrators and owners to ensure fail-safe conditions subject to the organization’s Recovery Point and Recovery Time Objectives.

Kevin

