You don’t need to go to the movies to get pumped up about superheroes. Migrating from a traditional data center to the cloud completely changes the way you interact with your IT environment, making you into a real-life superhero. The cloud gives you the ability to be more agile, more efficient and more scalable, like an IT professional with superhuman strength.

OK, so maybe it’s not the same as the kinds of heroes we see on the big screen. Obviously, Captain Efficient, Super-Scalable Man (or Woman) and the Incredible Agile aren’t likely to appear in any movie theaters in the near future. However, the transformation from the traditional data center to the cloud does make for a compelling story. Picture it…

“Our hero, an overworked, underpaid IT manager responsible for an aging, ineffective and resource-intensive environment, accidentally swallows a radioactive USB stick, develops cloud-related superpowers, migrates everything to a secure, fast and cost-efficient network in the sky and finally gets a promotion.”

Sounds like a great story, right? And it’s one I witness daily (minus the radioactive USB stick). Yet, like most great movies, the sequel is seldom as good as the original. And that’s another story I witness on a regular basis. As soon as businesses get their infrastructure in the cloud and start to realize the benefits in terms of management overhead, performance, scalability and cost, there’s a tendency to sit back, relax and enjoy it, which does not make for captivating viewing. Here’s what the movie looks like now…

“Our hero, now a senior cloud manager, gets to work at 9, sends a couple of emails, has a long lunch, surfs the web, goes home at 5. Repeat.”

We go from transformative change to static inaction. This approach is a throwback to the days and ways of the traditional data center, when it made sense to sit back and rest on your laurels for a while. That’s no longer the right approach. Once you’re in the cloud, you don’t need to wait three to five years before embarking on a redesign and replacement of your IT infrastructure. It can be done at any time to improve performance. I’m not advocating change for the sake of change, but the Amazon Web Services (AWS) cloud is constantly evolving, with improvements made daily to services old and new. Today’s IT superhero can keep up with those daily improvements with ease.

The improvements can be as simple as the addition of a new configuration option, or as exciting as a brand-new, ground-breaking service, or anything in between. Whatever it is, it can be tested and implemented right now and not months or years into the future.

Here are four examples of things our hero could be doing right now to further improve an AWS environment:

1. Instance Types

AWS is constantly updating the EC2 instance types available for your virtual machines. This is a win-win for EC2 users, as each new instance generation is typically not only more powerful but also cheaper than the one it replaces.

In recent months, the t3 and m5 instance types have been released to supersede the t2 and m4 types respectively. To take advantage of the new generations, all that needs to be done is to stop each EC2 instance, change its instance type and start it again. This is an easy way to improve performance and reduce your AWS spend. Learn more at https://aws.amazon.com/ec2/instance-types/.
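To make that concrete, here is a minimal boto3 sketch of the stop, modify and start cycle. The region, instance ID and target instance type are hypothetical placeholders, and a real change would be scheduled in a maintenance window since the instance is briefly offline:

```python
import boto3

# Hypothetical values for illustration only
REGION = "eu-west-1"
INSTANCE_ID = "i-0123456789abcdef0"
NEW_TYPE = "m5.large"  # e.g. replacing an m4.large

ec2 = boto3.client("ec2", region_name=REGION)

# The instance type can only be changed while the instance is stopped
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# Switch to the newer generation, then bring the instance back up
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": NEW_TYPE},
)
ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
```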

2. Elastic Load Balancers (ELBs)

ELBs have been around for a long time. The original Classic Load Balancer has since been joined by the Application and Network Load Balancers.

The Classic ELB was purpose-built for EC2-Classic, the old AWS networking platform that predates Amazon VPC. However, I still regularly see Classic ELBs in use when they should be replaced with the newer versions, which are designed specifically for HTTP/HTTPS (Application) or TCP (Network) traffic.

Switching to the new versions does require a bit of work, but nothing too extreme, and you will be rewarded with an improved load balancer and a slightly reduced hourly cost. Learn more at https://aws.amazon.com/elasticloadbalancing/.
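To give a sense of the work involved, here is a minimal boto3 sketch that creates an Application Load Balancer, a target group and an HTTP listener, then registers an existing instance. The names, subnet, security group, VPC and instance IDs are all hypothetical placeholders; a real migration would also need to carry over health checks, TLS certificates and DNS records from the Classic ELB:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")

# Create the Application Load Balancer (placeholder subnets and security group)
alb = elbv2.create_load_balancer(
    Name="example-app-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# A target group replaces the instances attached directly to the Classic ELB
tg = elbv2.create_target_group(
    Name="example-app-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckPath="/",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# The listener forwards incoming HTTP traffic to the target group
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Register an existing instance that was previously behind the Classic ELB
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0", "Port": 80}],
)
```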

3. S3 & Glacier

Almost everyone who uses AWS uses Amazon S3, the service that provides an almost unlimited amount of storage space. However, S3 Standard isn’t the only storage class available. Depending on your requirements, there are different classes for different use cases.

The latest class to be released is S3 One Zone-Infrequent Access, which is for data that does not require the availability and resilience of S3 Standard or S3 Standard-IA storage but needs to be available rapidly. It is used for storing things like secondary backup copies, or for S3 Cross-Region Replication data from another AWS Region.

Then there is Glacier, the low-cost storage class that provides high durability and resilience but with slower retrieval speeds. It is perfect for archiving your old data for safekeeping. As your data ages, it can be moved automatically between these storage classes using S3 Lifecycle policies, with no effort on your part. Learn more at https://aws.amazon.com/s3/storage-classes/.
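As a rough sketch of what such a lifecycle configuration might look like, the boto3 call below transitions objects under a hypothetical backups/ prefix to Standard-IA after 30 days, One Zone-IA after 90 days and Glacier after a year. The bucket name, prefix and day thresholds are illustrative assumptions, not recommendations:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix; adjust the thresholds to your retention needs
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "ONEZONE_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```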

4. Elastic File System (EFS)

EFS allows you to create huge file systems that can be accessed in massively parallel fashion from multiple EC2 instances and other AWS resources. It is a great product. However, because throughput is allocated based on the size of the file system, it has been known to suffer from performance issues with small amounts of data and high I/O. Until recently, the only workaround when an EFS file system hit its throughput limit was to pad it with large dummy files to earn a bigger allocation.

Not anymore: AWS has recently added a feature called Provisioned Throughput that allows you to specify the level of throughput you require regardless of the size of the file system. Learn more at https://aws.amazon.com/efs/.
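As an illustration, the boto3 sketch below creates a file system in provisioned throughput mode and then raises the rate on an existing one. The region, creation token and throughput figures are assumptions for the example; note that AWS limits how often the throughput settings of an existing file system can be changed:

```python
import boto3

efs = boto3.client("efs", region_name="eu-west-1")

# New file system with a guaranteed 64 MiB/s regardless of how much data it holds
fs = efs.create_file_system(
    CreationToken="example-provisioned-fs",
    PerformanceMode="generalPurpose",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=64.0,
)

# Raise the provisioned rate on the same file system later if the workload grows
efs.update_file_system(
    FileSystemId=fs["FileSystemId"],
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=128.0,
)
```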

These are only four examples of the many options available for seasoned users of AWS to improve their environments. What better way for our hero to write a better script for the next installment of the cloud superhero franchise and beyond than by staying on top of these regular improvements! Although we probably won’t see Super-Scalable Man (or Woman) in a blockbuster movie any time soon, darn it...
 
