This post originally appeared on our cloud microsite and has been moved here following the discontinuation of that site's blog section.

5 valuable considerations

Going cloud is the new black, and has been for a few years already. Customers ask us: What is your Cloud Strategy? Can you help us move to the cloud? Are your services Cloud Compliant? (Is that even a valid term?)

Burning your bridges and moving everything to the cloud may sound compelling, and public and private cloud services can be quite rewarding. No hardware responsibility. Pay for what you actually use, not what you might use. Scale up. Scale down. Scale out. Infinite storage. Multi data center. Multi location. High availability. Global location based load balancing. Functions as a service. Databases as a service. Anything as a service! Why wait?

All players in the IT field should consider cloud technologies, but when planning a new stack, there are issues to take into account. Not all services suit every cloud setup. Here are a few real-life scenarios:

1. Apps in the cloud, cache locally

A media house had re-implemented their production stack at a public cloud provider. The apps worked great, the developers were happy, the performance was satisfactory, and the users were content. But costs went up, as the cloud provider's toll on network traffic was quite high. A CDN could have been a solution, but was considered too costly and unnecessary, as most users were concentrated around a few central locations.

Keeping the stack in the cloud and adding local web caches in our data centers ended up being a good solution, keeping the high traffic volumes on low cost lines while sending only backend traffic to the cloud.
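As a minimal sketch of that setup, a local Varnish instance can front the cloud stack with a configuration along these lines (the hostname and TTL here are hypothetical):

    vcl 4.0;

    backend default {
        .host = "app.example-cloud.net";   # hypothetical cloud origin
        .port = "80";   # plain HTTP; TLS towards the origin needs a separate TLS proxy
    }

    sub vcl_backend_response {
        # Keep cacheable responses in the local data center for a few
        # minutes, so the bulk of the traffic stays on cheap local lines
        # and only cache misses cross the metered link to the cloud.
        if (beresp.ttl > 0s) {
            set beresp.ttl = 5m;
        }
    }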

For high volume sites, consider caching in local data centers

2. Apps locally, cache in the cloud

Another media house had a classic in-house publication system, but readers around the globe. The volume was quite low, but the content was high quality, delivered to paying customers worldwide. Users in South-East Asia or on the US West Coast got high latency and slow content loading. In effect building a scaled-down CDN, we put cache servers in public cloud provider locations close to the users, and got happy readers. Using the Varnish Plus product, local users got cached content even though it was protected by a paywall.
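The open source equivalent of that pattern looks roughly like the fragment below. Note the assumption: this sketch naively trusts the mere presence of a session cookie, whereas the actual Varnish Plus paywall feature validates each session before serving from cache.

    vcl 4.0;

    backend default {
        .host = "cms.example.no";   # hypothetical in-house publication system
        .port = "80";
    }

    sub vcl_recv {
        if (req.http.Cookie ~ "session=") {
            # Assumed already-validated subscriber: drop the cookie so all
            # entitled readers share a single cached copy of each article.
            unset req.http.Cookie;
        } else {
            # Anonymous visitors go straight to the origin and its paywall.
            return (pass);
        }
    }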

For low volume sites, consider local caching in the cloud

3. Storage in the cloud, data within borders

A media content provider was moving their services and content library to the cloud. They considered using their existing platform to build a library product delivering storage and search for images and video, aimed at public sector use, and asked us for advice. Their platform used Amazon's AWS S3 for storage and AWS Glacier for backup. It then turned out that storing data for public services abroad might have legal consequences, and we had to seek legal advice on document storage. The proposed solution was to use Ceph based, S3 compatible storage services in data centers within Norway. With this solution, network traffic expenses became a factor.
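Since the Ceph RADOS Gateway speaks the S3 protocol, the application side of such a move is mostly a matter of pointing the client at a different endpoint. A sketch, with a hypothetical Norwegian endpoint and bucket:

    import boto3

    # Hypothetical S3 compatible Ceph endpoint in a Norwegian data center;
    # the credentials are placeholders issued by the local gateway, not AWS.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.no",
        aws_access_key_id="...",
        aws_secret_access_key="...",
    )

    # The S3 API itself is unchanged, so existing application code keeps working.
    s3.upload_file("video.mp4", "media-library", "video.mp4")
    print(s3.list_objects_v2(Bucket="media-library")["KeyCount"])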

Storing data abroad may have consequences

4. Anything as a service, cost by call

Skipping the overhead of creating and maintaining virtual machines is tempting. Public cloud providers offer Database as a Service variants compatible with well-known databases, including most SQL and NoSQL flavours. Add Functions as a Service, and you may be able to build a complete serverless solution, including the data backend and APIs.
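A sketch of what such a stack can look like, assuming an AWS style Lambda handler in front of a hypothetical DynamoDB table named articles:

    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("articles")   # hypothetical Database-as-a-Service table

    def handler(event, context):
        # Every invocation of this function, and every database call it
        # makes, is billed per request.
        item = table.get_item(Key={"id": event["pathParameters"]["id"]})
        # default=str handles DynamoDB's Decimal numbers in the JSON output.
        return {"statusCode": 200,
                "body": json.dumps(item.get("Item", {}), default=str)}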

A customer built a system using cloud provided services for database and message queue. It worked flawlessly in development and test, but when production traffic hit the solution, cost became a major issue, as the services were billed per request.
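Back-of-the-envelope arithmetic shows how quickly per-request billing adds up; the prices below are made up for illustration only:

    # Made-up prices (USD) for illustration; check your provider's real rates.
    requests_per_second = 500
    seconds_per_month = 60 * 60 * 24 * 30
    price_per_million_calls = 0.40       # assumed pay-per-request price
    flat_vm_cost = 150.0                 # assumed monthly cost of self-managed VMs

    monthly_calls = requests_per_second * seconds_per_month
    pay_per_call_cost = monthly_calls / 1_000_000 * price_per_million_calls

    print(f"{monthly_calls:,} calls/month: "
          f"${pay_per_call_cost:,.2f} pay-per-call vs ${flat_vm_cost:,.2f} flat")
    # 1,296,000,000 calls/month: $518.40 pay-per-call vs $150.00 flat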

When using anything as a service, weigh traffic driven costs against the overhead of server management.

Also, «serverless» computing is of course a lie. The servers are still there; the interface just hides the database setup. In a test using one public cloud provider's MySQL variant, a setup with multiple database slave instances proved quite non-resilient to the sudden death of an instance, with downtime for the service while the slaves were re-synced.

There is no such thing as «serverless» computing, just another level of abstraction in front of another computer

5. Trust the cloud provider, the cloud provider is your friend

A well-known story tells how a complete site was taken down by a public cloud provider's robots looking for suspicious activity. With the provider's customer chat down and no on-call service available, the developers were forced to handle the incident by waking up the CFO and manually sending credit card information to the provider. Read the full story at https://bit.ly/2yWQBnD

Last year, a major part of Amazon's S3 storage system went down and was unavailable for hours, causing trouble for thousands of sites. Read the details at https://amzn.to/2melOup

Nobody is perfect. Not even public cloud providers. Also, small fish are … small, so who are you gonna call?

Ingvar Hagelund

Team Lead, Application Management for Media at Redpill Linpro

Ingvar has been a system administrator at Redpill Linpro for more than 20 years. He is also a long time contributor to the Fedora and EPEL projects.
