Cloud computing: a story of incentives

Many businesses today run “in the cloud”. What this often means is that they have abstracted away the hardware entirely. Large corporations like Amazon, Google, Microsoft, or IBM operate the servers; the business only needs to access the software remotely.

In theory, this means that you can adjust your capacity to match your needs. If you need only twelve servers most of the year, then you pay for only twelve servers. And on the specific days when you need 100 servers, you pay for the 100 servers on those days only. You may even use “serverless” computing and pay for just what you use, saving even more money.
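
This elasticity argument comes down to simple arithmetic. Here is a minimal sketch, using a hypothetical hourly rate rather than any provider's actual pricing:

```python
# Compare paying for peak capacity all year with scaling up only on
# the days it is needed. The hourly rate is a made-up placeholder.
HOURLY_RATE = 0.10  # hypothetical dollars per server-hour

def fixed_cost(servers, days=365):
    """Provision for peak load year-round."""
    return servers * 24 * days * HOURLY_RATE

def elastic_cost(base_servers, peak_servers, peak_days, days=365):
    """Pay for the baseline, plus the extra servers on peak days only."""
    base = base_servers * 24 * days * HOURLY_RATE
    burst = (peak_servers - base_servers) * 24 * peak_days * HOURLY_RATE
    return base + burst

# Twelve servers most of the year, 100 servers on ten busy days:
print(fixed_cost(100))            # provisioned for peak all year
print(elastic_cost(12, 100, 10))  # scale up only when needed
```

Under these made-up numbers, elastic provisioning costs a fraction of provisioning for the peak year-round, which is the theory in a nutshell.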

Is this the whole story?

I am not so sure.

A tremendous benefit of cloud computing for developers and operations people is that it cuts through the red tape. If you are using the cloud, a developer can get ten more servers at the click of a button. I have met credible people from well-known businesses who told me that their developers have a practically unlimited ability to allocate new cloud resources.

If we make it easy for developers to quickly use lots of computing resources, they may effectively come to think of computing and storage as infinite. It also wipes away any incentive to build efficient systems.

You may end up warming the planet. It is not a joke: training a single machine-learning model can have a carbon footprint more than four times that of a car over the car’s entire lifetime.

Your ten servers end up needing a much more complicated architecture than your single laptop. But that is not obviously a negative for some developers who get to try out fancier-than-needed software, always useful on a resume.

Developers are not alone. When I take pictures these days, I never delete them. They get uploaded to the cloud and I forget about them. Two decades ago, when I managed the scarce storage for digital pictures on my PC, I would spend a lot of time deleting bad pictures. I no longer care.

Further reading: The computing power needed to train AI is now rising seven times faster than ever before and Facebook AI research’s latest breakthrough in natural language understanding is running up against the limits of existing computing power. (credit: Peter Turney)

Published by

Daniel Lemire

A computer science professor at the University of Quebec (TELUQ).

9 thoughts on “Cloud computing: a story of incentives”

  1. Cut the red tape? I am not sure, Daniel, given the many tools available or in development to control how those resources are made available according to rules, most of the time budget rules.

    “Cloud”, on-demand resources also cost much, much more. Running your standard operations this way is very costly. It can make sense for “spot” tasks, but there are not that many real cases.

    You would think that “private cloud” is a good option, but it remains difficult today. Installing a system is OK. Dealing with the problems that arise in those highly complex systems requires large teams of highly skilled people. Most big companies fail at it.

    So, well, I am quite surprised by the enthusiasm for those on-demand cloud solutions.

    And I am not even discussing other aspects, like storing sensitive data.

    1. “Cloud”, on-demand resources also cost much, much more. Running your standard operations this way is very costly.

      It might be, but it is still very popular. Whenever I meet developers these days who are not part of a super large organization (e.g., not from the government) and I ask them whether they use the public cloud, the answer is almost always positive. And the answer is typically “we use AWS”.

      Cut the red tape? I am not sure, Daniel

      Can you elaborate on your counterpoint? If you work for a company that says “start as many instances as you’d like, you don’t need approval”, then clearly, you can have as much fun as you’d like. Compare this with the burden of having to request additional servers, something that assuredly requires many levels of approval at most places.

      1. Well, like the other commenter, I do not know many companies where developers can use the credit card without control… And so-called on-demand cloud services are not at all free.

        So, most companies I know of have an authorisation process for this kind of service. And it is legitimate: it is spending.
        And most of them use software to implement their policies.

        Those who don’t have, or will have, bad surprises. Those services are full of cost traps.

        Do not be mistaken: I use them for personal projects. I love GCE. I love being able to rent a server billed per minute of use. I love using such a server to prepare an ML model, then plugging in four on-demand high-end GPUs to do some work.
        But if that were my daily use, it would cost me much less to buy this hardware.
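
        The rent-versus-buy trade-off above can be sketched as a break-even calculation. A minimal sketch, with purely hypothetical prices (not real quotes):

```python
# Back-of-the-envelope break-even for renting versus buying a GPU
# machine. Both numbers below are hypothetical placeholders.

def breakeven_hours(purchase_price, hourly_rental_rate):
    """Hours of use at which rental fees equal the purchase price."""
    return purchase_price / hourly_rental_rate

hours = breakeven_hours(purchase_price=8000.0, hourly_rental_rate=2.50)
print(hours)       # → 3200.0 hours of rental to match the purchase price
print(hours / 24)  # roughly 133 days of round-the-clock use
```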

  2. I often see that cloud services make it easier to have resources provisioned, compared with more traditional approaches, but this may only last until companies realise that giving developers unfettered access to the credit card has never been a great idea.

    Don’t forget traditional server hosting, which has been around for a very long time (much longer than “cloud hosting”) and does handle things like managing hardware for clients. What makes “cloud hosting” attractive over “regular hosting”, from what I can tell, is:

    - near-instant provisioning of resources, and hourly billing
    - resource management via APIs (somewhat related to the above)
    - hosted or “managed” services
    - cargo-cult hype

    Non-“cloud” hosts adopt some of the above, so the lines between the two are blurring a bit these days.

    Many web service developers like to think that they should be “scalable” (insert “webscale” meme here) in the sense that servers should automatically scale up if load increases. The prospect is attractive, particularly to startups expecting viral growth (because all startups believe that this will happen), as their service won’t go down even if millions of visitors suddenly show up. Of course, such scalability is often unnecessary if sufficient overprovisioning is employed, but I can only say that from experience. Many don’t have such experience, and the notion of scalability that cloud services provide can help answer an unknown, so it makes people feel safer, I guess.

    The “managed” services can help further speed up provisioning. Note that I put “managed” in quotes because I think it is mostly a misnomer (to the benefit of cloud providers): they aren’t managed (beyond the hardware/network management that any server host provides); rather, they’re pre-configured.
    Pre-configured services do mean that those not versed in configuration can quickly have something set up, and they can help avoid common pitfalls (like forgetting to set up backups). Personally, as someone who likes to have full control over everything, these pre-configured services aren’t attractive to me, but many developers simply don’t care about all the nitty-gritty details, so I can see their point.

    I see a lot of “tech envy” in developers. There are plenty of blog posts about people being incredibly successful using the latest tech stacks or applications (or the cloud, for that matter), which glamorises these sorts of things. Instead of sticking with old and boring technologies (e.g. relational databases), developers like to always change things around, try new stuff (e.g. “managed NoSQL databases”) and create new things. This also often helps with career prospects (professional experience in the latest tech), so you can see why there’s an incentive to adopt the latest trends.
    I suppose many web apps aren’t that exciting if you break down the requirements. Many are basically “CRUD” (create/read/update/delete) apps, which are essentially fancy wrappers around a database. So developers invent complexity to keep things interesting, such as adopting complicated architectures (microservices, message queues, orchestration, multiple data stores, etc.). It’s often easy to justify these designs and changes (“we need to be webscale”, “separate concerns”), and they often sound attractive to management, who like to tout all the changes and improvements they’ve helped drive while downplaying the downsides introduced along the way.

    In a sense, these sorts of “fashion trends” aren’t limited to developers; you see this in various other industries too. There are reasons to adopt the cloud, but there’s also a lot of unnecessary hype around it.

    Another thing to consider is that “cloud” is considered industry standard these days (i.e. “if you’re not on cloud, then why not?”). Furthermore, names like AWS, Google and Microsoft have credibility behind them. If you use AWS, and it goes down, then Amazon is just having a bad day. However, if you go with a lesser known provider, and it goes down, you’ll be forever justifying why you didn’t go with AWS.

    In terms of efficiency, hardware is cheap and developers are expensive. As hardware becomes ever more powerful, this relationship becomes truer by the day. The effect this has on the environment is rarely a concern unless it has a noticeable effect on the company’s bottom line (PR could be another angle, but it’s often not hard to manage in this regard).

    Cloud hosting does have various downsides. Compared to traditional server hosting, cloud hosting is ridiculously expensive for what you get. For example, it’s not unusual to get 5-10x more bang for your buck at places like OVH compared to AWS, and that’s ignoring what they charge for bandwidth (which is even crazier). Unless you can really make use of dynamic scaling (i.e. have workloads which vary greatly), cloud will almost certainly cost more than regular server hosting. However, in most organisations, developers don’t really care about what it costs, as it’s rarely their concern.
    (some places adopt a hybrid approach – baseline load is handled by dedicated servers, and dynamic load handled by cloud)
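
    The hybrid approach can be put into numbers. A minimal sketch with illustrative placeholder prices (not real quotes from any host):

```python
# Hybrid hosting sketch: dedicated servers carry the baseline load,
# cloud instances absorb the spikes. All prices are made up.
DEDICATED_MONTHLY = 100.0  # hypothetical dedicated server, per month
CLOUD_HOURLY = 0.50        # hypothetical comparable cloud instance

def all_cloud_cost(servers, hours):
    """Everything runs on hourly-billed cloud instances."""
    return servers * hours * CLOUD_HOURLY

def hybrid_cost(baseline_servers, burst_servers, burst_hours):
    """Baseline on dedicated boxes, spikes on the cloud."""
    return (baseline_servers * DEDICATED_MONTHLY
            + burst_servers * burst_hours * CLOUD_HOURLY)

# Ten servers all month (~730 hours) plus twenty extra for a 40-hour spike:
print(all_cloud_cost(10, 730) + all_cloud_cost(20, 40))  # all cloud
print(hybrid_cost(10, 20, 40))                           # hybrid
```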

    Also, many of the services provided are proprietary/non-standard to an extent, and there’s some element of vendor lock-in. This, however, is rarely a concern when starting out.
    (personally, I have a suspicion that the absurd bandwidth costs that cloud providers charge, may be to encourage customers to keep all their services on the same platform, rather than simply pick and match the most cost effective solution)

    Because everything is billed, mistakes can be costly (for example, a rogue script which uses too many resources), whereas on traditional hosting your server would just slow down and it’d be obvious that there was a problem. Cloud providers do often let you set up warnings if you’d be charged too much, but you have to notice and react to the warning, and remember to set it up in the first place.
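
    A spend alarm of this kind can be sketched as a prorated-budget check (the function and numbers below are illustrative, not any provider’s actual API):

```python
# Minimal spend-alarm sketch: flag when the spend so far in the
# billing period exceeds the budget prorated to the current day.

def over_budget(spend_so_far, monthly_budget, day_of_month, days_in_month=30):
    """True if spending is running ahead of the prorated budget."""
    prorated = monthly_budget * day_of_month / days_in_month
    return spend_so_far > prorated

print(over_budget(120.0, 300.0, day_of_month=10))  # 120 > 100 → True
print(over_budget(80.0, 300.0, day_of_month=10))   # 80 < 100 → False
```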

    Complexity is also a factor. AWS, for example, offers many services (often with uninformative names) and can require some knowledge to understand.
    This may seem a little contradictory, as cloud services are supposed to remove the complexity of setting up services. I suppose they do to some extent, but they add their own set of complexities on top.

    As for your example of photos, I don’t really think this is a property of the cloud. If you’re given limited storage space in the cloud, then you’d still have to manage what gets stored there. On the other hand, if you have a big hard drive at home, you’d be less inclined to delete stuff.

    1. As for your examples of photos, I don’t really think this is a
      property of cloud. If you’re given limited storage space in the cloud,
      then you’d still have to manage what gets stored there.

      I would not argue that it is “a property of the cloud”. I was just making an analogy.

  3. The main advantage of cloud computing is not really about cost or performance. It’s about time and humans.

    Having anything right now is always better than having the same result tomorrow. Spinning up a couple of instances to do something new is not a problem; if it costs more than the developer hours that optimization would require, optimize then, never before. Doing it before implies you are not doing some other task that could bring more value.

    Needing no humans on your side is better than having to find, hire, and keep a team to manage stuff internally. Cloud providers are essentially “all those devops people who cost a bunch and whom we can’t find anyway” as a service. Hiring in tech is getting hard, really hard.

    As deliveries need to go faster and faster and we have fewer and fewer people to do them, it is inevitable that something that is mainly just a cost will get outsourced if it means we can go faster with fewer people. No one cares whether the shops they go to own or rent their space; it’s pretty much the same in tech.

  4. It is absolutely true that the cloud enables sloppiness. See, e.g., Frank McSherry’s paper “Scalability! But at what COST?”, which compares “big data” distributed systems to the performance of a well-engineered program running on a single thread.

    At the same time, if one uses the cloud thoughtfully, it can be freeing. If a service is melting down due to load, you could allocate a bunch of developers to optimizing it right now (and maybe to assessing whether the increased scale merits a wholly new architecture). But this introduces uncertainty and schedule risk into whatever project they were working on. It also hurts morale: no one wants to be interrupted to fight fires. And it risks poor decisions made in the heat of the moment.

    Instead, the cloud lets you buy time: simply pay for a larger server (or more servers) and schedule the optimization work for the next sprint, in two weeks’ time. Thus, even though you are nominally paying for compute flexibility, you are actually buying flexibility at the developer level, and developers are the most costly asset in many companies.

    The last place I worked was an 80-person software firm. Everyone in engineering had permission to launch new AWS resources. If it was for a transient thing, no sign-off was needed. If it was for an ongoing project, you gave your manager a heads-up and made sure it aligned with the overall architecture. No big deal.

    Everyone also had access to a dashboard that showed the company’s daily revenue vs its (not insubstantial) daily AWS expenses. Perhaps that was key, too.

    1. We always come back to some kind of “debt”. Independently of the on-demand/autoscale versus on-premises question, there is an old debate between carefully engineered and quick and dirty.

      Which is best? IMHO, it depends… What matters most is to make a thoughtful decision, with no belief in a magic solution.
