
Cloud Computing: The Use Cases of Tomorrow


Many large enterprise accounts have now moved into the public cloud: customer relationship management (CRM), collaboration, and communication solutions are blatant illustrations of the democratization of SaaS. The use of short-lived, rapidly provisioned VMs, with or without a software layer, for development or R&D needs is now widespread.

So far, the main motivations for cloud adoption have been cost (both the savings and the pay-per-use model) and flexibility (speed of provisioning, on-demand elasticity). The next step for these companies will undoubtedly be to refocus on their core business and deliver more innovation. Here are some examples of tomorrow’s use cases.

Cloud computing as an incubator

One of the emerging uses is the public cloud as an incubator. Take Big Data, for example: the cloud is an excellent opportunity for companies that want to test and experiment with these new technologies during a proof of concept (POC), even if it means switching back to on-premises hosting for the production phase.

Previously, implementing a POC required investing in infrastructure, contracting test licenses, and factoring in implementation lead times. Cloud providers now offer ready-to-use services, available immediately and billed according to usage.

The cloud business model lends itself perfectly to this use case and will foster technological innovation tomorrow.

Cloud as a promise of better quality of service

The availability of critical applications remains a major challenge for CIOs today. The underlying infrastructure must be able to cope with contingencies such as incidents or load peaks. This is one of the promises of the cloud, and it will materialize tomorrow through several levers.

Take the example of a permanent disaster recovery plan (PRA, from the French Plan de Reprise d’Activité). It involves replicating the application several times and continuously simulating, via a load balancer, the loss of one of the replicas. This mechanism guarantees better availability of the service.
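As a rough illustration, here is a minimal Python sketch of that mechanism: several replicas behind a load balancer, a simulated loss of one of them, and routing that only considers the replicas that remain healthy. The endpoints are hypothetical.

```python
import random

# Hypothetical replica endpoints; in a real PRA setup each would run in a
# separate availability zone or data center. True = currently healthy.
REPLICAS = {
    "https://app-replica-1.example.com": True,
    "https://app-replica-2.example.com": True,
    "https://app-replica-3.example.com": True,
}

def route_request() -> str:
    """Return a healthy replica, skipping any that failed the health check."""
    healthy = [url for url, up in REPLICAS.items() if up]
    if not healthy:
        raise RuntimeError("no healthy replica available")
    return random.choice(healthy)

# Continuously simulating the loss of one replica, as the PRA described
# above does, verifies that traffic keeps flowing through the survivors.
lost = random.choice(list(REPLICAS))
REPLICAS[lost] = False  # simulate the outage
print(f"simulated loss of {lost}; routing to {route_request()}")
```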

Another example: for websites with a worldwide audience, it will be possible to replicate the underlying infrastructure across the provider’s various data centers and route each user to the closest one according to geographical location. This time, it is the response time that is optimized.
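A minimal sketch of that geographical routing, with hypothetical regions and coordinates: the “load balancer” simply picks the replica closest to the user.

```python
import math

# Hypothetical provider regions with approximate coordinates (lat, lon).
REGIONS = {
    "eu-west": (48.86, 2.35),    # Paris
    "us-east": (38.90, -77.04),  # Washington, D.C.
    "ap-east": (35.68, 139.69),  # Tokyo
}

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_region(user_location):
    """Route the user to the geographically closest replica."""
    return min(REGIONS, key=lambda r: distance_km(REGIONS[r], user_location))

print(nearest_region((45.76, 4.84)))  # a user in Lyon -> "eu-west"
```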

As a last example, companies are starting to build applications whose hosting infrastructure can itself grow or shrink according to the actual number of connections, or even shut down during periods of inactivity. This scale-up / scale-down mechanism both reduces costs (in a pay-as-you-go model) and ensures optimal availability during bursts of activity. Technically, this translates into resizing the servers themselves, or varying their number, according to the load they carry; above all, it requires applications specifically designed for such elasticity.
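The decision logic behind such a mechanism can be sketched in a few lines. The thresholds and bounds below are illustrative assumptions, not a provider’s actual autoscaling API.

```python
# A minimal sketch of a scale-up / scale-down decision step.
TARGET_UTILIZATION = 0.60          # desired average CPU utilization per server
MIN_SERVERS, MAX_SERVERS = 0, 20   # 0 allows a full shutdown when idle

def desired_capacity(current_servers: int, avg_utilization: float) -> int:
    """Compute how many servers the current load actually needs."""
    if avg_utilization == 0:
        return MIN_SERVERS  # no activity: shut everything down
    needed = current_servers * avg_utilization / TARGET_UTILIZATION
    return max(MIN_SERVERS, min(MAX_SERVERS, round(needed)))

# With 4 servers at 90% load, scale up to 6; at 15% load, scale down to 1.
print(desired_capacity(4, 0.90))  # -> 6
print(desired_capacity(4, 0.15))  # -> 1
```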

Accelerating the application life cycle

On the PaaS side, many developments are still to come, aimed primarily at reducing applications’ time to market by simplifying their deployment.

The container approach is a first lever toward this objective. The market is moving toward platforms that can execute code regardless of the underlying application server. This is made possible by containers, which bundle the code with the application and application-server binaries and can run in any runtime environment. Solutions such as Docker are emerging to facilitate the creation and management of these containers.
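For illustration, Docker’s Python SDK (the `docker` package) can build and run such a container programmatically. The image name and the ./myapp directory below are hypothetical, and the snippet assumes a local Docker daemon is running.

```python
import docker  # pip install docker

client = docker.from_env()  # connects to the local Docker daemon

# Build an image from a hypothetical ./myapp directory whose Dockerfile
# bundles the code together with its application-server binaries.
image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# The same image now runs unchanged on any host with a container runtime.
container = client.containers.run(
    "myapp:1.0", detach=True, ports={"8080/tcp": 8080}
)
print(container.status)
```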

Companies will thus no longer have to choose between PaaS offers built around one application server or another: with containers, the code runs whatever the environment. The container approach is a first lever for easing the deployment of applications on PaaS offers.

The second lever being democratized is the use of a configuration manager that automates the deployment of software components. This industrialization can be achieved with tools such as Chef, Puppet, or CFEngine, and brings many benefits. These rapidly expanding tools will accelerate the application delivery cycle, making it easier to keep pace with the increasingly frequent changes requested by the business lines.
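These tools share a declarative, idempotent model: you describe the desired state of a machine, and each run only changes what has drifted. Here is a minimal Python sketch of that idea (not the actual syntax of Chef or Puppet; the path and content are illustrative):

```python
import os

# Desired state: which files must exist with which content. Real tools
# express this declaratively in their own DSLs.
DESIRED_FILES = {
    "/tmp/app/config.ini": "[server]\nport = 8080\n",
}

def converge():
    """Idempotent run: only change what differs from the desired state."""
    for path, content in DESIRED_FILES.items():
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current != content:
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "w") as f:
                f.write(content)
            print(f"converged {path}")
        else:
            print(f"{path} already in desired state")

converge()  # running twice is safe: the second run changes nothing
```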

However, this cannot happen without adapting the application lifecycle, aligning software factories with the PaaS offers that host them, and adapting the existing organization and processes. This is precisely the purpose of the “DevOps” approach, which is characterized by close cooperation between the prime contractor, the development team, and the operations team. For example, there is no point in requesting and developing a new feature every week if a release of the application is only scheduled every six months.

The objective of this approach is therefore to take functional requirements into account upstream, from the launch of an application project, in order to align the frequency at which new features become available with the frequency of production releases.

These use cases will be met tomorrow, but the examples presented here do not all have the same degree of maturity: POCs are already being carried out on cloud platforms, while the scale-up / scale-down mechanism remains, today, a promise that is difficult to achieve. As for the evolution of the application life cycle and the industrialization of deployment, this will take longer, because it will also require a transformation of processes, skills, and teams within the IT department.
