While the infrastructure of corporate data centers has evolved little over the past 10 or 20 years, the same cannot be said of how it is used. Cloud services have shaken up expectations: easy provisioning and management of resources and pay-as-you-go billing are now the order of the day. With the right tools, data centers should become more agile and fluid, as companies seek to balance the use of their internal infrastructure and cloud resources for optimization purposes.
More flexible than their predecessors, the new administration tools make cloud resources and on-premises computing partly interchangeable. “On-premises computing has evolved as fast as cloud services,” said Tony Lock, analyst at Freeform Dynamics. Formerly rather static, on-premises infrastructure was dedicated to specialized applications.
“It has been transformed in 10 years: it is now much easier to extend most IT platforms. We no longer need to shut everything down for a weekend to physically install new equipment. You simply bring the new machines into the data center, plug them in, and they work.”
Another change observed in the data center is virtualization. Users can easily move applications from one physical server to another, which greatly enhances portability, all the more so with the spread of software-defined networking (SDN) over the past five to ten years, says Tony Lock.
The rapid evolution of automation tools that manage on-premises and cloud resources is giving concrete expression to the idea of bringing the two types of resources together in a single pool.
In June, HashiCorp announced version 1.0 of Terraform, signaling that its infrastructure management platform had reached the maturity and stability required for production use, even though many customers had already deployed it without waiting for that milestone.
With this Infrastructure as Code (IaC) tool, users build their infrastructure from declarative configuration files that describe its target state. These files act as blueprints from which Terraform can provision the infrastructure for a given application or service reliably and repeatably.
Complex infrastructure changes can also be automated with minimal human interaction, simply by updating the configuration files. A key strength of Terraform is that, in addition to internal infrastructure, it can manage resources spread across multiple cloud providers, including AWS, Azure, and Google Cloud Platform.
Since Terraform configurations are not tied to a particular cloud, they define the same application environment everywhere. You can move or copy an application very easily.
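As an illustration, a minimal Terraform configuration declaring a single virtual machine might look like the sketch below. The resource names, region and AMI ID are placeholders chosen for this example, not values from the article:

```hcl
# Declare the provider; credentials are read from the environment.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

# Declarative target state: one small virtual machine.
# Running `terraform apply` repeatedly converges the real
# infrastructure toward this description (idempotent provisioning).
resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-app-server"
  }
}
```

In day-to-day use, `terraform plan` previews the changes implied by an edited configuration, and `terraform apply` carries them out, which is what makes changes repeatable and reviewable rather than manual.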
“The idea of a programmable infrastructure is not without appeal,” says Tony Lock. “It is evolving, but it still has a long way to go to reach maturity. It is part of the broader trend toward automation: IT is becoming more and more automated. Freed from repetitive, routine and pointless tasks now handled well by software, IT teams can focus on areas with greater added value for the business.”
Cloud-native storage
Storage is becoming more and more flexible, at least for software-defined storage (SDS) systems, which are designed to run on a cluster of servers rather than on proprietary hardware. In the past, applications were often tied to fixed storage networks.
SDS storage is distinguished by its ease of expansion. It is usually sufficient to add nodes to the storage cluster.
Because this type of system is programmable, it is easy to provision and manage through APIs or using infrastructure tools like Terraform.
The sophistication and flexibility of SDS storage are evident in WekaIO's Limitless Data Platform, deployed in many supercomputing projects. The WekaIO platform presents a unified namespace to applications and can be deployed on dedicated storage servers or in the cloud.
If necessary, organizations can then transfer data from their on-premises cluster to the public cloud and provision a Weka cluster there. Any file-based application can run in the cloud without further modification, according to WekaIO.
One of the flagship features of the WekaIO system is its ability to capture a snapshot of the entire environment, including all data and metadata associated with the file system, which can then be transferred to an object store such as Amazon S3.
The company can thus create and use a storage system for a particular project, then capture a snapshot that it will store in the cloud at the end of the project to free up the hosting infrastructure for other purposes. In case of resuming the project, it is sufficient to recreate the file system identically from the snapshot, explains WekaIO.
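The snapshot-and-restore workflow described above can be modeled in a few lines of Python. This is a toy in-memory sketch of the concept, not WekaIO's or Amazon S3's actual API; the function names, the fake file system and the `object_store` dictionary are all illustrative assumptions:

```python
import json

# Toy stand-in for an object store (e.g. an S3 bucket), keyed by
# snapshot name. A real implementation would upload serialized data.
object_store = {}

def take_snapshot(fs, name):
    """Capture data and metadata and push the snapshot to the object store."""
    object_store[name] = json.dumps(fs)  # serialized, self-contained copy

def restore_snapshot(name):
    """Recreate the file system identically from a stored snapshot."""
    return json.loads(object_store[name])

# A file system as a dict of file contents plus metadata.
fs = {"files": {"/project/results.csv": "a,b\n1,2"},
      "metadata": {"owner": "team-hpc", "acl": "rw"}}

take_snapshot(fs, "project-final")
fs = None  # hosting infrastructure freed for other purposes

# On resuming the project, rebuild the file system from the snapshot.
restored = restore_snapshot("project-final")
```

The key property the article attributes to WekaIO is captured here: because the snapshot contains both data and metadata, the restored environment is identical to the original, wherever it is rehydrated.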
But this scenario has a major downside: the potential cost, not of storing data in the cloud, but of accessing it. Large cloud providers like AWS charge egress fees when data is retrieved.
According to T. Lock, “The extremely low headline pricing of some cloud platforms for storage alone is often offset by fairly high egress fees. Stored data doesn't cost you much, but extracting it for review and use can quickly become very expensive. Some plans include an active archive with no egress charge, but at a higher price.”
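The imbalance Lock describes is easy to see with a little arithmetic. The per-gigabyte prices below are assumptions chosen only to illustrate the shape of the problem, not actual AWS rates:

```python
# Illustrative comparison of storage cost vs. egress cost.
# These prices are assumptions for the sake of arithmetic:
# $0.023 per GB-month of storage, $0.09 per GB of egress.
STORAGE_PRICE_PER_GB_MONTH = 0.023
EGRESS_PRICE_PER_GB = 0.09

def monthly_storage_cost(gb):
    """Cost of keeping `gb` gigabytes at rest for one month."""
    return gb * STORAGE_PRICE_PER_GB_MONTH

def egress_cost(gb):
    """Cost of retrieving `gb` gigabytes back out of the cloud once."""
    return gb * EGRESS_PRICE_PER_GB

# Keeping 10 TB for a month vs. pulling the same 10 TB back out once:
stored = monthly_storage_cost(10_000)
retrieved = egress_cost(10_000)
```

At these assumed rates, a single full retrieval of the data set costs roughly four times as much as a month of simply storing it, which is exactly why an archived snapshot is cheap to keep but expensive to use.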
The cloud storage provider Wasabi Technologies has broken with this convention, offering different pricing models, notably a flat monthly rate per terabyte.
As IT infrastructure continues to become ever more fluid, flexible and adaptable, companies may soon no longer need to expand the capacity of their data centers. With the right management and automation tools, they can manage their infrastructure dynamically, reallocating their on-premises IT to other purposes and using cloud services to extend their resources.
To reach this stage, one capability still needs to improve: pinpointing the source of a problem when an application slows down or fails, a task that can be difficult in a complex distributed system. Organizations running microservices architectures will be familiar with the challenge. New techniques based on machine learning could prove useful here, believes T. Lock.
He continues: “Monitoring has improved a lot, and the question now is how to surface what matters in all that telemetry. This is where machine learning starts to pay off. Root cause analysis is one of the great challenges of IT, and machine learning greatly simplifies it.”
Another difficulty concerns data governance: how do you ensure that data governance and security policies keep up with repeated workload moves and remain in force? “If you can move everything all the time, how do you maintain good data governance and only run the right things in the right places with the right security?” asks T. Lock.
Tools exist for this, including the open source project Apache Atlas, presented as a single solution for all stages of data governance and metadata management. Atlas was initially designed for Hadoop data ecosystems but integrates with other environments. For enterprises, the dream of being able to mix on-premises and cloud resources, and to move workloads and data between them without restrictions, finally seems to be becoming a reality.