
Cloud Computing or Traditional Infrastructure? (Part 2)

A few articles ago, which you can find here, we addressed this topic, but two fundamental questions remained open for our Head of Infrastructure, Ettore Simone, who continues to guide us through the world of OpenStack.
Questions:
The E4 Fluctus is based on the stable version produced by the community. But how ethical is it for a company like E4 to try to make money on a product created for free by a community?
And… why choose E4, such a small company? And above all, if support is needed, who do you call?
Ettore Simone: OpenStack is a very large framework, made up of many specialized modules, each dedicated to a particular part of the data center. Its development model – practically one of a kind! – requires that the interaction APIs between these modules be specified even before the components themselves are implemented. That is why the composition matrix of these modules and their interactions can be quite complex and, at the same time, incredibly flexible.
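To make this modular composition concrete, here is a minimal sketch using the official openstacksdk Python library; the cloud name "my-cloud" is a placeholder for an entry in clouds.yaml, not something from the interview. Each module is reached through its own well-specified API, yet all of them compose behind a single connection:

```python
# Minimal sketch with openstacksdk (pip install openstacksdk).
# "my-cloud" is a placeholder for a cloud entry in clouds.yaml.
import openstack

conn = openstack.connect(cloud="my-cloud")

# Each specialized module is addressed through its own stable API,
# but all of them compose behind one connection object:
for server in conn.compute.servers():        # Nova: compute
    print("server:", server.name)
for network in conn.network.networks():      # Neutron: networking
    print("network:", network.name)
for volume in conn.block_storage.volumes():  # Cinder: block storage
    print("volume:", volume.name)
```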
Since these components are then inserted into infrastructures with specific needs, E4 comes into play with the skills and experience derived from years of implementing OpenStack solutions in complex industrial infrastructures. We therefore know very well how to make this software work most effectively in production and mission-critical environments.
The industrialization of Fluctus is very much oriented towards being a ready-to-use solution from day one, already tested in controlled environments. Our OpenStack is offered in a tested and guaranteed configuration.
Regarding ethics, E4 is working to make OpenStack a product that meets the needs of organizations and industries, and it contributes its improvements back to the community. Although the BSD and Apache licenses under which OpenStack is distributed do not require releasing modified code, as the GPL does, E4 is committed to publishing its improvements and bug fixes as contributions to the Open Infrastructure Foundation (the successor to the OpenStack Foundation).
The concept of Open Source is very similar to the free communication and interchange of scientific discoveries. All Open Source software aims to solve a specific problem and to ensure that the whole community benefits from it, guaranteeing a virtuous circle in which continuous improvements to the solutions are released and, above all, remain accessible to all.
At this point, the question is: with these improvements released to the community, is there not a risk of "doing charity" and benefiting competitors too?
Ettore Simone: This risk exists. The goal of Open Source is to improve the entire community. It is not uncommon to find similarities in other sectors as well. Take construction: throughout history, many building techniques have spread among all builders, enabling an increasingly comfortable and healthy way of living. In specific cases the difference is made by the builder himself, with his personal interpretation of the techniques and his particular creative flair.
Clear. Thank you. What about the hardware?
Ettore Simone: These solutions can run on commodity hardware such as x86 platforms, and they can also be used on Power or ARM platforms, for example. E4 offers optimized versions of its solutions in which the software is combined with the most appropriate, tested, and aligned hardware to provide the services the customer requests.
In E4’s solutions, the hardware is also optimized for the specific customer’s purpose: how much memory, how much storage, which processor, which network, how many nodes… E4 delivers these solutions in a very short time, rather than making the customer spend months of analysis to work out which hardware to adopt or the most appropriate architectural design. We can thus effectively satisfy the most disparate needs: from small hyper-converged systems oriented to the Edge or to small and medium businesses, to distributed systems scaled across hundreds, or even thousands, of nodes.
Well. Moving on to other topics: there are certainly advantages in standardizing and centralizing the orchestration, but at the security level, isn’t having this centralization a risk? Isn’t there too much democratization?
Ettore Simone: The work done in a Private Cloud environment or in a traditional infrastructure is quite similar. The difference is that, for example, the technician assigned to networking gains simpler management of his equipment and an overall view of the entire network layer, regardless of what it is composed of. In a more traditional infrastructure, by contrast, network technicians are more specialized and deal with different parts of the technology, with the risk of communication gaps.
The Cloud world offers a common thread that allows you to link the various components and expose a structured vision of the overall environment, allowing fewer people to manage more complex infrastructures.
The danger of damage exists, but only if the infrastructure is not correctly configured. The tools for correct and safe management of a Private Cloud infrastructure are far more numerous than those of a traditional infrastructure (where such management is usually entrusted to people or to written documents).
In Infrastructure-as-a-Service, on the other hand, it is possible to have finer-grained permissions, to "declare" the shape of the infrastructure with dedicated languages, and finally to employ DevOps and Infrastructure as Code methodologies to guarantee what is still utopia for many systems engineers today: the reproducibility of infrastructures and the supervision of their creation process.
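As a taste of what "declaring" an infrastructure can look like (OpenStack's own declarative language is the HOT template format of the Heat module), here is a hedged Infrastructure-as-Code sketch using openstacksdk; the resource names and the "my-cloud" entry are illustrative placeholders, not E4's actual tooling:

```python
# A hedged Infrastructure-as-Code sketch: the desired shape of the
# environment is expressed in code, so re-running the script reproduces
# the same infrastructure. All names below are placeholders.
import openstack

conn = openstack.connect(cloud="my-cloud")

# Find-or-create keeps the script idempotent: running it twice
# converges on the same state instead of duplicating resources.
net = conn.network.find_network("demo-net")
if net is None:
    net = conn.network.create_network(name="demo-net")

subnet = conn.network.find_subnet("demo-subnet")
if subnet is None:
    subnet = conn.network.create_subnet(
        name="demo-subnet",
        network_id=net.id,
        ip_version=4,
        cidr="192.168.10.0/24",
    )
print("network:", net.id, "subnet:", subnet.id)
```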
E4, with Fluctus, enormously simplifies the steps to be climbed to enter this complex world. The initial configurations, the infrastructural design, and the interactions between the various components are already in place. Our experts continuously study customers' needs and help them find the solution that best fits. Finally, the E4 team trains the customer and transfers the knowledge needed to be autonomous.
Such a vast infrastructure is certainly mission-critical. How can I protect myself from operating problems? Can I have a backup? How do I restore its functioning?
Ettore Simone: This question opens up a very vast topic. I will try to mention a couple of approaches to the problem:
- Resilience
- Reproducibility
In terms of resilience, that is, the ability of a system to "absorb" shocks and survive them, Private Cloud infrastructures offer a very effective and modular arsenal that allows us to approach the problem from various angles.
We can, for example, break the entire infrastructure down into smaller subsystems. These are autonomous in terms of storage and power supply, but still connected to the same control and networking back-plane, so that they remain part of a single system. This is what many cloud providers call a region.
The advantage of this decomposition is that it allows HA systems to be distributed with their nodes residing in independent regions, enabling fail-over even in the event of a total outage in a particular area. The result is what can be called business continuity.
With the same model we can easily reproduce in the second region the workloads present in the first, both in preventive mode (systems in stand-by with a low RTO, i.e. able to take over service shortly after the "red button" is pressed) and in reactive mode (for example with disaster-recovery procedures, with an RTO higher than the previous one), where the workloads are reproduced with some automation.
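As an illustration of the two-region model, here is a minimal sketch, assuming two regions named "RegionA" and "RegionB" are defined for the same cloud in clouds.yaml (hypothetical names): both are reached through the same identity back-plane, and a naive health probe decides where to operate:

```python
# Minimal two-region sketch; "RegionA"/"RegionB" and "my-cloud"
# are hypothetical names used only for illustration.
import openstack

primary = openstack.connect(cloud="my-cloud", region_name="RegionA")
standby = openstack.connect(cloud="my-cloud", region_name="RegionB")

def region_is_up(conn):
    """Naive probe: is the region's compute API answering?"""
    try:
        list(conn.compute.servers(limit=1))
        return True
    except Exception:
        return False

# Fail over to the stand-by region if the primary is down.
active = primary if region_is_up(primary) else standby
print("Active region:", active.config.region_name)
```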
The geographical distribution of these regions is also possible, but I would not like to dwell too much on the subject.
The reproducibility of the infrastructure is guaranteed by the same mechanisms used to implement it.
The complete back-up of the infrastructure itself (to be clear, we are not talking about what it contains: VMs, virtual appliances, persistent storage volumes, and so on) can paradoxically fit on a single floppy disk. A few kilobytes are enough to completely rebuild the environment in the event of a disaster. This is a typical advantage of software-defined infrastructures.
The back-up of the content, i.e. the virtual machines, the storage with persistent data and so on, can be obtained through multiple mechanisms, even the more traditional ones.
The sum of the two levels of back-up can guarantee whatever RTO/RPO level is needed. More or less stringent Recovery Point or Recovery Time Objectives will naturally require correspondingly faster storage systems to achieve them: for example, hourly volume snapshots bound the RPO of the content to one hour, while the few-kilobyte infrastructure back-up keeps the RPO of the environment definition close to zero.
The interesting thing from this point of view is precisely the possible simplification of these mechanisms offered by the "cloud native" structure of Infrastructure-as-a-Service solutions. How about backing up Region A on the Object Store of Region B, 15 km away from the first? Why not take advantage of the "synchronous" replication features of OpenStack (Cinder) persistent volumes towards remote regions, so that the stand-by is already in place and service restart times are minimized?
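For instance, a content-level back-up of a single Cinder volume can be driven through the same API. A minimal sketch follows; the volume name is a placeholder, and whether the backup lands in a local or a remote-region object store depends on how the cinder-backup service is configured:

```python
# Hedged sketch: back up a Cinder volume via openstacksdk.
# "app-data" is a placeholder; the target object store (local or
# remote-region) is decided by the cinder-backup configuration.
import openstack

conn = openstack.connect(cloud="my-cloud", region_name="RegionA")

volume = conn.block_storage.find_volume("app-data")
if volume is not None:
    backup = conn.block_storage.create_backup(
        volume_id=volume.id,
        name="app-data-backup",
        incremental=False,  # full backup; incremental needs a prior full
    )
    print("Backup started:", backup.id)
```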
Our interview ends here. To find out more, visit the page dedicated to our Open Source Cloud Platform FLUCTUS, or contact us!