Amazon Web Services held its annual AWS re:Invent conference this week in Las Vegas, during which a number of new announcements were made.
After Silicon reported on the conference earlier this week, and how it allowed for face-to-face connections between AWS customers and partners, we now round up some of the more notable news and innovations the conference has revealed.
This covers AWS developments in the internet of things (IoT), machine learning, databases, chips, 5G and the modernisation of mainframe environments, among other areas.
AWS currently offers more than 200 complete services, and that number has grown constantly since the company's inception 15 years ago, as it continues to innovate.
These innovations over the years have helped dispel reservations and doubts about the migration of business resources to the cloud.
And it should be remembered that these services are accessible to any company, regardless of its size or the sector where it operates.
A few years ago it would have been unthinkable for an SME to access the software or hardware used by a large corporation for obvious budget reasons.
But today, thanks to the capabilities of the cloud and the pay-per-use model, it is a reality to have access to the most advanced databases, artificial intelligence and machine learning models, coupled with the most powerful processing.
And for under-pressure IT management teams, all of this can be realised without waiting, usually within a couple of clicks.
These are the reasons why thousands of startups have delivered on their business ideas, and why many SMEs can be competitive against industry giants.
This is the sweet spot for cloud services providers such as AWS and the functionality they can offer: reducing technology costs while scaling growth, as investments in technology can be adapted to organisational requirements at all times.
However, it would be remiss not to focus on the main developments revealed during this year's edition of AWS re:Invent in Las Vegas.
Simply put, a digital twin is a functional replica of a system that exists in the physical world.
AWS IoT TwinMaker has been announced to make the creation and development of these digital twins as easy as possible, so they can mirror the data sources of their physical counterparts, such as sensors, cameras or business applications.
It is not just about turning a switch on or off, or displaying any type of alert in a unified way.
TwinMaker allows the collection of all the information sent from multiple sources to generate graphic models that deliver comprehensive control of what is happening in the physical world, which in turn facilitates real-time decision-making and the automation of all kinds of tasks that improve operational efficiency and reduce downtime.
And this is vital in any industrial activity.
AWS IoT TwinMaker offers a graphical interface that allows any 3D model to be imported during the design of the digital twin, so that operators can explore a virtual world in three dimensions to monitor the activity of any sensor.
Applying artificial intelligence to that huge amount of data also adds a predictive component, making it possible to anticipate the failure of any mechanism so that parts can be replaced in advance, delivering more efficient maintenance of facilities.
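As a rough illustration (not AWS's official schema), a factory sensor might publish JSON telemetry like the following, which a digital-twin service such as TwinMaker could then ingest alongside camera and application data. The topic name, sensor ID and field names here are hypothetical:

```python
import json
import time

def build_telemetry(sensor_id: str, temperature_c: float, vibration_mm_s: float) -> str:
    """Serialise one sensor reading as JSON, the kind of message a digital
    twin would consume from an IoT data stream. All field names are
    illustrative, not a fixed TwinMaker schema."""
    reading = {
        "sensorId": sensor_id,
        "timestamp": int(time.time()),
        "temperatureC": temperature_c,
        "vibrationMmS": vibration_mm_s,
    }
    return json.dumps(reading)

payload = build_telemetry("press-line-7", 68.4, 2.1)
# With AWS credentials configured, this payload could be published via
# boto3's IoT data-plane client, e.g.:
#   boto3.client("iot-data").publish(topic="factory/press-line-7/telemetry",
#                                    qos=1, payload=payload)
print(payload)
```

The point of the sketch is simply that once readings flow in as structured messages, the twin can aggregate them, visualise them against the 3D model, and feed them to predictive models.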
AWS Private 5G meanwhile has been announced to add a wireless connectivity component that many industrial companies need in their facilities.
According to AWS, this service, which includes the use of SIM cards provided by AWS itself, allows “the deployment, scaling and management of private networks in days instead of months.”
It is a managed service where AWS also provides the radio antennas, the servers and the RAN infrastructure, in such a way that all the management of these communications is centralised in a single control panel.
Obviously, it is a service that is complemented by others such as the previously mentioned AWS IoT TwinMaker, so as to provide a unified experience.
This was touched upon in the previous article, but it is a well-documented fact that, for decades, the IT industry has considered the mainframe a dead or declining platform.
This belief has never reflected the reality on the ground, with financial institutions remaining a notable holdout.
At least until now, according to AWS.
The mainframe has enjoyed notable longevity, as the IBM platform has remained the most robust and always available platform for mission-critical environments, such as those running in financial and banking systems.
In addition, the mainframe's legacy endures in the millions of lines of code, written in programming languages such as COBOL, that are still being processed today.
Therefore changing all of this can be a major headache for companies.
During the decades since the mainframe arrived, we have seen various attempts at migration and modernisation, some of which have certainly been carried out successfully.
However, until now a system was lacking that could facilitate all of the traditional mainframe processes without a hugely complex migration process.
That is what AWS Mainframe Modernisation is seeking to resolve, by providing a service that makes it faster and easier for customers to migrate their mainframe-based applications and workloads to the cloud, so they can benefit from greater agility, elasticity and cost reduction.
And AWS has assured critics that it is possible to convert mainframe applications and workloads to Java-based cloud services with minimal changes to the source code.
This is because a specific runtime environment has been built on this service to provide all the computing, memory, and storage to run those applications.
Additionally, AWS Mainframe Modernisation is designed to automatically handle capacity provisioning, security, load balancing, scaling and even application health monitoring.
Another news development was the confirmation that AWS has for some time been developing its own chip designs for its EC2 servers.
This time around, Graviton3 chips (based on the Arm architecture) have just been announced as the company's most powerful chips for intensive general-purpose workloads, available in Amazon EC2 C7g compute instances.
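In practice, adopting the new Graviton3-backed capacity is largely a matter of requesting the `c7g` instance family when launching EC2 instances. A hedged sketch follows: the AMI ID is a placeholder, and the commented line shows where boto3's real `run_instances` API would be invoked:

```python
def c7g_launch_request(ami_id: str, count: int = 1) -> dict:
    """Build the parameter set for launching Graviton3-based EC2 C7g
    instances. The AMI ID passed in is a placeholder and must in
    reality be an Arm64 (aarch64) image."""
    return {
        "ImageId": ami_id,            # must be an arm64 AMI
        "InstanceType": "c7g.large",  # Graviton3-backed compute family
        "MinCount": count,
        "MaxCount": count,
    }

params = c7g_launch_request("ami-0123456789abcdef0")
# With AWS credentials configured, this would become:
#   boto3.client("ec2").run_instances(**params)
print(params["InstanceType"])
```

Because Graviton is Arm-based, workloads compiled for x86 need an Arm build; interpreted and JVM workloads generally move over with little change.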
A few other, more specialised models have also been released, namely:
Another AWS development saw the addition of new features to Amazon SageMaker (developed in conjunction with parent company Amazon).
This is a specific machine-learning platform that can be applied to virtually any other suite available on AWS cloud services.
With the exponential growth of data managed by companies, machine learning has proliferated as a way to automate tasks and make processes more efficient and intelligent without the need for human intervention.
The difficulty lies in designing the training and learning models, something that has traditionally been in the hands of a few specialists with deep mathematical and scientific knowledge.
The new features in Amazon SageMaker are designed to make these technologies accessible to a greater number of professionals, without requiring that in-depth knowledge.
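To give a sense of what "managed" training looks like, the sketch below assembles the minimal parameter set for SageMaker's `create_training_job` API, which offloads provisioning and teardown to AWS. This is a hedged illustration: the role ARN, container image URI and S3 bucket are all placeholders, not real resources:

```python
def training_job_request(job_name: str, role_arn: str, image_uri: str) -> dict:
    """Minimal parameter sketch for SageMaker's create_training_job API.
    The ARN, image URI and bucket name are placeholders."""
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,   # container holding the algorithm
            "TrainingInputMode": "File",
        },
        "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/output/"},
        "ResourceConfig": {
            "InstanceType": "ml.m5.large",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

req = training_job_request(
    "demo-job",
    "arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/example-image:latest",
)
# With AWS credentials configured:
#   boto3.client("sagemaker").create_training_job(**req)
print(req["TrainingJobName"])
```

Even this "minimal" request shows why AWS is layering friendlier tooling on top: the newer SageMaker features aim to hide exactly this kind of configuration from non-specialists.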
AWS has featured several announcements regarding machine learning:
Finally, given the need to manage the large volumes of data generated by organisations, a single database architecture is no longer viable for all of that data, due to its diversity and nature.
Instead, it is necessary to achieve higher performance and near-real-time responses for any type of database query.
High availability, reliability and versatility are no longer optional for companies, but essential to providing the best experience for both employees and customers.
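To make the purpose-built-database point concrete: a workload dominated by single-key lookups fits a key-value store such as DynamoDB, where a read is expressed against a primary key rather than a general SQL query. A hedged sketch, in which the table and attribute names are hypothetical:

```python
def key_value_lookup_request(table: str, order_id: str) -> dict:
    """Parameters for a low-latency key lookup via DynamoDB's get_item
    API. Table and attribute names are hypothetical examples."""
    return {
        "TableName": table,
        "Key": {"orderId": {"S": order_id}},  # "S" marks a string attribute
        "ConsistentRead": False,  # eventually consistent reads are cheaper
    }

req = key_value_lookup_request("Orders", "ord-42")
# With AWS credentials configured:
#   boto3.client("dynamodb").get_item(**req)
print(req["Key"]["orderId"]["S"])
```

A relational, graph or time-series workload would instead reach for the engine built for that shape of data, which is the diversity argument AWS's database announcements lean on.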
In this sense, AWS has announced three new functionalities to broaden the choice of its customers and improve those aspects when managing databases. They are as follows: