This document provides general information about the Pure Storage architecture as it compares to SolidFire. It is not intended to be exhaustive; it covers the architectural elements where the two solutions differ and where those differences affect overall suitability for the needs of the Next Generation Data Center (NGDC).
Today, business continuity is fundamental for any company. Enterprises are engaged in a profound digital transformation and turn to IT for all of their most mission-critical activities. Downtime can paralyze an entire organization: the most resilient companies are those able to manage technology failures and keep the business up and running at all times. Ensuring business continuity means securing greater competitive advantage and deeper customer engagement.

However, achieving high-end business continuity, with a Recovery Point Objective (RPO) and a Recovery Time Objective (RTO) of zero (that is, no data loss and no downtime), has typically been the preserve of large enterprises, which, precisely because of their size, can sustain the necessary investment and manage the associated complexity. For most companies, the cost of this level of business continuity has always proved prohibitive.
Digital technology is so intrinsic to our personal lives that we barely think
about the fitness trackers and smartphones that are as much a part of
us as the clothing we wear. For organizations, the shift to digital is more
disruptive and the stakes far higher. Digital transformation has been high
on the executive agenda for a few years and, for many, harnessing data
has become a significant force for value and revenue creation.
Agility has emerged as an organizational superpower as businesses
grapple with change and uncertainty in their own customer bases and in
the global political and economic landscape. IT has been thrust into the
spotlight as the unwitting hero of the story – tasked with delivering on
the digital vision, implementing all manner of applications and building
firm infrastructure foundations to support the latest digital initiatives.
In an increasingly on-demand world, it is this final point that often gets
overlooked in the rush for the next shiny new technologies.
Thank you for all your years of service, dear disaster recovery. Without you at our side all this time, nothing would have been the same. But it is time to leave the disaster/recovery mindset of the 1970s behind and adopt a business continuity model suited to today's always-on world.
As flash storage has permeated mainstream computing, enterprises are coming to better understand
not only its performance benefits but also the secondary economic benefits of flash deployment at
scale. This combination of benefits — lower latencies, higher throughput and bandwidth, higher
storage densities, much lower energy and floor space consumption, higher CPU utilization, the need
for fewer servers and their associated lower software licensing costs, lower administration costs, and
higher device-level reliability — has made the use of AFAs an economically compelling choice
relative to legacy storage architectures initially developed for use with hard disk drives (HDDs). As
growth rates for hybrid flash arrays (HFAs) and HDD-only arrays fall off precipitously, AFAs are
experiencing one of the highest growth rates in external storage today — a compound annual growth
rate (CAGR) of 26.2% through 2020.
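For readers less familiar with the metric, the standard compound annual growth rate formula is sketched below; the 26.2% rate comes from the text above, while the five-year horizon in the arithmetic is an assumption chosen purely for illustration.

```latex
\mathrm{CAGR} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1
\qquad\Longrightarrow\qquad (1 + 0.262)^{5} \approx 3.2
```

In other words, a market segment compounding at that rate roughly triples in five years.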
Data is growing at an astonishing rate, and that growth shows no sign of slowing. New techniques in data processing and analytics, including AI, machine learning, and deep learning, allow specially designed applications not only to analyze data but to learn from the analysis and make predictions.
Processing that data requires computer systems built around multi-core CPUs or GPUs, parallel processing, and extremely fast networks. Legacy storage solutions, however, are based on architectures that are decades old, difficult to scale, and poorly suited to the massive concurrency that machine learning requires. Legacy storage is becoming a bottleneck in big data processing, and a new storage technology is needed to meet the performance needs of data analytics.
Deep learning opens up new worlds of possibility in artificial intelligence, enabled by advances in computational capacity, the explosion in data, and the advent of deep neural networks. But data is evolving quickly and legacy storage systems are not keeping up. Advanced AI applications require a modern all-flash storage infrastructure that is built specifically to work with high-powered analytics.
Interest in machine learning has exploded over the past decade. You see machine learning in computer science programs, industry conferences, and the Wall Street Journal almost daily. For all the talk about machine learning, many conflate what it can do with what they wish it could do. Fundamentally, machine learning means using algorithms to extract information from raw data and represent it in some type of model. We use this model to infer things about other data we have not yet modeled. Neural networks are one type of model for machine learning; they have been around for decades.
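As a minimal sketch of the fit-then-infer pattern described above (the scikit-learn library and the iris dataset are illustrative assumptions; the passage names no specific tools):

```python
# Minimal fit-then-infer sketch: extract information from raw data into a
# model, then use the model to infer things about data not yet modeled.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                      # "raw data"
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)              # "some type of model"
model.fit(X_train, y_train)                            # learn from the data

print("sample inferences:", model.predict(X_new)[:5])  # infer on unseen data
print(f"accuracy on unseen data: {model.score(X_new, y_new):.2f}")
```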
Everybody’s talking about big data. Huge promises have been made about its role in driving enterprises forward. But few organizations are realizing its true benefits.
For those able to put data to good use, there's much to be excited about. Data is transforming not only businesses but entire industries, and the world as we know it. Today organizations are harnessing big data to transform healthcare, provide eyesight for the visually impaired, and bring us closer to autonomous cars.
Apache Spark has become a critical tool for all types of businesses across all industries. It is enabling organizations to leverage the power of analytics to drive innovation and create new business models.
The availability of public cloud services, particularly Amazon Web Services, has been an important factor in fueling the growth of Spark. However, IT organizations and Spark users are beginning to run up against limitations in relying on the public cloud—namely control, cost and performance.
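For context, here is a minimal PySpark sketch of the kind of analytics workload described above; the dataset path and column names are hypothetical placeholders:

```python
# Hypothetical aggregation job: Spark parallelizes the scan and the group-by
# across the cluster, which is where storage throughput and control over the
# environment become critical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-analytics").getOrCreate()

df = spark.read.parquet("s3a://example-bucket/sales/")  # placeholder path
summary = (
    df.groupBy("region")                                # hypothetical column
      .agg(F.sum("revenue").alias("total_revenue"))     # hypothetical column
      .orderBy(F.desc("total_revenue"))
)
summary.show()
spark.stop()
```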
Not all flash storage architectures are created equal. Read this vendor comparison report and learn about the differences between solutions from NetApp® and Pure and how to find the best all-flash arrays to meet your business needs.
To stay relevant in today's competitive, digitally disruptive market, and to stay ahead of your competition, you have to do more than just store, extract, and analyze your data: you have to draw true business value out of it. Fail to evolve, and your organization may be left behind as competitors ramp up the pace of their decision-making. This means deploying cost-effective, energy-efficient solutions that let you quickly mine and analyze your data for valuable information, patterns, and trends, which in turn enables faster ad-hoc decisions, lower risk, and greater innovation.
Health systems moving to integrated care business models are crying out for more active repositories to replace image archives as they move toward collaborative models of care. Yet traditional storage vendors continue to rely on three-year buying models and costly forklift migrations, and performance still does not meet clinicians' requirements. Pure Storage offers an alternative: a renewable, upgradable, scale-out, high-performance storage environment for images at a low TCO that ensures the latest technology and market-leading support and maintenance for 10+ years.
In the new age of big data, applications are leveraging large farms of powerful servers and extremely fast networks to access petabytes of data served for everything from data analytics to scientific discovery to movie rendering. These new applications demand fast and efficient storage, which legacy solutions are no longer capable of providing.
The verification workload comprises hundreds of millions of small files, very high metadata rates, and extremely demanding read, write, and delete performance requirements.
The Pure Storage FlashBlade product's innovative design provides high IOPS and throughput, low latency, and fast deletes, yielding an average 25% faster wall-clock completion time.
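As a toy illustration of why such a workload is metadata-bound (this is not the actual verification benchmark, just a scaled-down sketch of the create/read/delete pattern it stresses):

```python
# Time N small-file create/read/delete cycles; in the real workload the file
# count runs to hundreds of millions, which is what makes fast deletes matter.
import os
import tempfile
import time

N = 10_000                        # scaled down for illustration
payload = os.urandom(4096)        # a 4 KiB "small file"

with tempfile.TemporaryDirectory() as root:
    start = time.perf_counter()
    for i in range(N):            # create
        with open(os.path.join(root, f"f{i}"), "wb") as f:
            f.write(payload)
    for i in range(N):            # read
        with open(os.path.join(root, f"f{i}"), "rb") as f:
            f.read()
    for i in range(N):            # delete
        os.remove(os.path.join(root, f"f{i}"))
    elapsed = time.perf_counter() - start
    print(f"wall clock for {3 * N} metadata-heavy operations: {elapsed:.2f}s")
```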
The evolution of genomics in recent decades has seen the volume of sequencing rise dramatically as a result of lower costs. Massive growth in the quantities of data created by sequencing has greatly increased analytical challenges, and placed ever-increasing demands on compute and storage infrastructure. Researchers have leveraged high-performance computing environments and cluster computing to meet demands, but today even the fastest compute environments are constrained by the lagging performance of underlying storage.
FlashBlade fabric modules implement a unified network that connects all blades to each other and to the data center network. With full connectivity, all blades can serve as client connection endpoints, as authorities that process client requests, and as storage managers that transfer data to and from flash and NVRAM.
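To make the idea concrete, here is a toy sketch (not Pure's implementation) of the routing pattern the paragraph describes: a client may connect to any blade, and the request is forwarded to a deterministically chosen authority blade.

```python
# Toy model of symmetric scale-out routing: every blade can accept client
# connections, and the authority for each object is picked by hashing its
# key, so no blade is a special case. The blade count here is hypothetical.
import hashlib

BLADES = [f"blade-{i:02d}" for i in range(15)]

def authority_for(object_key: str) -> str:
    """Pick the blade that acts as authority for this key."""
    digest = int(hashlib.sha256(object_key.encode()).hexdigest(), 16)
    return BLADES[digest % len(BLADES)]

entry_point = BLADES[0]  # the blade a client happened to connect to
for key in ("genome.fastq", "scene42.exr", "logs/0001"):
    print(f"{key!r} via {entry_point} -> authority {authority_for(key)}")
```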
Pure Storage has significant expertise creating scalable, enterprise-class, flash-optimized storage platforms, and with FlashBlade it has crafted a turnkey, purpose-built platform well suited to handling the performance and capacity requirements of genomics workflows cost-effectively. Pure Storage has differentiated itself from more established enterprise storage providers by delivering an industry-leading customer experience, as shown by its extremely high Net Promoter Score (NPS), which indicates that it knows how to meet, and is committed to meeting, customer requirements. Whether genomics practitioners plan an on-premises or a cloud-based deployment for their genomics workflows, they should consider the performance, cost, and patient-care advantages of the Pure Storage FlashBlade when choosing a platform, particularly if they plan to retain data for a long time and use it frequently.
The tremendous growth of unstructured data is creating huge opportunities for organizations. But it is also creating significant challenges for the storage infrastructure. Many application environments that have the potential to maximize unstructured data have been restricted by the limitations of legacy storage systems. For the past several years—at least—users have expressed a need for storage solutions that can deliver extreme performance along with simple manageability, density, high availability and cost efficiency.
As flash costs continue to drop and new, flash-driven designs help to magnify the compelling economic advantages AFAs offer relative to HDD-based designs, mainstream adoption of AFAs —first for primary storage workloads and then ultimately for secondary storage workloads — will accelerate. Well-designed AFAs that still leverage legacy interfaces like SAS will be able to meet many performance requirements over the next year or two.
Those IT organisations that aim to best position themselves for future growth will want to look at next-generation AFA offerings, as the future is no longer flash-optimised architectures (designs in which HDD-era tenets still had to be optimised around): it is flash-driven architectures.
Within the next 12 months, solid-state arrays will improve in performance by a factor of 10 and double in density and cost-effectiveness, thereby changing the dynamics of the storage market. This Magic Quadrant will help IT leaders better understand SSA vendors' positioning in the market.