We are regularly told that checking our own bodies for signs of change is a
good thing. Early diagnosis of disease gives more of a fighting chance of
curing the problem. So, in the IT world, where we assume all of our backups
have been taken successfully, how often should we be checking the results and
ensuring the backup will work on the fateful day we need to do a restore?
This question was posed by Federica Monsone on Twitter this week. Here’s
an attempt to provide an answer.
First of all, let’s consider the whole point of taking backups. Excluding
the inappropriate use of backup for archiving, the backup process is there to
ensure you can maintain continuous access to your data in the event of
unforeseen circumstances. Usually (but not exclusively) these are data loss
due to equipment or power failure, data corruption (whether software bug or
malicious), a... (more)
Like many people, the other week I downloaded and installed Google Drive.
This is the long-awaited competitor to services like Dropbox and
Microsoft’s SkyDrive, offering free online storage with the ability to
upgrade to higher capacity at a cost. Dropbox and the various other
lookalikes have been around for some time, so is Google coming to this market
too late and is the party already over?
The concept of Cloud Storage is pretty simple. Services like Dropbox allow
you to share a local folder on your PC or Mac and have that data replicated
into “the cloud”. Fr... (more)
At the end of August 2012, Amazon Web Services released their latest service
offering – a long-term archive service called Glacier. As a complement to
their existing active data access service S3, Glacier provides long-term
storage for “cold” data – information that has to be retained for a
long time but doesn’t require frequent access.
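One concrete detail of how Glacier handles archive integrity: uploads are identified by a SHA-256 "tree hash", where the payload is split into 1 MiB chunks, each chunk is hashed, and the digests are combined pairwise until a single root remains (for anything under 1 MiB this reduces to a plain SHA-256). A sketch of that computation:

```python
import hashlib

CHUNK = 1024 * 1024  # Glacier hashes the payload in 1 MiB chunks

def tree_hash(data: bytes) -> str:
    """SHA-256 tree hash of the kind Glacier uses to verify uploads."""
    # Hash each 1 MiB chunk of the payload.
    hashes = [hashlib.sha256(data[i:i + CHUNK]).digest()
              for i in range(0, len(data), CHUNK)] or [hashlib.sha256(b"").digest()]
    # Combine digests pairwise until a single root digest remains.
    while len(hashes) > 1:
        paired = [hashlib.sha256(a + b).digest()
                  for a, b in zip(hashes[::2], hashes[1::2])]
        if len(hashes) % 2:  # an odd final digest is carried up unchanged
            paired.append(hashes[-1])
        hashes = paired
    return hashes[0].hex()
```

The tree structure matters for an archive service: it lets very large archives be uploaded in parts, with each part verified independently and the whole still checkable from one root digest.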
What Exactly is Glacier?
Many organisations need to retain data in archive format for extended periods
of time. This may be for regulatory or compliance purposes, or simply
part of their normal business process. Good examples are medical,
HP have joined the Infrastructure as a Service (IaaS) market and released
their HP Cloud service in public beta. Here’s the announcement press
release. The services on offer are:
Available Now as Public Beta
Compute – on-demand server instances.
Cloud Object Storage – object-based storage using RESTful APIs.
Content Delivery Network – local distribution of web content.
Still in Private Beta
Cloud Block Storage – persistent data for compute images.
Relational Database for MySQL – managed cloud databases.
There’s also the HP Identity Service for managing key & token access.
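The Cloud Object Storage item is the architecturally interesting one: objects are read and written over HTTP rather than block or file protocols. A sketch of what a RESTful object PUT looks like, built with the standard library against a hypothetical endpoint — the URL, container name, and token-header convention here are placeholders for illustration, not HP's documented API:

```python
import urllib.request

def build_put_object(base_url: str, container: str, name: str,
                     data: bytes, token: str) -> urllib.request.Request:
    """Construct (but don't send) an HTTP PUT that stores one object."""
    req = urllib.request.Request(
        url=f"{base_url}/{container}/{name}",
        data=data,
        method="PUT",
    )
    # Object stores typically authenticate each request with a per-request token.
    req.add_header("X-Auth-Token", token)
    req.add_header("Content-Type", "application/octet-stream")
    return req
    # Actually sending it would be: urllib.request.urlopen(req)
```

The point of the REST model is exactly this: every operation is a self-describing, stateless HTTP request, which is what makes object storage easy to drive from any cloud application.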
It’s pretty easy to pick holes in the current legacy storage products,
especially when it comes to integration within both public and private cloud
deployments. However, it’s worth discussing exactly what is required when
implementing cloud frameworks, as the way in which storage is deployed is
radically different from the traditional model of storage operations. In
this post we will look at why traditional methods of storage management need
to change and how that affects the way in which the hardware itself is used.
This leads to a discussion on APIs and how they are essential... (more)
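To make the API point concrete: in a cloud framework, provisioning a volume stops being a change ticket to the storage team and becomes a policy-checked call that the framework makes on the consumer's behalf. A toy sketch of that self-service model — the class and its rules are invented for illustration, not any vendor's interface:

```python
class VolumeService:
    """Toy self-service provisioner: a policy check replaces the manual change ticket."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.volumes: dict[str, int] = {}

    def create(self, name: str, size_gb: int) -> str:
        """Allocate a volume if the pool can hold it; reject it otherwise."""
        used = sum(self.volumes.values())
        if used + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted -- policy rejects the request")
        self.volumes[name] = size_gb
        return name
```

The shift is that the decision logic (capacity, tiering, quotas) lives behind an API the orchestration layer can call, which is precisely where traditional, operator-driven storage management falls short.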