We are regularly told that checking our own bodies for signs of change is a
good thing. Early diagnosis of disease gives more of a fighting chance of
curing the problem. So, in the IT world, where we assume all of our backups
have been taken successfully, how often should we be checking the results and
ensuring the backup will work on the fateful day we need to do a restore?
This question was posed by Federica Monsone on Twitter this week. Here’s
an attempt to provide an answer.
First of all, let’s consider the whole point of taking backups. Excluding
the inappropriate use of backup for archiving, the backup process is there to
ensure you can maintain continuous access to your data in the event of
unforeseen circumstances. Usually (but not exclusively) these are data loss
due to equipment or power failure, data corruption (whether software bug or
malicious), a... (more)
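The full post works through how often those checks should happen; as a minimal sketch of the kind of automated restore test being advocated, the snippet below restores a sample of files from a backup and compares checksums against values recorded at backup time. The paths and the manifest format are assumptions for illustration, not a specific backup product's layout.

```python
# Hypothetical sketch: verify a file-based backup by restoring a sample of
# files to a temporary location and comparing checksums against values
# recorded at backup time. Paths and manifest format are assumptions.
import hashlib
import json
import shutil
import tempfile
from pathlib import Path

BACKUP_ROOT = Path("/mnt/backup/latest")      # assumed backup location
MANIFEST = BACKUP_ROOT / "manifest.json"      # assumed {relative_path: sha256} map

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_sample(sample_size: int = 10) -> bool:
    """Restore a handful of files from the backup and check their hashes."""
    manifest = json.loads(MANIFEST.read_text())
    ok = True
    with tempfile.TemporaryDirectory() as scratch:
        for rel_path, expected in list(manifest.items())[:sample_size]:
            restored = Path(scratch) / Path(rel_path).name
            shutil.copy2(BACKUP_ROOT / rel_path, restored)   # the "restore" step
            if sha256_of(restored) != expected:
                print(f"MISMATCH: {rel_path}")
                ok = False
    return ok

if __name__ == "__main__":
    print("backup sample verified" if verify_sample() else "backup FAILED verification")
```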
It’s pretty easy to pick holes in the current legacy storage products,
especially when it comes to integration within both public and private cloud
deployments. However, it’s worth discussing exactly what is required when
implementing cloud frameworks, as the way in which storage is deployed is
radically different from the traditional model of storage operations. In
this post we will look at why traditional methods of storage management need
to change and how that affects the way in which the hardware itself is used.
This leads to a discussion on APIs and how they are essential... (more)
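To make the contrast concrete, here is a rough sketch of what API-driven provisioning looks like compared with a manual GUI or CLI workflow: a volume is requested programmatically and the array or cloud service handles placement. The endpoint, payload fields and token are invented for illustration and do not refer to any real product's API.

```python
# Illustrative sketch only: provisioning a volume through a hypothetical
# storage REST API rather than a manual workflow. Endpoint, payload fields
# and authentication scheme are all assumptions, not a real product's API.
import requests

API_BASE = "https://storage.example.com/api/v1"   # assumed endpoint
TOKEN = "replace-with-a-real-token"               # assumed auth token

def create_volume(name: str, size_gb: int, tier: str = "standard") -> dict:
    """Request a new volume; placement decisions are left to the service."""
    response = requests.post(
        f"{API_BASE}/volumes",
        json={"name": name, "size_gb": size_gb, "tier": tier},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    vol = create_volume("web-tier-data", 100)
    print("provisioned:", vol)
```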
At the end of August 2012, Amazon Web Services released their latest service
offering – a long-term archive service called Glacier. As a complement to
their existing active data access service, S3, Glacier provides long-term
storage for “cold” data – information that has to be retained for a
long time but doesn’t require frequent access.
What Exactly is Glacier?
Many organisations need to retain data in archive format for extended periods
of time. This may be for regulatory or compliance purposes, or simply
part of their normal business process. Good examples are medical,
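As a rough sketch of the "cold data" workflow Glacier is built for, the snippet below uploads an archive to a vault and shows how a (deliberately slow) retrieval job would later be requested. It uses boto3, which post-dates the original post, and the vault name and file path are assumptions for illustration.

```python
# Rough sketch of the Glacier workflow using boto3 (which post-dates this
# post): upload an archive now, initiate a slow retrieval job when needed.
# Vault name, region and file path are assumptions.
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")
VAULT = "compliance-archive"        # assumed vault name

def archive_file(path: str) -> str:
    """Upload a file to the vault; the returned archive ID is needed for retrieval."""
    with open(path, "rb") as fh:
        result = glacier.upload_archive(
            vaultName=VAULT,
            archiveDescription=path,
            body=fh,
        )
    return result["archiveId"]

def request_retrieval(archive_id: str) -> str:
    """Start a retrieval job; Glacier makes the data available hours later."""
    job = glacier.initiate_job(
        vaultName=VAULT,
        jobParameters={"Type": "archive-retrieval", "ArchiveId": archive_id},
    )
    return job["jobId"]

if __name__ == "__main__":
    glacier.create_vault(vaultName=VAULT)            # idempotent if it already exists
    aid = archive_file("records-2004.tar")           # assumed example file
    print("archived as", aid)
```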
There are three new white papers available on the site that may be of
interest. They are:
Create a Smarter Storage Strategy
Availability and the Cloud
The Economic Impact of File Virtualization: Reducing Costs and Improving
Efficiency for File-Based Storage
As usual, I welcome any feedback as to whether this part of the site is
Disclaimer: For each subscription I receive a payment which goes to fund
the runn... (more)
I’ve been reading a book called “The Mighty Micro” by a namesake of
mine, Dr Christopher Evans, a computer scientist from the 1970s, who sadly
died at the age of only 48, not long after his book had been published.
I’m currently into the chapter discussing middle-term futures, which in
this case refers to the years 1983-1990. It’s interesting to see how some
of his thinking was close to the mark and some wasn’t. Here are some
“By the late 1980’s, if data compression techniques continue on their
present curve, it will be possible to store very large books, perhaps eve... (more)