
Interesting Articles - Backup

Here is a collection of interesting articles related to data backup, restoration and disaster recovery.  If you have any comments on or reactions to these articles, please feel free to contact us with your views.



Surviving equipment theft

According to silicon.com, more than 15,000 laptops were stolen in 2006 … in London alone!  And it isn’t just an issue for the careless: the MOD has reported the theft of 658 laptops over the last four years as well.

Losing the laptop hardware is annoying enough, but consider how difficult it might be to rebuild the data that was on it – the loss could pose a serious risk to your business.

Avoiding the difficulty of rebuilding the data is easy … just back it up regularly.  That sounds simple, but it raises the question of how best to do it so that you have maximum flexibility of restoration when you need it (always in an emergency!), along with the certainty and security that the backup is being done regularly and successfully.

A good option is to consider using an internet backup service like Rapid Backup.  For a fixed monthly fee (dependent upon the volume of data to be backed up), the service will regularly make a secure, encrypted copy of your nominated data.  And should you need it back in a hurry, the same service will provide that data across the internet so that you don’t have to be in your office to receive it.

You can set backup intervals to suit your data’s volatility, right down to backing up literally every minute.  To keep the service practicable, the Rapid Backup software is intelligent enough to back up only the data that has changed since the last backup, so the transfer time across the internet is kept to a minimum.
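To give a feel for how such incremental selection can work (this is an illustrative sketch, not Rapid Backup’s actual implementation, which isn’t published here), one common approach is to compare each file’s checksum against a manifest saved by the previous run and upload only the files that differ:

```python
import hashlib
import json
import os

MANIFEST = "backup_manifest.json"  # hypothetical manifest from the previous run

def file_digest(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(root):
    """Yield files under `root` that are new or modified since the last run."""
    try:
        with open(MANIFEST) as f:
            previous = json.load(f)
    except FileNotFoundError:
        previous = {}  # first run: every file counts as changed
    current = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            current[path] = file_digest(path)
            if previous.get(path) != current[path]:
                yield path
    with open(MANIFEST, "w") as f:
        json.dump(current, f)

# Only the files yielded here would need to cross the internet.
for path in changed_files("/data"):  # hypothetical data directory
    print("would upload:", path)
```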

And should the worst happen whilst you’re away from the office?  Well, you won’t know where your laptop is, but you’ll be certain of where your data is … and sure that you can get it all back quickly and securely.

To find out more about Rapid Backup, or to sign up for a FREE 30 DAY TRIAL, go to this page.


Managing large volumes of data

We all know that regularly backing up your data is good practice and that using an automatic internet service makes a lot of sense as the backup is then kept off site should a physical disaster strike your offices.

But how sensible is this when the volume of data that needs backing up gets quite large, say over 20GB or so?  The potential problem lies in the speed of your internet connection – for both the backup phase (data upload) and any restorations (data download).

Let’s look at the backup phase first.  Most internet backup services, such as our own Rapid Backup, use an incremental/differential approach so that only the data that has changed since the last backup is actually copied over the internet.  This is eminently sensible, but periodically the backup software needs to consolidate all of the incremental changes into a single new backup image – which means uploading the whole 20GB or more.  At a typical broadband upload speed of 500kbits/second, a 20GB backup would take around 87 hours!

A similar problem exists when you do the first backup on a new internet service before the incrementals can begin.

The restore phase is less troublesome because broadband download rates are much faster.  However, downloading 20GB on a dedicated 10Mbits/second connection can still take quite a while – typically around 4.5 hours.  And if the broadband connection is shared by other users during the restore, the time extends considerably (at 2Mbits/second it would take about 22 hours).
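The arithmetic behind these figures is simple: divide the volume of data (in bits) by the link speed (in bits per second).  A quick sketch, using plain decimal units (the figures above were clearly rounded with slightly different kilo/mega conventions):

```python
def transfer_hours(gigabytes, megabits_per_second):
    """Hours needed to move `gigabytes` of data over a link of the given speed.
    Decimal units: 1 GB = 1e9 bytes, 1 Mbit/s = 1e6 bits/s."""
    bits = gigabytes * 1e9 * 8
    return bits / (megabits_per_second * 1e6) / 3600

print(transfer_hours(20, 0.5))  # 0.5 Mbit/s upload   -> ~88.9 hours
print(transfer_hours(20, 10))   # 10 Mbit/s download  -> ~4.4 hours
print(transfer_hours(20, 2))    # 2 Mbit/s shared     -> ~22.2 hours
# For comparison, a direct USB 2.0 copy at ~20 Mbytes/s (160 Mbit/s):
print(transfer_hours(20, 160))  # -> ~0.3 hours, i.e. about 17 minutes
```

That last line is the whole case for ‘seeding’, described next: a direct local copy takes minutes rather than days.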

So what’s the answer?

A technique called ‘seeding’ is the answer.  In ‘seeding’, the backup service provider ships the customer an externally connectable hard disk that is large enough to hold all of the data for the initial backup (or the periodic consolidation backup).  The customer copies all of the to-be-backed-up data onto the hard disk at direct connection speeds (more than 20Mbytes/second for USB 2.0, for example) and ships the disk back to the service provider.

The service provider then ‘backs up’ the data on the hard disk into the backup system at their data centre, again at direct connection speeds.

Thereafter, the customer can use the internet as normal for the regular incremental/differential backups until the next consolidation backup is required – at which point the ‘seeding’ process is repeated.

The ‘seeding’ process can also be used when a large volume of data needs to be restored.  This time, though, the service provider copies the requested backup image onto the hard disk at their data centre, then ships it to the customer, who can restore the files they need at direct connection speeds.

RapidHost offer a ‘seeding’ service as part of the Rapid Backup service and recommend that customers backing up more than 20GB of data consider adding it to their service options.  Please feel free to contact us and we can chat through whether ‘seeding’ is right for your needs.

To find out more about Rapid Backup, or to sign up for a FREE 30 DAY TRIAL, go to this page.

Developers: Keeping track of customer-specific application versions

Despite your best intentions at the outset, as a developer you often get drawn into creating customer-specific versions of your application software.  Whilst this gives the customer a more personal service, it gives you … a version control issue to manage and potential upgrade and maintenance headaches!

Adding to this situation is the problem of knowing where the relevant backup copies of each version are.  And as is the way – you only need that backup copy when you are under pressure because things are going horribly wrong.

One solution to ease this problem is to use a remote backup service like RapidHost’s Rapid Backup.  With Rapid Backup your backups are available directly online, which is especially handy if you are on a customer’s site when the problems occur.  They are also centralised, so from a single browser interface you can sift through all of the copies to decide which is the most suitable version of the specific files you want.

And if the selected ones don’t fix the problem you can keep going further back in time until you find the files that do – all from the internet, wherever you are and whatever the time of day.

Whilst Rapid Backup won’t solve all of your customer-specific software problems, it will help you overcome one of the biggest and most urgent. 

Call us today and see how we can help you manage your backups better.

What are the options for backup?

It’s one of those things that we all know we should prioritise.  Yet, all too often, taking full care of an organisation’s data fails to stay at the top of people’s ‘to do’ list.

Sometimes it is a case of not having thought through the data protection regime well enough – leaving major risks unaddressed on the basis that the worst won’t ever happen.  Other times it’s a case of delegating too far – a non-technical member of staff is ‘nominated’ to do the backup without having been trained to recognise the significance of various system warnings.

And surprisingly often … it is both together.  A real recipe for disaster.

Uninspiring but essential

Whilst backing up data is technically uninspiring it is essential and should be given due consideration.  The technologies available are typically simple to understand and relatively low-cost to implement.

The simplest form of backup is to copy key data from one disk to another, even if it is in the same computer.  This would allow the organisation to survive the accidental deletion or corruption of a file, or even a single hard disk failure.  But a computer failure would leave the backup disk unreadable until it was relocated into another computer.

The next level of protection is to copy the data to storage media located in another computer within the organisation’s premises (typically a server).  This approach makes sure that key data is available even when the original computer itself is not, because of a hardware fault (or even theft!).  However, it doesn’t cover the risk of server failure, including physical disaster (fire, flood) at the server’s location.

There is an abundance of off-the-shelf software that can schedule and manage the actual copying, some of which is intelligent enough to make incremental copies (rather than whole copies) which can save time. 

By way of increasing protection further, there is a disk management system commonly used on servers (rather than PCs) called RAID.  This isn’t a backup regime as such, but it does offer good data continuity by using multiple disks in the same computer to keep running copies of data.  Being automated, it doesn’t need users to remember to make copies periodically, and it copes very well with single hard disk failures.  The trade-off is that some of the disk space within the computer is reserved for the redundant copies.  By either copying local data to the server or actually locating application data on the server itself, users get the benefit of the RAID protection.
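To illustrate the redundancy idea (a much-simplified, RAID 5-style sketch, not what a real controller does byte for byte): the parity block is simply the XOR of the data blocks, which allows the contents of any single lost disk to be recomputed from the survivors:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Data striped across three disks, with parity held on a fourth.
disk1 = b"customer"
disk2 = b"accounts"
disk3 = b"invoices"
parity = xor_blocks([disk1, disk2, disk3])

# If disk2 fails, its contents are rebuilt from the survivors plus parity.
rebuilt = xor_blocks([disk1, disk3, parity])
assert rebuilt == disk2  # b'accounts' is recovered without any backup
```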

The next level of protection is to copy data (typically from a server) to removable storage media (CD, DVD, DAT tape, USB disks, and so on) and store the media in a different location from the server.  This ensures that the data can survive a physical disaster at the server’s location, but it does rely on staff remembering to remove the recently used media and to cycle previously used media appropriately – even when the ‘usual person’ is away on holiday or ill.

Combating this risk leads to the highest level of local protection – and that is to automate the removal of the data from the organisation’s site by using the internet.  By intelligently copying the data across the internet to hard disks that are housed on third-party managed servers (which are themselves backed up), the organisation mitigates all of the local risks to its data.  It can survive hard disk crashes, PC and server hardware failures, and physical disasters without needing to rely on staff to remember to start backups or remove/cycle backup tapes/disks.

And by storing the data on internet accessible servers, it also means that the organisation can restore data when needed to a wider variety of locations depending upon the urgency at the time.  The backed up data is transferred and stored in an encrypted format so security is not compromised.
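Rapid Backup’s own encryption scheme isn’t detailed here, but the principle is client-side encryption: data is encrypted before it leaves the machine, so neither an eavesdropper nor the storage provider can read it without the key.  A minimal sketch using the third-party `cryptography` package (an assumed stand-in, not the actual Rapid Backup code):

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, derived from the user's passphrase
cipher = Fernet(key)

with open("accounts.db", "rb") as f:  # hypothetical file to back up
    ciphertext = cipher.encrypt(f.read())

# Only `ciphertext` crosses the internet and sits on the remote server;
# without the key it is unreadable.  Restoring reverses the process:
plaintext = cipher.decrypt(ciphertext)
```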

The risk inherent in this approach would be a sustained period of internet downtime during which the backups couldn’t take place.  From a data protection point of view there is one further level of protection that can be deployed to avoid even this risk – and that is to opt for a professionally managed server approach.

In this model the servers themselves, and thereby the data they contain, are physically located in a third-party’s data centre and professionally managed by them.  This management includes a multi-layered backup system based on highly-resilient hardware and networks, typically involving multiple locations for ultimate storage. 

The system is driven automatically but closely monitored by technicians.  Depending upon the nature of the organisation’s business, backups can be taken daily, hourly or even continuously.

Getting the mix right

There isn’t a ‘one size fits all’ approach to protecting data.  The key is to adopt an approach that has been consciously designed to provide the correct level of protection in a way that staff can be sure to manage.
 
It is advisable to think about using multiple levels of protection so that surviving a data problem is as undisruptive as possible.  For example, RAID disk systems survive a single hard disk failure without stopping.  Coupled with well-managed off-site daily backups to protect against a whole server failure, this could provide the optimum protection for an organisation.

What about price?

One aspect of deciding on the correct backup regime for an organisation that we haven’t touched on yet is price.  Clearly each approach to data backup has a different price point, so which is best?

The answer lies in assessing the value of the data that is being protected.  If losing it, or even just losing a day’s worth of data, would cost the organisation £1,000s then paying even £100s to protect it might make very good business sense.

And bear in mind that assessing the real cost of losing data can be very hard to do.  It is not always just the time cost of re-keying lost data.  If the organisation trades on the web and a number of customer purchases are ‘lost’ then the trust that the customers have built up may be lost as well – which could mean that their future purchases will be made elsewhere and word-of-mouth negativity could impact the future purchases of unaffected customers too.  In this example, the total financial impact on the organisation could be many times greater than the value of the actual transactions that were disrupted.

Finding the right balance between cost and appropriateness can be very hard.  The best approach is to err on the side of caution, but that is not always possible.

If you are unsure of the best approach to protecting data in your organisation, then give us a call and we can talk you through the options.

Summary

Method: Copy to another disk in the same computer
Risks covered: Single file corruption or deletion; single hard disk failure
Risks not covered: Local computer failure

Method: Copy to a disk in another computer, typically a server
Risks covered: Above + local computer failure
Risks not covered: Server single disk failure; server hardware failure; fire/flood at server location

Method: Copy to, or locate data on, a disk in another computer, being a server using RAID technology
Risks covered: Above + server single disk failure
Risks not covered: Server hardware failure; fire/flood at server location

Method: Copy to removable media, typically from a server
Risks covered: Above + server hardware failure; fire/flood at server location
Risks not covered: Media not being taken off-site regularly or cycled correctly

Method: Copy to a disk in a managed location across the internet
Risks covered: Above + media not being taken off-site regularly or cycled correctly
Risks not covered: Prolonged internet downtime at the organisation's location, so that backups cannot be taken

Method: Fully managed servers in a 3rd-party data centre that are themselves backed up to multiple locations
Risks covered: Above + prolonged internet downtime at the organisation's location
Risks not covered: -

Protecting highly volatile files

Backup/archiving systems, like RapidHost's Rapid Backup, offer users the ability to configure how often to take copies of their data.  This is typically set to 'once a day' by most users, but can be configured down to hourly or even 'every minute' in extreme circumstances, such as when some of the files are highly volatile. 

However, using the backup system this way means that in order to protect a few specific files the system is potentially backing up the whole of the user's data on a near-continuous basis.  This could have a significant effect on system performance.

A better approach is to identify just those files/databases that need this level of protection and have them backed up on a continuous basis, whilst the less volatile files can be copied, say, once a day.

The latest version of Rapid Backup does just this with a feature called Continuous Data Protection (CDP).  CDP enables files to be backed up automatically at the moment they change rather than on a periodic basis.  This means that all intra-day interim changes are backed up automatically.  Even if the local computer/server breaks down completely before users have had the chance to back up their data at the end of the day, every change made during the day has already been backed up safely by CDP and no data is lost.

For CDP to work, a memory-resident component of Rapid Backup is loaded onto the user's computer/server.  This monitors the nominated files and triggers a backup to copy any changes to the secure server whenever needed.
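A heavily simplified sketch of the idea (the real Rapid Backup component is a native memory-resident agent, not a Python script, and all names and paths below are hypothetical): watch the nominated files and trigger a copy whenever a modification time changes.

```python
import os
import shutil
import time

WATCHED = ["/data/orders.db", "/data/ledger.xls"]  # hypothetical nominated files
BACKUP_DIR = "/backup"  # stand-in for the secure remote server

def cdp_loop(poll_seconds=1):
    """Poll the nominated files; copy any that changed since the last look."""
    last_seen = {}
    while True:
        for path in WATCHED:
            try:
                mtime = os.stat(path).st_mtime
            except FileNotFoundError:
                continue  # file may be mid-rewrite; pick it up next pass
            if last_seen.get(path) != mtime:
                last_seen[path] = mtime
                # A real agent would ship an encrypted incremental copy to the
                # data centre; a local copy stands in for that here.
                shutil.copy2(path, BACKUP_DIR)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    cdp_loop()
```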

If you'd like to know more about CDP, or Rapid Backup in general, then give us a call.