
Interesting Articles - New Technology

Here is a collection of interesting articles related to new technologies. If you have any comments or reactions to these articles, please feel free to contact us with your views.



The advantages of Virtual Private Servers (VPS) versus shared and dedicated servers

For a long time the only platform choice in hosting was between low-cost shared servers and high-cost dedicated servers. Now there is a viable third choice: mid-priced Virtual Private Servers (VPS).

Virtual is the key word

A VPS offers many of the advantages of a dedicated server whilst physically running on shared hardware. This is achieved through smart virtualisation software that creates and manages a number of ‘virtual servers’ within the shared hardware.

Each ‘virtual server’ appears to the customer’s applications, databases and so on as if it really was a dedicated server, with a pre-set memory size, disk size and network bandwidth. It even allows the customer to fully configure the environment for their specific application and security needs.
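
By way of illustration, here is a minimal sketch of how such a ‘virtual server’ might be defined on a KVM host using the libvirt Python bindings. The names, paths and resource figures are hypothetical examples for this article only, not a description of our actual platform.

    import libvirt

    # Abbreviated, illustrative guest definition: pre-set memory size,
    # disk image and network bandwidth cap. Every name, path and figure
    # below is a hypothetical example.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>example-vps-01</name>
      <memory unit='MiB'>512</memory>           <!-- pre-set memory size -->
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>        <!-- pre-set disk image -->
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/example-vps-01.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
          <bandwidth>                           <!-- pre-set bandwidth, KiB/s -->
            <inbound average='12500'/>
            <outbound average='12500'/>
          </bandwidth>
        </interface>
      </devices>
    </domain>
    """

    conn = libvirt.open('qemu:///system')  # connect to the local hypervisor
    dom = conn.defineXML(DOMAIN_XML)       # register the virtual server
    dom.create()                           # boot it (assumes the disk image exists)
    conn.close()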

The virtualisation software then protects that ‘virtual server’ from the actions of all other ‘virtual servers’ that are operating on the same physical hardware. So, should another customer’s application go ‘rogue’ because of a coding error or become swamped with internet transactions, the other ‘virtual servers’ would carry on as normal.  Even a major crash within one ‘virtual server’ won’t affect the others.

The virtualisation software that makes VPS possible has become very advanced indeed and recently we’ve seen the entry of Microsoft into the market with their Hyper-V product.  This is strong evidence that the virtualisation approach is fast becoming mainstream, not just for hosting companies like us, but also for large IT users looking to improve the efficiency of their hardware utilisation.

The advantages of VPS over shared servers

The biggest drawback of using a shared environment for your websites or applications is the impact on your system’s performance and reliability from the other users you share it with.

It only takes one of the other users’ applications crashing badly for the whole shared server to stop and need rebooting. Similarly, another user’s website becoming very popular would slow your applications down, as it would consume a disproportionate amount of the shared system resources.

Under the VPS approach these issues just go away.  It’s as simple as that.

The virtualisation software protects each ‘virtual server’ from the others and isolates the key resources that have been configured. So if one VPS has been configured with 512 MB of RAM, then it always has that amount of memory available to it regardless of what other ‘virtual servers’ are requesting (even though the total pool of RAM is shared amongst all ‘virtual servers’).
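
The sketch below is a toy model of that reservation idea, not real virtualisation software: each guarantee is simply deducted from the finite physical pool when the ‘virtual server’ is created, so the same memory can never be promised twice.

    class PhysicalHost:
        """A shared physical machine carved into guaranteed allocations."""

        def __init__(self, total_ram_mb: int):
            self.total_ram_mb = total_ram_mb
            self.reserved_mb = 0
            self.guests = {}

        def create_vps(self, name: str, guaranteed_ram_mb: int) -> None:
            # Every guarantee must be backed by real, unreserved memory,
            # so no guest can ever be starved by its neighbours.
            if self.reserved_mb + guaranteed_ram_mb > self.total_ram_mb:
                raise RuntimeError("host full: guarantee cannot be honoured")
            self.reserved_mb += guaranteed_ram_mb
            self.guests[name] = guaranteed_ram_mb

    host = PhysicalHost(total_ram_mb=16 * 1024)           # one shared 16 GB machine
    host.create_vps("customer-a", guaranteed_ram_mb=512)
    host.create_vps("customer-b", guaranteed_ram_mb=2048)
    # customer-a keeps its full 512 MB however busy customer-b becomes,
    # because the guarantee was deducted from the pool at creation time.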

It is this protection and isolation that justifies the use of Private in the VPS name.

The advantages of VPS over dedicated servers

Before virtualisation software became available, the only alternative to the performance ‘lucky dip’ of shared servers was a dedicated configuration.

This required the hosting provider to purchase and configure new hardware for each customer. That in turn meant a substantial upfront capital investment by the hoster and the rapid consumption of their data centre space, power, network connections and so on. Because of this, fees for dedicated servers have always been set high.

The VPS concept changes the hoster’s cost model considerably. Now, the hoster can provide a near-dedicated quality of service using shared hardware, which reduces the consumption of their data centre racking, power and network connections. This cost reduction is passed on to the customer through reduced fees.

Another cost-related issue that VPS technology changes considerably only comes into play after a number of years of use: hardware refresh. In the traditional dedicated server model, when the server hardware reached a certain age (often three years) it made sense to replace it with new hardware. This would reduce the risk of failure as well as allow the customer to take advantage of improvements in processor speeds and so on.

The idea of refreshing the hardware after a period of continuous use still holds true for a VPS, but the big difference is in who pays. For a dedicated server, the full cost of the replacement hardware was borne by the customer through the fee levels, sometimes including a new set-up fee as well. In the VPS model, however, the cost is spread over a number of customers, so fees can remain low throughout multi-year contracts even when a hardware refresh is included.
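
The arithmetic below illustrates the point. The prices and the number of customers per host are hypothetical assumptions chosen for the example; they are not our actual figures.

    hardware_cost = 3000.0   # assumed cost of one replacement server (GBP)
    refresh_years = 3        # the typical refresh cycle mentioned above
    months = refresh_years * 12

    # Dedicated model: one customer carries the whole refresh cost.
    dedicated_per_month = hardware_cost / months

    # VPS model: the same physical box is shared by many customers.
    vps_per_host = 20        # assumed number of virtual servers per host
    vps_per_month = hardware_cost / months / vps_per_host

    print(f"Dedicated customer: ~GBP {dedicated_per_month:.2f}/month towards refresh")
    print(f"Each VPS customer:  ~GBP {vps_per_month:.2f}/month towards refresh")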

Conclusion

Virtualisation is being rapidly embraced by hosting companies and large IT-using organisations alike. As an approach it makes strong financial as well as technical sense, and even reduces carbon footprints. There will, of course, still be complex computing needs for which true dedicated servers are a necessity.

But for many commercial computing needs virtualisation offers significant resilience and performance improvements over shared servers, with no technical disadvantages. VPS has definitely come of age.

If you’d like to chat through your hosting needs and see whether a VPS solution would be of benefit, just give us a call.

 

Making ICT costs in education more predictable: a capital idea

Managing your ICT budget for predictability is hard in any organisation, but even more so for schools, colleges and universities.

The specific challenge that educational organisations face is the unusual mix of users for their ICT services. They have a relatively small number of ‘fixed’ users (the staff) and a relatively high number of transitory users (the students). To make matters worse, organisations working with older students also face a continual increase in the number of students who expect to bring their own mobile computing systems into the environment, as well as to have good offsite access.

An Impossible Task?

Squaring the budget circle can seem like an impossible task, especially for capital areas such as hardware, networking and core software purchases.  Whilst the user demand for more and better ICT systems is unquenchable, the management demand to hold or even reduce capital spend is equally powerful.

The result is that a ‘make do and mend’ approach to ICT delivery is commonplace in many education establishments. From primary schools through to universities, ICT teams are desperately squeezing every last drop of capability and longevity from their current systems. Meanwhile the role of ICT within student education is increasingly seen as mainstream, and as a focus area for both Ofsted and QAA inspectors.

For example, Ofsted’s Embedding IT in Schools (2005) report found that “many pupils did not have sufficient access to computers to support their learning across the curriculum on a regular basis.  Flexible deployment of resources was the key to success in secondary schools, rather than overinvestment in bookable ICT rooms.”

There Is A Way …

There is a way out of this Catch-22.  And that is to rethink how ICT reaches the end-users and how such a change will impact the way that the budget is constructed.

Those of us with long memories will remember the days before PCs, when mainframe computing was the main way of delivering ICT. The approach was very simple: all of the software and data lived on the mainframe, which was cared for by a dedicated team of professionals. The users were equipped with ‘dumb’ terminals that only allowed them to access the software on the mainframe.

Since then the ICT industry has given us PCs and local servers, which have devolved a lot of the software and data to the office and desktop level. At the time this was necessary because network speeds weren’t fast enough to support highly graphical applications on mainframes or other central servers. A by-product was the user’s feeling of autonomy … which has since been countered by the organisation’s feeling of lost control!

Spin the clock forward to now and we are in a very different world of ICT.  Networks are very fast, very reliable and relatively low cost, whereas PCs and their many individual copies of software have turned out to be quite an expensive way to deliver end-user services.

The opportunity therefore is to evolve backwards to the mainframe model, but using today’s technology. The ‘mainframe’ will be powerful Windows or Linux servers that are centrally managed; the software will be the same as currently used, but hosted on the servers rather than distributed to desktops; the ‘dumb’ terminals will be low-spec’d (or even diskless) PCs that need minimal maintenance and have 6-8 year life-spans; and the network joining it all together will be the Internet.

... With Strong Benefits

The benefits on offer with this approach are massive. First off, there are the cost reduction opportunities. Desktop devices are now very low cost, low maintenance and have much longer life-spans. The software, by being centrally hosted, can be licensed on a metered basis so that you only pay for what you use (especially useful if your user community changes size every academic year), even including Microsoft Office and other ‘personal productivity’ applications. And maintenance and support costs are reduced because the core of the computing is done by the central servers alone, so backups are easier to manage, as is patching and upgrading of operating systems and applications.

Secondly, there are service improvement benefits to be had. Because the connection technology is the internet, end-users receive an identical service (including personalised access to the same applications in the same way) whether they are onsite or offsite … securely. System and application upgrades are performed centrally with no user disruption; similarly, new services can be easily introduced to the whole user community in one go. Desktop devices can be quickly swapped out on failure with no loss of user data, and user activity can be accurately measured in order to better plan for future needs.

Lastly, there is a significant budget management opportunity – with this approach in place it no longer matters to the end-users whether the central servers (and the software they run) are owned and operated by the organisation or by a third party. Provided the service level is agreed and delivered, the physical location and legal ownership of the equipment and software are irrelevant – the internet reaches all end-users equally. This allows the organisation to ‘rent’ their core ICT service from a professional supplier, which in turn moves the accounting from the capital budget to the operating budget. The whole service could be priced on a ‘per user per month’ basis that will vary down as well as up as the user count changes.
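
As a worked illustration, the short sketch below prices a year of service at an assumed ‘per user per month’ rate. The rate, term lengths and user counts are hypothetical, not a quotation.

    PER_USER_PER_MONTH = 6.50   # assumed rate in GBP; illustrative only

    def annual_ict_cost(users_per_term, months_per_term=4):
        """Operating cost for a year in which the user count changes each term."""
        return sum(users * PER_USER_PER_MONTH * months_per_term
                   for users in users_per_term)

    # Example: a college whose roll changes each term.
    # The bill tracks the user count exactly, down as well as up.
    print(annual_ict_cost([950, 1020, 980]))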

And a ‘per user’ pricing model is about as predictable as you can get for an ICT budget.  All achieved without compromising on functionality, application availability, performance or security.  Plus the opportunity to take much of the ICT spend away from the capital budget.

It’s definitely a way forward that’s worth looking into.