Special reports - What's in store? - November 1999
With data volumes growing faster than ever, Eric Doyle investigates the latest developments in integrated storage management.

Data storage and its management have always been a nightmare, and life does not seem to be getting any easier as the range of storage systems grows. Not so long ago storage management was a relatively simple affair and everyone knew the grandfather-father-son tape rotation by heart. Compared to disk, tape was a much cheaper, more reliable long-term storage option, and capacities grew more or less in step with disk sizes. Finding time to do the backups may have been hell, but the principles were easy to grasp.
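For anyone who never had to live with it, the rotation is simple enough to express in a few lines of Python. This is only an illustration - the tape labels and the choice of Friday for the weekly full are assumptions, not a description of any particular product:

    from datetime import date, timedelta

    def gfs_tape_for(day):
        """Pick tonight's tape under a grandfather-father-son rotation:
        a monthly full on the last Friday of the month, a weekly full on
        other Fridays, and daily tapes (reused each week) otherwise."""
        if day.weekday() == 4:                              # Friday
            if (day + timedelta(days=7)).month != day.month:
                return f"grandfather-{day:%Y-%m}"           # monthly, kept long-term
            return f"father-week{(day.day - 1) // 7 + 1}"   # weekly, reused monthly
        return f"son-{day:%A}"                              # daily, reused weekly

    print(gfs_tape_for(date(1999, 11, 26)))                 # -> grandfather-1999-11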
Did you know?

The data storage market is expected to grow to $87 billion in 2000, a rise of $20 billion from 1998.

IDC says the DLT market will grow by 30-40% each year for the next three to four years. Source: Quantum | ATL

New technologies, new issues

Suddenly, we've seen the advent of laser storage technologies, massively higher-capacity disk drives and the accompanying technologies of RAID, clustering and Storage Area Networks (SANs). As if this were not enough to handle, there are new issues to consider: the intranet/e-commerce data boom, tape capacities trailing in the wake of the new disk systems, and the growing need for replication of data stores. Even the mechanics of tape backup have become more complex, with Hierarchical Storage Management (HSM) lurching out from the shadowy world it initially inhabited.

Chris Boorman, marketing director at Veritas, summed up the current situation. "We've become addicted to our computers. We can no longer do business without them. The amount of data that we're storing is going through the roof. These machines are not infallible, so we have to back up the data, but we also have to manage it more effectively than we did before. Backup is just one component of storage management: managing disks and tapes, managing where and how the information is stored, how we access it and ensuring that we can always access it are all part of this picture."

Now we have a new phrase to deal with: integrated storage management. According to Boorman: "Our definition of integrated storage management is being able to manage the entire storage environment and integrating management activities in such a way that you gain distinct business benefits." The bottom line reads: the efficient management and protection of digital information and applications, wherever and however they may be stored. A key technology in the future will be the SAN, with its parallel Fibre Channel network enabling data to be moved to where it is most needed, or to replication and backup stores.

Back it up


Backing up data has always been the preserve of creatures of the night (or of the weekend, at least). The extra traffic generated by this housekeeping often reached such volumes that it could not be undertaken during business hours. As business operations have crept towards the round-the-clock (24x7) levels we see within corporates today, data management has been squeezed into ever smaller time windows. This has forced an incredible degree of efficiency into the way data is managed, and much of that efficiency will underpin the new integrated storage management scene.

Boorman explained, "Traditionally, backup software has simply taken data off disk in a relatively coarse manner. For example, if a file changes I can do a full backup and take copies of all the files on disk or, more usually, do an incremental backup of only those files that have changed. In today's computing environment, where files are getting bigger and bigger, identifying a file that has changed and simply backing the whole thing up is a ridiculous way of operating."

What the leading backup companies have done is build intelligence into their software that examines a changed file and backs up only the blocks that have changed. "In large database environments," Boorman said, "this block-level incremental backup concept is quite profound because you may have a file that is gigabytes in size but has only had a few blocks changed."
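In code, the technique Boorman describes boils down to hashing fixed-size blocks and keeping only those whose digests differ from the last run. The Python sketch below is purely illustrative - the 64KB block size and SHA-1 digests are assumptions, not details of any vendor's product:

    import hashlib

    BLOCK_SIZE = 64 * 1024   # 64KB blocks; real products tune this

    def block_hashes(path):
        """Digest every fixed-size block of a file; kept from run to run."""
        hashes = []
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                hashes.append(hashlib.sha1(block).digest())
        return hashes

    def changed_blocks(path, previous):
        """Return only the blocks whose digest differs from the last backup,
        keyed by block index - this delta is all that goes to tape."""
        delta, index = {}, 0
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                digest = hashlib.sha1(block).digest()
                if index >= len(previous) or previous[index] != digest:
                    delta[index] = block
                index += 1
        return delta

For a multi-gigabyte database file in which only a few blocks have changed, the delta - and with it the backup window - shrinks by orders of magnitude.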

New possibilities


Another approach is snapshot technology, where a secondary server is used to take a virtual copy of its partner's resources. It is a form of fault-tolerant clustering, but instead of the standby server lying idle it is linked to backup hardware. Chris Lentz, European marketing manager responsible for Legato's high availability division, explained that SnapShot Server was developed by his former company, Vinca, which was recently acquired by Legato. "By providing a virtual copy of the primary server, SnapShot Server provides a way for customers to take a backup of a defined volume without shutting down the real volume. The system was developed for Novell NetWare but is now also available on NT."
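SnapShot Server's internals are not documented here, but the copy-on-write trick common to snapshot products is easy to model. In this toy Python version (every name in it is invented for illustration), taking a snapshot is free; the old contents of a block are preserved only at the moment the live volume overwrites it:

    class Volume:
        """A toy block device with copy-on-write snapshots."""
        def __init__(self, blocks):
            self.blocks = list(blocks)
            self.snapshots = []                    # each: {block_index: old_data}

        def snapshot(self):
            """Freeze a virtual copy; costs nothing until writes occur."""
            frozen = {}
            self.snapshots.append(frozen)
            return frozen

        def write(self, index, data):
            """Preserve the old block in any open snapshot that has not yet
            saved it, then let the live write proceed."""
            for snap in self.snapshots:
                snap.setdefault(index, self.blocks[index])
            self.blocks[index] = data

        def read_snapshot(self, snap, index):
            """The backup agent reads the frozen view while users write on."""
            return snap.get(index, self.blocks[index])

    vol = Volume([b"A", b"B", b"C"])
    snap = vol.snapshot()
    vol.write(1, b"B2")                            # the live volume moves on...
    assert vol.read_snapshot(snap, 1) == b"B"      # ...the snapshot still sees B

This is why the backup can run against the snapshot without shutting down the real volume: the frozen view stays internally consistent no matter how many writes arrive while the backup runs.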

The product takes its place in the company's latest initiative, Legato Continuum. This is the broader picture that shows Legato, like Veritas, has aspirations in the integrated data management field. The aim is to provide a single management suite that is multi-platform and intelligent and complies with open standards. Lentz described the concept behind Legato's Continuum strategy: "Legato's concept of Information Continuance has tape backup at one end, moves up through off-site replication, high availability clustering, with more fault tolerant applications such as SAN architectures at the top end."

It is the last category that is currently attracting industry attention as the rise of the SAN opens new possibilities. Ray Rice, business manager with CMS Peripherals, said that there are plenty of examples showing why standards are needed. "You can't be sure that it will work if you buy company A's RAID system, company B's tape drive and company C's fibre switches. Currently, switched fibre fabrics are largely sold by companies who have certified that A, B and C will work together. It reminds me of the early days of SCSI products," he commented.

Several companies are pushing ahead with the development of an industry standard based on the SCSI Extended Copy specification, put forward by the EMC-initiated FibreAlliance to the Storage Networking Industry Association (SNIA). Other members of the FibreAlliance include Computer Associates, Hewlett-Packard, Compaq, Veritas and Legato. CA is the first to break cover, announcing the Storage Area Network Integrated Technology Initiative (SANITI).

CA's SANITI, working within the company's Unicenter TNG Framework, promises single-console management and autodiscovery of Fibre Channel hubs, SAN switches and storage devices. Jim Callaghan, CA's product manager for information management products, said, "E-commerce has brought about a resurgence of interest in storage management. Because most of the companies in this area have come to value their data more highly than their physical assets, it has highlighted the need for management to ensure that it's secure and that they can get to it when they need it."

His comments are echoed by Donal Madden, Compaq's storage product manager, who said, "Every customer I speak to is facing the same problems. Data is growing like a weed, particularly in the NT environment, so they're now backing up twice as much data as they were a year ago. Added to this there are remote users ringing in at all times of the day and night. So call centres are opening for longer and there's twice as much data to back up and 70% less time to do it in."
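Madden's figures are worth turning into a number. Assuming, purely for illustration, a 500GB load and an eight-hour window a year ago, twice the data in 70% less time demands nearly seven times the sustained throughput:

    old_data_gb, old_window_h = 500, 8        # assumed baseline, for illustration
    new_data_gb = old_data_gb * 2             # "twice as much data"
    new_window_h = old_window_h * 0.3         # "70% less time to do it in"

    old_rate = old_data_gb / old_window_h     # 62.5GB per hour
    new_rate = new_data_gb / new_window_h     # about 417GB per hour
    print(f"throughput must rise {new_rate / old_rate:.1f}x")   # -> 6.7x

Whatever the starting point, the ratio is the same: 2 divided by 0.3, or roughly 6.7 times the old backup rate.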

In its way, Windows NT has added to the data management problem through its distributed nature. The Distributed File System is helping users come to terms with distributed storage, but there has been little relief for IS managers. Microsoft is well aware of the burgeoning data problem and will go some way towards addressing it with Windows 2000. It is working primarily with Veritas (which acquired Seagate Software last year) to provide a degree of HSM, but there is scepticism over how far this will address the issue. Rice from CMS commented, "I'm by no means a Windows 2000 expert but I think the HSM element will be like NT Backup, which was Seagate's Backup Exec stripped down. It will introduce the user to HSM but probably won't be something they would continue to use. There'll still be plenty of room for ISVs (Independent Software Vendors)."

Storage Area Networking


Andrew Cheeseman, a senior consultant at Microsoft, predicted, "Microsoft is putting a stronger emphasis on backup and recovery. Storage management will provide a single interface to volumes and the way volumes are mounted is going to be more like the mainframe realm. Storage Area Networking is going into overdrive and Windows 2000 is really going into these technologies. A lot of solution providers of SANs have added some very standard interfaces into their boxes to allow you to use volume pools and everything that goes along with SANs in a Windows 2000 environment. You can now build some very high availability solutions using SANs - it's a good complement for Windows 2000."

Boorman at Veritas believes that SANs provide the answer to many of the current problems surrounding data protection and availability. The fact that data storage can be shared across all servers on the SAN means a change in emphasis in the backup and restore arena. "SAN is going to enable you to provide high levels of availability within Windows 2000 or NT in a way you couldn't do before," he explained. "When a server or an application falls over you can move processing across to a similar server and resume the service."

In the realm of backup there will no longer be a need to divert server processing power to pump data out to tape archives, because the SAN will contain its own intelligent transport. This hostless backup regime also has the potential to move data around faster, and the added bonus of leaner block-level backups means that online systems can be backed up constantly throughout the day with zero impact on users. The intelligence will also mean that applications, databases and documents can be treated differently, which could spell the end of separate proprietary backup systems for databases - another advantage of integration.
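No API for this is quoted here, but the division of labour behind hostless (third-party copy) backup can be sketched conceptually. In this hypothetical Python fragment the server only describes what to copy - a few bytes of metadata - while a data mover on the SAN shifts the blocks from disk to tape; the device name and helper functions are all invented:

    CHUNK = 64 * 1024

    def plan_backup(changed_extents):
        """Server side: emit (device, offset, length) descriptors - pure
        metadata, so no backup data crosses the server at all."""
        return [("disk-lun-7", offset, length)
                for offset, length in changed_extents]

    def data_mover(extents, read_block, write_tape):
        """SAN side: walk the extent list and copy disk-to-tape directly,
        sparing the server's CPU, bus and LAN connection."""
        for device, offset, length in extents:
            done = 0
            while done < length:
                n = min(CHUNK, length - done)
                write_tape(read_block(device, offset + done, n))
                done += n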

Addressing concerns


One issue that will have to be addressed within Windows 2000 is the unsociable NT 4.0 habit of grabbing for itself every bit of storage that it sees. Rice pointed out that on a SAN this would be disastrous: "Unix has much better volume mapping. There are some patches available for NT 4.0 but it is very much a patch. Windows 2000 has to come up with something better."
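Rice's point about volume mapping comes down to filtering what each host is allowed to see. A conceptual Python sketch, with every host and volume name invented for illustration:

    ASSIGNMENTS = {
        "nt-server-1": {"lun-0", "lun-1"},
        "unix-db-1": {"lun-2", "lun-3", "lun-4"},
    }

    def visible_luns(host, all_luns):
        """Filter the SAN's device list before a host ever sees it, so an
        NT box cannot grab volumes that belong to another server."""
        allowed = ASSIGNMENTS.get(host, set())
        return [lun for lun in all_luns if lun in allowed]

    print(visible_luns("nt-server-1", ["lun-0", "lun-1", "lun-2"]))
    # -> ['lun-0', 'lun-1']

Applied in the fabric or the storage array rather than trusted to the operating system, mapping of this kind is what keeps an over-eager host from causing havoc on a shared SAN.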

The data stored on corporate servers and desktop systems may be well served by the SAN, but what about the growing mobile workforce? The rule in the past has been either to bring notebooks into the LAN environment for backing up or to transfer as little data as possible, as quickly as possible. In some cases, users have had to be allocated specific times to log in to the corporate network to avoid exacerbating the end-of-day/early-morning network traffic rush hour. The bandwidth provided by Fibre Channel will ease these concerns, but remote users will still need access to the applications and data stored on the user network. It is possible, though not yet apparent, that a new breed of connection will let the user forget about backing up because it will be done through the SAN while they are logged onto the in-house network.

Fibre Channel promises to be the analgesic that relieves the data management headache. Even if there are still those who need convincing of the need for backups, the Y2K threat will probably change their attitude: if systems fall over at the end of this year, a backup may be the only way to ensure that no data is lost - assuming, of course, that the backup system is itself Y2K proof.

To be truly integrated, data management must work everywhere, with any device. In short, backing up data should be a no-brainer: a background process that only becomes apparent when something goes seriously wrong. If an application becomes corrupted and falls over, the management software should see that the service and its users are switched to another server. Meanwhile, the failed application should be automatically rebuilt so that the server can be brought back online as quickly as possible once the cause of the crash has been found and fixed. If a drive fails, a hot-swap spare should be replenished with data, whether or not it sits in a RAID array. All of this on top of the background noise of constant data backup.

At the end of the last century, wealthy families employed servants who had their own quarters, linked to the rooms of the house by a back stairs. This let them move from place to place doing the housekeeping, unseen by the family. At the end of this century, the SAN is providing a similar backstairs system, letting IS managers do their data "housekeeping" invisibly, in a more integrated and better managed environment.