Understanding the Three Technical Directions of Virtual Storage
In general, implementations of storage virtualization fall into three types: virtualization in the switching fabric, virtualization in the disk array, and virtualization built into a dedicated appliance. For each of these approaches, storage vendors have their own weapons. IBM took an early lead when it introduced the SVC (SAN Volume Controller) two years ago. Last year, HDS (Hitachi Data Systems) followed with the TagmaStore Universal Storage Platform (USP), a disk-array-based solution. And in recent months, EMC announced Invista, a network storage virtualization solution built on the storage switch.
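Whatever the placement, all three approaches rest on the same basic mechanism: a mapping table that translates a virtual volume's block addresses onto extents of physical LUNs on backend arrays; the camps differ mainly in where that table lives (switch fabric, array controller, or standalone appliance). The Python sketch below is purely illustrative, with every class and name invented for the example rather than taken from any vendor's product:

```python
# Minimal, hypothetical sketch of the block-remapping table at the heart of
# all three approaches; only WHERE it runs differs (switch, array, appliance).
from dataclasses import dataclass

@dataclass
class Extent:
    lun: str      # identifier of a physical LUN on a backend array
    offset: int   # starting block on that LUN
    length: int   # number of blocks in this extent

class VirtualVolume:
    """One contiguous virtual block address space stitched from extents."""
    def __init__(self, extents):
        self.extents = extents

    def resolve(self, vblock: int):
        """Translate a virtual block number to (lun, physical block)."""
        base = 0
        for ext in self.extents:
            if vblock < base + ext.length:
                return ext.lun, ext.offset + (vblock - base)
            base += ext.length
        raise ValueError("virtual block out of range")

# A 2000-block virtual volume spanning two physical arrays:
vol = VirtualVolume([Extent("array-A/lun0", 0, 1000),
                     Extent("array-B/lun3", 500, 1000)])
print(vol.resolve(1500))  # -> ('array-B/lun3', 1000)
```

Whether this table runs in an Invista-style switch, a USP controller, or an SVC appliance, the host sees only the virtual volume.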
So which technology, and which vendor's solution, is best? Which will emerge as the final winner of the storage virtualization competition? A growing number of experts now believe there will be no single winner; rather, the three technologies will converge as the boundaries between them blur. If we lay out the vendors and their respective virtualization technologies, each of the three camps has its representatives. The appliance camp includes IBM's SVC, StorAge, Network Appliance, and DataCore. In the disk array camp, HDS, Sun, HP, and Acopia provide a diverse set of architectures. The switch camp includes EMC's Invista, McData, Brocade, QLogic, and Cisco.
However, McData, Brocade, Cisco, and other companies have made a series of acquisitions and partnerships around Fibre Channel-based virtualization, and the dividing lines between the solution types are blurring. Some vendors in the other two camps are also gradually moving beyond their original territory, even if they have not yet fully crossed the boundary.
Owing to questions about virtualization performance, application flexibility, and the virtualization engine itself, early advocates of the appliance and disk array camps have faced wide skepticism in the industry. Early virtual storage implementations relied on distributed solutions built from existing components, or on port-based processing engines, to provide the required functionality. Appliance-based virtualization is considered the easiest to configure, but it often carries application limitations. Some vendors therefore favor switch-based virtualization, arguing that intelligent SAN virtualization processing components are the model for the next generation of virtual storage.
HDS has leveled similar criticisms at both appliance and network switch virtualization. HDS positions its Universal Storage Platform (USP) as a storage controller that deploys virtualization at the edge of the storage network, rather than in a switch at the network core or in an appliance on the host, and argues that this is the best location for both performance and security.
NetApp, a firm supporter of appliance-based virtualization, believes that virtualizing storage on a device in the storage network is the best solution. A NetApp spokesperson explained that an appliance in the storage network gives customers the greatest flexibility: it does not lock them into a disk array solution such as the TagmaStore Universal Storage Platform, it avoids the cost of the client code that host-based virtualization requires, and the appliance itself can be placed flexibly within the network.
The Virtualization Melee

Although virtualization is promoted everywhere, adoption of virtual storage technology in the business world remains slow. According to an IDC survey of 269 IT managers at companies of various sizes, only 8% of enterprises are applying any form of virtualization, and on average only 23% of companies said they plan to implement some degree of storage virtualization in the next 12 months.
Mid-range storage users mainly expect virtualization to handle data migration and reduce the management burden, while large enterprises mainly expect to use the data replication and volume management in virtual storage for storage provisioning. Whichever camp a vendor belongs to, it faces different pressures and must prove itself in real-world environments.
For now, no one has a firm hold on the market. IBM appears to have the highest sales so far, but holds only a narrow lead. According to Steve Duplessie, senior analyst and founder of the Enterprise Strategy Group (ESG), IBM has sold more than 1,500 SVC systems, a figure also confirmed by a British research firm.
Cisco Systems recently acquired Topspin, giving it the ability to connect server virtualization, storage virtualization, and network virtualization. Topspin's core virtualization technology brings Cisco a wealth of technical assets, and if Cisco chooses to fully exploit these virtualization capabilities, the results are bound to be noticeable.
For all its achievements and stature, Cisco is still a relatively new player in storage. Its challenge is that the intellectual property for core storage functions, such as data replication and storage provisioning, sits in the hands of the storage vendors. To gain an advantage, Cisco must not only develop and market its own products but also strengthen cooperation and communication with these mainstream storage vendors.
There is also a low-profile force in this contest: Microsoft. Over the past two years, Microsoft has quietly built itself into a serious storage player and has recently cleared away licensing issues that had impeded its virtualization work. Microsoft may be a little late to this melee, but given its dominant position in software, it could well produce some surprising technology, perhaps by turning virtualization into a part of the server operating system.
The Direction of Future Virtualization

Just as the boundaries between the categories of virtual storage are increasingly blurred, so too is the boundary between storage virtualization and server virtualization. Beyond Microsoft's efforts with Windows Storage Server 2003, NetApp has added virtualization capabilities to its V-Series (formerly Filer series) arrays through the Data ONTAP operating system.
Virtualization software is becoming more dynamic and more complete, and its development increasingly resembles that of a full operating system. Industry insiders now recognize that arguing over whether virtualization belongs in the switch, the disk array, or the appliance is pointless: future virtualization will be delivered through all of these technologies and then unified by a single overarching virtualization layer.
Virtualization means adding a management layer that activates a resource and makes it easier to control transparently. Looking back at today's operating systems ten years from now, we may be surprised by how dramatically the field has changed: the virtualized operating system of the future should be a highly distributed, enterprise-class operating system.
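To make the idea of a transparent management layer concrete, here is a minimal, hypothetical sketch (all names invented for illustration): consumers hold a stable virtual handle, and the layer is free to retarget what backs it, which is exactly what makes operations such as data migration transparent:

```python
class ManagementLayer:
    """Maps stable virtual handles to whatever physical resource backs them."""
    def __init__(self):
        self._backing = {}  # virtual handle -> physical resource

    def provision(self, handle, physical):
        self._backing[handle] = physical

    def migrate(self, handle, new_physical):
        # Real systems copy data and switch over carefully; the key point
        # is that the consumer-visible handle never changes.
        self._backing[handle] = new_physical

    def locate(self, handle):
        return self._backing[handle]

layer = ManagementLayer()
layer.provision("vol7", "array-A/lun2")
layer.migrate("vol7", "array-B/lun5")  # transparent to consumers of "vol7"
print(layer.locate("vol7"))            # -> array-B/lun5
```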
Looking further ahead, virtualization may evolve into an element of a distributed operating system that spans servers, networks, and storage devices; all three forms of virtualization are receiving attention. Yet any one of the three can cause trouble for the others. With server virtualization, for example, some early projects ran into problems with storage addressing and other advanced storage management features. For virtualization to operate normally, server virtualization must improve its virtual storage capabilities, or storage will become the obstacle.
Similarly, network equipment or storage switches can use intelligent packet inspection to understand the nature of the data being moved and decide how best to deliver or store it, but this intelligence only goes so far. The network may recognize that a data stream is a JPEG file, yet it has no way to distinguish a radiograph from a pornographic photo. Likewise, a virtual storage or virtual server pool can only reason about data and workloads up to a point: when other processes are sleeping, the server pool may allocate extra resources to a running process, but it cannot make even the basic distinction between a payroll run and a denial-of-service attack against the server.
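That limit is easy to demonstrate. The toy sketch below (purely illustrative) does roughly what byte-level inspection can do, namely recognize a JPEG stream from its magic bytes; nothing in those bytes says what the image actually depicts:

```python
def classify(stream: bytes) -> str:
    """Identify a stream's format from magic bytes; semantics stay opaque."""
    if stream[:3] == b"\xff\xd8\xff":
        return "JPEG image (subject matter unknowable from bytes alone)"
    if stream[:4] == b"%PDF":
        return "PDF document (contents unknowable from bytes alone)"
    return "unrecognized format"

print(classify(b"\xff\xd8\xff\xe0" + b"\x00" * 16))  # -> JPEG image ...
```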
Industry experts therefore believe it is essential to consider virtualization across all three domains, and to integrate management tools that understand the needs of the application layer and make virtualization decisions accordingly. There is, however, still a long way to go before that vision is realized.